There are a few reasons for taking pictures. Firstly: because one wants a record of a time, place or thing. Into this category falls everything from snapshots to incredible works of art.
Secondly: one wants to convey a message. Again, this runs the gamut from snapshots (my child is cute) to intense statements of intended meaning (Piss Christ).
Thirdly (and it can encompass both of the aforementioned): to sell.
It’s the last I’m going to deal with now, and specifically with the teasers for easy sales; known, by and large, as “Royalty Free Stock”. Right off the bat, making a living with stock photography is difficult. It’s work. Those who do it make it a career, devoting time, effort, talent and training to it. It’s “passive” income only in that the successful stock photographer can spend most of her time making photos, rather than flogging her work.
Why? Because there is someone else being paid to do it. Either an agency (the more common method), or an employee. Agencies are more common because there’s a lot of work in managing a stock portfolio. Things have to be categorized, cross referenced, described, priced (on which more in a moment) and sales have to be recorded.
Clients have to be solicited, and billed and chivvied to return slides.
It’s work, which is why the agency takes a hefty cut. Odds are, if you have enough images to make an employee useful, they will cost as much as the agency did. There are photographers who (usually with a spouse) manage their own stock, but that’s time they spend doing business, not shooting.
What’s a good price for a stock image? That, as they say, depends. Is it a one time use? How long is it going to be tied up? (Most customers don’t want to have a dozen other people using the image they just paid for, so they will ask for a piece to be removed from availability for a while. That’s time you aren’t able to sell it.) What’s the market?
A shot sold for use in a full-page ad in Time Magazine will be priced higher than one used as an illo for the alumni annual. One, they have the budget; two, it’s a bigger publication. If they hired someone to take the photo they’d be paying a pretty penny. Well, they’ve decided you took a photo as good as the one they could hire; they should expect to pay the going rate.
All of which an agency has practice doing, and explaining. It’s why they take 40-60 percent of the price. But you don’t have to do that figuring; don’t have to send the images, the invoices, the reminders, etc. They do that, and they send you a check.
They also tend to specialize, which means people who want pictures of birds, landscapes, machine parts, cars, people walking on the beach, kayaking, mountain climbing, picnicking, etc., know which agencies to call. If you want to do your own marketing (and keep all the money), you will have to find customers, and convince them you have what they need.
It also means having a lot of pictures. Initial submissions to agencies are in the 100-250 range. After that they tend to want new pictures on a steady basis. Because old customers want new pictures, and new customers may not want things which have been used often.
Not that having an agency removes all need for record keeping. You have to keep track of what you have with which agent(s). It’s not just bad form to submit the same images to multiple agencies; it’s usually a contractual violation. You will need releases for models, and sometimes for property (“The Bean” in Chicago is copyrighted, and in theory you can’t sell an image of it without the City of Chicago’s permission. Lots of places have tried the same thing. It is, at best, a questionable legal theory, because the image is what’s copyrighted, not the thing; when you take an image, you get the copyright to it. The argument the companies make is basically, “it’s been done; all a new photograph does is repeat something already in copyright”. As I understand it (I am not a lawyer) this is a specious argument, but the people who make it tend to have money, which can be used to chilling effect.)
So what of, “Royalty Free Stock Agencies” (which is a misnomer; stock isn’t sold on a royalty basis, but on a use basis)? They are, from the photographer’s point of view, a scam. You upload images, and for a flat fee anyone can download an image, and use it for whatever they want. The photographer gets between $.25 and $1.50 for it.
Whether it’s the alumni annual or a company which wants a full-page ad in Time for a month makes no real difference to the photographer: she gets her pittance, and they get to use it. It may be the one-month run has to pay four times, but that’s still nothing compared to what a real stock agency would get (the alumni mag might be as little as $20 to the photographer; Time might be as much as $80,000; that’s a pretty big gap).
The other question is payout. The agency will pay quarterly. It’s possible they will have a minimum, but if/when you cancel the contract, they will pay in full. The “royalty free” places tend to pay when a threshold is reached. The better ones pay at the $25 level, but most of the ones I’ve looked at set the bar at $100. That’s a lot of $.25 sales. Some of them pay 30-90 days after the threshold is reached. Once the balance falls below the minimum, one has to wait until it’s more than the minimum again to get paid.
If you decide to call it quits, and cash out, they often have a processing fee for cutting the check. That’s 10-25 dollars raked off the top before you see a penny.
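To make the arithmetic concrete, a quick sketch, using the illustrative figures above rather than any particular site’s actual terms:

```python
# Illustrative figures from the discussion above, not any site's real terms.
per_sale = 0.25          # low-end payout per download, in dollars
threshold = 100.00       # payout threshold at the stingier places
processing_fee = 25.00   # high-end fee for cashing out early

sales_needed = threshold / per_sale
print(sales_needed)      # 400.0 -- four hundred downloads just to reach the threshold

# And if you quit and cash out a balance right at the threshold:
net = threshold - processing_fee
print(net)               # 75.0 -- a quarter of the check gone to the fee
```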
Which is great for them. The odds of any single photographer getting across the thresholds are low (because it takes a lot of people buying a lot of uses to get to even the lower thresholds). Most people don’t have that many photos in the right sorts of categories (or they don’t have model releases, etc.). But the client is paying.
That’s cash in hand (the photos aren’t being sold for 50 cents; they are selling for between $1.00 and I don’t know what). The best I recall was a sliding scale, in which one got more money for more sales in a given period. Reading between the lines I’m guessing they are charging the customer 5-25 dollars a use, and passing on $.35-2.00 to the artist.
So they have a revenue stream and a largish pot of money they are supposed to be holding in escrow. That’s a free float of interest on the combined money. Probably also a pretty penny for the going concerns (and who knows how many of these are under-capitalised/ill-run, and prone to bankruptcy ... who then will pay the outstanding accounts?).
If you want to do stock, do it right; get an agency, or several.
Filters used to be part and parcel of doing photography. Color correction was a big deal. It’s still a big deal. We used Wratten filters to adjust for lights, and lighting. When one goes deep into a canyon to shoot a waterfall, the light’s not “white”. The mind’s eye automatically corrects. The camera can correct too.
So we don’t carry 81A, or 81B, no more magenta to attempt correction for the green shifts in fluorescent lights. The kit is smaller, and lighter and that’s pretty good. If you make a mistake (shooting cloudy for sun, or flash for tungsten) you can correct in Lightroom, or Photoshop, or LightZone; whatever your preference happens to be.
This doesn’t mean filters are dead. Forget the “effects” filters, such as stars, and rainbows; those are what they are, and if they are useful to you, there is no replacement. But others of the “basic” filters from days of yore are still as useful as they ever were.
Why? What is it they provide which the applications don’t?
Let’s look under the hood, again. Cameras trap light. How they trap light is the difference between one and another. Back in the “old days” the big difference was the lenses, and accessories. The medium in which the light was caught was the same from camera to camera. Film was the medium, and it was continuous.
When the process of photography was first discovered the films weren’t sensitive to the full spectrum of light. As time went on, the chemistry of emulsions was improved, from blue-sensitive, to orthochromatic, to panchromatic (adding green and red, respectively), until the entire spectrum was there. The layering of the emulsions meant every square nanometer was sensitive to all the light.
Digital cameras are not continuous. Each pixel is sensitive to one band of the spectrum: red, green, or blue (unless you have a Sigma, with its Foveon sensor, but that’s not relevant here). Part of the processing time from the shutter to “done writing” is the math to convert the pixel colors to visual colors.
B&W has always been a bit different, yet again. Once the tricks of collecting the entire spectrum were solved, the depth and contrast of the entire scene could be captured (look at early photos, and part of the softness is the lenses, and some of it is because the emulsions couldn’t catch all the light). If a bit of light hits the emulsion, it activates a bit of silver. The more light, the more silver is activated, and the lighter that area will become.
Getting to B&W with digital is a lot more complicated. First a map of individual spots of RGB has to be made. Then it has to be converted to color, then that color map has to be reduced to a grayscale image. Instead of a direct relationship, we have an approximation.
But some colors are about as reflective as other colors. The raw quantity of light they reflect is about the same, which means they record much the same on the film. To fix that, to make one thing darker than the other, we use filters. Because a filter transmits light of its own color, and absorbs the rest, a red filter will make red subjects lighter. It also has the effect of making the opposite side of the spectrum (i.e. green, with a red filter) darker (I think this is because it absorbs the lesser amount of light being reflected from the opposite side of the spectrum. In this regard it parallels the effect of a polarising filter).
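A rough way to see the effect in code. This is a sketch, not how any converter actually works: the luminance weights are the common Rec. 601 values, and the “filter” is a crude channel re-weighting made up for illustration.

```python
def to_gray(r, g, b):
    """Plain luminance conversion (Rec. 601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def red_filtered_gray(r, g, b):
    """Crude stand-in for a red filter: pass red, absorb most of
    green and blue, then convert the result to gray."""
    return to_gray(r, 0.2 * g, 0.1 * b)

red_subject = (210, 70, 70)    # a red barn, say
green_subject = (70, 150, 70)  # foliage of similar overall brightness

# Unfiltered, the two record at similar gray values; through the
# "red filter" the red subject stays much lighter while the green
# one goes dark.
print(to_gray(*red_subject), to_gray(*green_subject))
print(red_filtered_gray(*red_subject), red_filtered_gray(*green_subject))
```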
Will this work, when one takes all the steps involved into account? Will the effect be worth the trouble? You have to judge that for yourself. I shot a series of a tree stump with a series of filters: Red, Green, Blue and Orange, plus a Yellow Polariser, which I added to each of the other four.
The first shot was done in color, and converted to B&W in LightZone. The only other thing done was a moderate sharpening. I did exactly the same thing to all the rest.
As shot, simple conversion.
With a Blue Filter
This one with an Orange.
There are differences. (It’s easier to see them if you open them in tabs, and cycle from tab to tab.) If we look at the original image, we can see the oranges, browns, white, grays, greens, and blues. We can also see those areas in the grays of the monochrome images. The different filters restrict different wavelengths of light, which gives the variation in the final product.
If you want to do it, what do you need? Filters. Either the “ring” type, which screw into the front of the lens, or the “square” type, which slide into a holder (which is attached to the front of the lens in some way). I prefer the square type, because 1: They are cheaper (without any real difference in quality), and 2: most systems (I use Cokin) allow for “stacking” them, and 3: they allow for the use of “graduated” filters.
The first thing to remember is that, as with all filters, they steal light. In terms of metering this doesn’t mean much, but darker filters (such as red) will require either a faster ISO, or a tripod.
The second thing (which becomes obvious the moment you look through the lens) is that things look different.
Becomes this one
That’s what red, with a yellow polariser, looks like, without correction. The details are hidden. This is another of the reasons I like square filters. I can frame the scene before I put the filter in place. Focusing through dark filters is problematic. Either set the focus first, or trust the autofocus.
When you process it, it becomes:
All the detail is still there; even if it’s too compressed into the red spectrum for the human eye to resolve, the sensor (and the film) can record it.
The last thing to keep in mind is the meter. Autofocus won’t usually be fooled by the filter, but the meter will usually underexpose the image. I don’t know why (logically the expectation would be that the darker scene will be overexposed, but experience tells me this is not the case). It’s going to be some trial and error, because each meter is different, and there is no way to know how much it’s been fooled until you look at the conversion to black and white.
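Whatever the source of the loss (the filter’s own factor, or a fooled meter), compensation works in stops. A sketch, assuming a three-stop cost for a deep red filter (a typical ballpark, not any particular filter’s spec):

```python
def compensated_shutter(base_speed, stops_lost):
    """Each stop of light lost doubles the required exposure time.
    base_speed is in seconds, e.g. 1/250."""
    return base_speed * (2 ** stops_lost)

metered = 1 / 250                           # shutter speed without the filter
with_red = compensated_shutter(metered, 3)  # assumed 3-stop red filter
print(1 / with_red)   # about 31, i.e. roughly 1/30 s -- tripod territory
```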
Continuous, when speaking of film, means the tones are evenly graded, from the blackest color the silver in the paper can produce, to the brightest white. Since the way in which the silver interferes with the clear flow of light allows for some refractive bending, there are no sudden shifts. This is why photos (even color ones) photocopy so badly; they are being converted from a continuous medium to a non-continuous one. It took the discovery of half-tone conversions to make it possible to use photographs in newspapers, which is why the US Civil War was illustrated with drawings.
There are a lot of tips, tricks and hints about photography. Some of them are so right they are truisms (a tripod will increase the sharpness of your photos). Some are so wrong their persistence defies my ability to understand (the camera never lies).
Which still leaves some stuff which falls in the category of, “It ain’t what you don’t know; it’s what you know that ain’t so that gets you in trouble.” There are a lot of those. Most are, usually, harmless. The problems arise when the edge cases, where the small failures matter, actually come into play.
Take depth of field. The “rule” is that longer lenses have shallower depth of field. It ain’t so. Depth of field is a function of ratio. At a given number of focal lengths the depth of field will be “x”. I’ll simplify the numbers. Take a 50mm lens. Assume that at a focal distance of 10 x FL the depth of field is 12”. So at 500mm from the front of the lens there will be a 12” depth of field. With a 100mm lens the same DoF won’t be had until 1000mm.
The apparent effect is, at a given distance the shorter lens has a greater DoF. Since it’s also got a wider field of view, it also seems to have more things in the details, which fools the mind into thinking the wider lens has more resolving power. If you take the same picture, from the same relative distances, and compare them, the DoF will be the same.
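One way to check this claim is the standard thin-lens approximation for total depth of field, DoF ≈ 2·N·c·s²/f² (N the f-number, c the circle of confusion, s the subject distance, f the focal length; it holds when s is much larger than f). Shooting the longer lens from proportionally farther away, so the subject is the same size in the frame, gives the same depth of field:

```python
def dof_mm(f, N, c, s):
    """Approximate total depth of field; all lengths in mm.
    Thin-lens approximation: DoF ~ 2*N*c*s^2 / f^2."""
    return 2 * N * c * s ** 2 / f ** 2

# Same f-stop, same framing (the 100mm lens shot from twice as far):
short_lens = dof_mm(f=50, N=8, c=0.03, s=2000)
long_lens = dof_mm(f=100, N=8, c=0.03, s=4000)
print(short_lens, long_lens)  # identical: about 768 mm each
```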
Related is the idea that a smaller aperture = greater sharpness. It’s mostly so. The smallest detail which will be resolved sharply is the same size as the diaphragm opening. If the iris is open to 6mm, items of that size will be sharp. The problem comes in when one tries to enlarge the image. There are a couple of things which affect the way the light behaves. Refraction is the way the glass bends the light to bring everything into focus.
Diffraction is the way light bends when it passes by something solid. Pinhole cameras use nothing but diffraction to focus the light. It’s not the sharpest image, but one can do it (one can test this by curling one’s index finger until only a small point of light comes through. Things will be sharper. It’s most dramatic if one is nearsighted. I can read this page, through my finger, at distances where nothing but a gray blur is visible without glasses).
But it’s blurry. If the lens is stopped down too far, diffraction causes the edges of lines to blur some. At smaller print sizes this isn’t a problem. There is a point at which the further resolving power of stopping down starts to fuzz edges. For most lenses this is about one stop smaller than the middle of the range. It will vary, depending on the resolving power of the lens. Macro (and copy) lenses will do better than most.
If you want to find out when the iris stops improving sharpness for a lens, get a page of print. Set it up at a reasonable distance, and focus on it. Take a series of shots, stopping down as you go. Blow the image up to a large size, and then compare all the exposures at that size. When the letter edges start to diffuse, that’s one stop smaller than the sharpest for your lens. Check out all the stops, see how much it degrades from the best aperture, to the last. Mostly this isn’t a big deal, but if you need maximal sharpness, you need to test the lenses. It’s not the camera which the iris affects, but the lens.
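The size of the diffraction blur can be sketched with the Airy-disk approximation, d ≈ 2.44·λ·N (the 550 nm wavelength is mid-spectrum green, assumed here only for illustration):

```python
WAVELENGTH_UM = 0.55   # ~550 nm, middle of the visible spectrum (assumed)

def airy_diameter_um(f_number):
    """Approximate diameter of the diffraction blur spot, in micrometres."""
    return 2.44 * WAVELENGTH_UM * f_number

for stop in (4, 8, 16, 22):
    print(f"f/{stop}: blur spot ~{airy_diameter_um(stop):.1f} um")
# The spot grows with every stop down; once it is bigger than a pixel
# (or the film grain), stopping down further costs more sharpness
# than it buys.
```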
Some things are new. Digital imaging means some of the things we used to do, we don’t need to keep doing. A lot of filtering can be done in edit, instead of in camera. This is, by and large, to the good. Filters introduce more chances for things to go wrong. There are two surfaces to get dirty, or scratched. They might introduce optical problems. Errors in grinding (or casting if it’s optical resins) can create flaws in the image.
Which means we don’t need color correction at capture, and so we lose one more chance to have things go south. If you shoot raw you can fix mistakes (like tungsten for daylight) with a mouse-click. Polarizing, on the other hand, isn’t something which can be done with the computer. Some of the effects can (the increase in saturation) but things like reflection neutralization need to be done before the image is taken. Neutral Density filters are also things to keep in the bag. Best are really long graduated filters, and a holder system (a la Cokin), to let you rotate and slide them.
I will say one other thing about color filters: they are still useful when doing Black and White. You can choose to play with it after the fact, but doing it when the picture is taken is more effective. It’s a technique to use when you plan to render the image in monochrome, because neutralizing the color afterward is hard (it can be done, but some of the data will be lost). Luckily, digital makes it easy to shoot the image twice. It’s time, not money, you have to spend.
Thinking of buying a camera?
The first thing to consider when thinking of buying a camera is what you want to do with it. There’s a bit of difference in what different types/makes/models of camera can do. There’s also a big difference in what they cost. It has been ever thus.
The second thing to realise is that an SLR isn’t always the answer. There’s a lot to be said for the SLR design, but it has its drawbacks. If you are thinking about something which is easy to carry about, and can be dropped in a purse, pocket or knapsack, SLRs are not the camera for you.
Finally, while a poor camera may not capture images well, a “really good” camera won’t really improve them. If you think you need some piece of equipment to take, “good” pictures, you’re wrong. It’s a common line of thinking; pretty much all of us have, at some point, thought that a new lens, or camera, or flash, was what we needed to fix some problem. Once one has a decent camera blaming the tools is counter-productive. Usually the fault lies in ourselves.
What are the concerns? Most commonly talked about today is the sensor density, described in mega-pixels. All things being equal this is a useful measurement. All things are never equal. If the sensors are the same size you can make some general assumptions about the ability of the sensor to more smoothly move from one color to another. It also ought to be better at dealing with contrast. After that things stop being so straightforward.
The first thing to remember is each pixel can only record one color (unless you are looking at a Foveon sensor; the only company using them is Sigma, so we can, pretty much, ignore them for the moment). Each one needs to be filtered, so that only that color can be collected. They also (save for Fuji) only measure two things: the color and the brightness. Fuji has a sensor which has two receptors per pixel, and can measure “vibrance” as well as brightness. They say it gives smoother gradations of contrast, and more accurate color rendition.
Which means a 6 megapixel camera has 6 million different specks of Red, Green, and Blue. By doing some math, and making a grid of groups (say a selection of 5x5 pixels) the color, and intensity, of a spot can be figured out. If the grid is displaced a little, and the math is done again, the color can be mapped a little more accurately. How many times, and what assumptions are made about color will determine the color profile for a given camera.
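A toy version of that math (a real camera’s interpolation is far more sophisticated; this just averages each color’s samples in a 3x3 neighbourhood of an assumed RGGB pattern):

```python
def bayer_color(r, c):
    """Which color an RGGB-patterned sensor records at row r, column c."""
    if r % 2 == 0:
        return "R" if c % 2 == 0 else "G"
    return "G" if c % 2 == 0 else "B"

def demosaic_pixel(mosaic, r, c):
    """Estimate full RGB at (r, c) by averaging each color's samples
    in the surrounding 3x3 neighbourhood."""
    sums = {"R": [0, 0], "G": [0, 0], "B": [0, 0]}
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(mosaic) and 0 <= cc < len(mosaic[0]):
                color = bayer_color(rr, cc)
                sums[color][0] += mosaic[rr][cc]
                sums[color][1] += 1
    return tuple(sums[k][0] / sums[k][1] for k in ("R", "G", "B"))

# A flat gray patch demosaics back to flat gray:
patch = [[100] * 4 for _ in range(4)]
print(demosaic_pixel(patch, 1, 1))  # (100.0, 100.0, 100.0)
```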
Each manufacturer makes different assumptions about what the best color balance is. e.g. Nikon tends to be slightly blue.
You also have to think about how large the sensor is. There are some, “full-sized” sensors on the market. They aren’t cheap. Then come the “digital format” sizes, which are, roughly 25 percent smaller. After that, things get strange. A Coolpix sensor isn’t the same size as a SureShot, isn’t the same as an Elph, isn’t the same as, well you get the picture. All things aren’t equal.
If you want to make prints, larger sensors (not density of pixels) means you can make larger prints before the image starts to degrade. This is as true now as it was when film was the primary medium. That means most really small cameras aren’t ideal. That said, there aren’t any cameras out there which can’t make 4x6 prints of decent quality, which is fine for most applications. How many of us are looking to make 13x19 in. prints? Not many.
Cameras are a means for trapping light. The lens is how the light is corralled. So a good lens is crucial. SLRs have the advantage of interchangeable lenses. The lenses they use also have adjustable irises, which allows for controlling the amount of light, which gives another set of controls.
Compact cameras tend to not have interchangeable lenses. This requires compromises. The smaller sensors allow for smaller lenses. It also makes it easier to move the lens back and forth, which allows the camera to have different focal lengths, and so gives wide-angle, and telephoto options to most of them. This is usually referred to as “optical zoom”. Depending on the maker a camera may come with the option of “supplemental lenses”. These make it possible to take closer, or farther, images. They are, basically, a filter, with all the advantages; and drawbacks, thereto.
Some cameras (and not just cheaper ones, the Nikon D2X is in this category) have a different means of adjusting the image, referred to as “digital zoom”. Mostly, it’s a worthless gimmick. It works by decreasing the number of pixels being used. To get the “longer” focal length the image is cropped in the camera. You can do the same thing in your editing program. The image is not going to be any more (or less) enlargeable if you do it in PhotoShop, but you will get to choose what to crop (the Nikon does the cropping as a side effect. To get a faster frame rate Nikon is reducing the file size. Since it takes a given amount of time to process information, the only way to speed things up was to reduce the amount of information).
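The pixel cost of digital zoom is simple to compute (the 12-megapixel frame below is an assumed example):

```python
def digital_zoom_pixels(width, height, zoom):
    """'Digital zoom' is a crop: a 2x zoom keeps the central half of
    each dimension, so only a quarter of the pixels survive."""
    return (width // zoom) * (height // zoom)

full = 4000 * 3000                            # an assumed 12-megapixel frame
cropped = digital_zoom_pixels(4000, 3000, 2)  # "2x digital zoom"
print(full, cropped)   # 12000000 vs 3000000 -- three quarters discarded
```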
That’s the skinny on the actual image. Once you’ve made a decision on the style of camera (really small, compact, SLR), the thing to do is test them. Assuming all the cameras you are thinking of getting use the same recording medium, go and buy a disk. Then go to some reputable dealers and use your disk to take some pictures. Make sure you know which camera was used for which shots. Try to take similar photos (since most places won’t let you take the camera outdoors this probably isn’t much of a problem). Then go home and compare the pictures. The LCD on the back of the camera is useless for this. You need to be able to compare them in an equivalent medium. The monitor on your computer (calibrated or not) will let you do that.
Harder to measure are the features. What’s the sensor response time (i.e. how long does it take the sensor to be ready to take the picture. Cheaper cameras can have notable lag times between pressing the release, and the actual capture of the image)? Does it have external controls? Are they easy for you to use? Are the menus you want right on top? Does it keep settings when the batteries are out? Does it have batteries which are easy to replace? Does it drain the batteries quickly? Does it have a viewfinder?
That last is a personal peeve of mine. I am not comfortable with LCD screens as the means of composing the image. They are hard to see in bright light, and they wobble and jiggle. The image I see is a little behind the actual image on the sensor. I find that looking into the viewfinder, and excluding most of the world, save what I am looking at, makes it easier to keep in mind the picture I am trying to take. There are cameras which have a combination of both. Panasonic makes one with an LCD panel, and a viewfinder, which looks onto an LCD. It’s not bad, but the jerky movie effect of it is still not quite what I like.
The rangefinder style is fine. It can cause some parallax problems when shooting up close, but that’s not too hard to deal with. Odds are, if you have such a viewfinder, you won’t be shooting a whole lot of pictures which suffer from the problem. If you do some practice will teach you how to adjust for it.
When it comes to controls I am not a huge fan of purely menu-driven systems. Why? Because they require learning a complex set of steps. This morning I wanted to use the interval shooting setting on my camera. I lost about 10 minutes of light, while the flower was opening, because the method wasn’t plain to me. I’ve used that setting before, but I forget, in between uses, just what the pattern of buttons and decisions is. Every mistake I made meant starting over.
If manual focus is an option, it ought to be a button, or a switch, on the outside of the camera. White balance, and ISO also ought to be some sort of external control (button, wheel, switch, or some combination). My dSLR has a switch for the modes of autofocus, another one for metering modes (spot, matrix and center weighted). Pressing a button, and turning a wheel lets me adjust ISO, white balance and the file format I’m using to record images.
I like that. There is a vast selection of things I can do from the menu. I can program some of the external buttons, I can set the frame rate (and how many frames I can shoot in a row, before I have to take my finger off the button). That’s swell, but those are things which take time. If I need to kill the AF, I need to be able to do it without stopping and finding it, three levels deep in the right set of menus.
To be good with a camera it has to be something you can use without having to stop and think about how to make it work. The more time it takes to perform regular tasks, the harder I think it’s going to be to get to that level of familiarity.
The tests in the store won’t tell you that you can get to that level of reflex with a given camera, but they can make it plain you won’t.
If you decide you want an SLR, you have some really important decisions to make. An SLR isn’t just a camera, it’s part of a manufacturer’s “system”. Canon lenses don’t work on Nikon bodies. Nikon flash units don’t communicate with Olympus bodies. Fuji does use Nikon mounts, so Nikon glass works on them. You can use third party lenses (I am fond of Tamron), but the body you choose will determine a lot of the accessories you get later.
Pretty soon you are locked into it. After you buy a $600 camera body, a couple of $300 lenses and a $200 flash, you aren’t likely to be willing to repeat the process. That makes it a lot more important to get what you want.
I’m a Nikon user. I got started on them, and when I moved to digital I already had a huge investment in the system. Some of it didn’t work as well (My flash unit was a SunPak 400, the adaptor to use it with the D2H was 1: expensive and 2: didn’t give me all the features of the SB800, so I bit the bullet and added more gear), but none of it was useless. If I’d bought a Canon... all of the lenses, bellows, extension tubes: everything which attaches to the body, would have had to be duplicated.
That would have been fine for Canon but a little bit foolish for me.
If you don’t already have an investment, the trick is pretty much the same. Figure out what you want to do. If you aren’t shooting sports, you don’t need a camera which shoots 8fps. If you are going to be doing macro, find out who makes the better lenses, and get the body which goes with it.
If the lenses you want are provided by a third party, then what light do you need? Who makes the flash which has the most control of light? After the question of glass, flash is the largest consideration when choosing a system: which flash system lets you combine the units best, to suit your needs?
After that you think about things like sensor size, pixel density and color balance, because those will change. The “film” is in the camera now (like a Box Brownie), so when you get a new camera those things change with it. Some of it will vary from body to body in the same model.
Don’t let me, or anyone else, persuade you that you need a Nikon, Canon, Hasselblad, etc. You are the photographer, not them. When all is said and done, the camera you can use comfortably, and which produces images you enjoy looking at, is the one you want.
“Workflow” is one of those terms of art which infect photography (some of which, like dLog, are fading as the number of photographers who’ve never heard of a densitometer rises). It happens the basic idea is probably not much newer than reproducible images. At its root workflow is how one moves from latent image (in the form of a negative, or an image capture) to a finished product.
There are a couple of sorts of workflow, which basically depend on what one is doing with the photos. For the casual photographer it’s pretty straightforward. When one moves to the serious amateur, or professional, it becomes more pressing.
The first question is file format in the camera. I personally recommend shooting .RAW, if possible. If your camera won’t do that, buy a larger piece of memory, and shoot .tif. This has the downside of requiring more storage space, because the files are larger; on the other hand, .jpgs are data poor. They have uses, but primary files are not one of them.
First, download the camera. When you do this you want to store the images so you can find them two years from now, when you don’t really recall what they are. Me, I use the date of download as the folder name, and each download, be it five frames or a full disk (about 300 frames, from a 2 gig disk), is its own folder. I find, by and large, a single batch of images is generally coherent unto itself. Happily I can pretty much recall when I shot something. This system works because my editing is done with an application I can use to look in the folders and see thumbnails. I commend this approach as well.
Having come up with a system for keeping track of the files, I also think you want to keep them all together. A freestanding hard drive is cheap (a terabyte isn’t more than a few hundred dollars. 300 gigs can be had for a bit more than 100). Keeping one’s files is worth the money needed to store them. I’d also recommend a second hard drive, so you can back them up. Again, not losing the images, memories and suchlike is worth a little bit of money and time (because loading gigs of photos from one drive to another isn’t instantaneous). If you like you can arrange to have the freestanding drive as the primary download point, or you can start them in a folder on your computer and move them later. Don’t forget to regularly dupe them to the backup drive(s).
Open the folder, find the photos you want to work on, and open them in the application of choice. What you do when you save them is dependent on your end result. For printing, .tifs at 300 dpi are what I use. If you want to email them to people, .jpg is the way to go. Mostly this is because .tifs are huge files. The 300 dpi print files I make are in the 20-30 megabyte range; that’s coming from 4 meg .RAW files. In practical terms, a resolution of 180 dpi is fine. It’s a decent compromise for onscreen viewing, and printing. Make sure, if you’ve made changes, to “save as” so the original image isn’t overwritten. That file is your negative. If you overwrite it all the potential in the original file is lost.
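The size math is plain division, pixels over dpi (the 3000-pixel width below is an assumption for illustration):

```python
def print_size_inches(pixels, dpi):
    """Printed dimension for a given pixel count and resolution."""
    return pixels / dpi

width_px = 3000  # assumed image width
print(print_size_inches(width_px, 300))  # 10.0 inches at 300 dpi
print(print_size_inches(width_px, 180))  # about 16.7 inches at 180 dpi
```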
If uploading to a website (Flickr, Photobucket, ImageShack, etc.), resolution will change how it looks. The web effectively displays at about 72 dpi. Go to a greater resolution and the default is to make the image larger. This is fine if the idea is for people to be able to print it, but if viewing online is the idea it can degrade the image. Play with it and see how things look. Then decide what you want to do. What I decided was to aim for a largest side of 800 pixels, and a resolution of not more than 125 dpi, for online images.
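The 800-pixel rule of thumb is just a scaling calculation: shrink both sides by the same factor so the longest side hits the target, and never scale up. A sketch:

```python
def web_size(width, height, longest=800):
    """Scale pixel dimensions so the longest side is at most `longest`
    (800 px here, matching the rule of thumb above); never upscale."""
    scale = min(1.0, longest / max(width, height))
    return round(width * scale), round(height * scale)
```

A 3000x2000 frame comes out 800x533; a 640x480 snapshot is already small enough and passes through unchanged.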
That covers a lot of the basics relating to size. In most ways it also covers how to deal with single images. When one moves up to playing with a lot of images, things get a little more complicated. Again, establishing a system helps a lot. Being able to look at all the images you want to play with at once is handy, because it lets you see if images in need of correction have the same sorts of problems.
This is where one’s choice of editing program starts to matter. Most of them allow for batch functions. The easier such functions are, the better for workflow. Photoshop has “actions”; LightZone has “styles”. I never bothered to learn how to do actions in Photoshop. Styles in LightZone are easy.
Take a representative photo. Make the corrections which give the general look you want, and save them as a style. Open one more image. Apply the style. If that works, then select all the images you want to apply the style to, and click the button. It will apply the style, and then save the images as you wanted (format, resolution, batch name and final location). Open them with a picture-viewing app and see if that did the job.
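The shape of a batch run is the same whatever the editor. This is only a schematic: `render` is a stand-in for the pixel work LightZone or Photoshop actually does, and the file names are invented.

```python
from pathlib import Path

def render(img, style):
    """Stand-in for the editor's real work of applying a saved style
    (exposure, colour, etc.) to an image. Here it just describes it."""
    return f"{Path(img).name} with {sorted(style)}"

def apply_style_batch(images, style, out_dir, ext=".jpg"):
    """Apply one saved set of corrections to every image in the batch,
    then write each result to a single output folder."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    results = []
    for img in images:
        corrected = render(img, style)       # the editor's job
        dest = out / (Path(img).stem + ext)  # same name, new format
        dest.write_text(corrected)           # a real app would save pixels
        results.append(dest)
    return results
```

The point is the loop: one style, one click, every frame corrected and saved where you wanted it.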
If you’re shooting weddings, or senior portraits, etc., this is really nice, because a problem which was missed at the shoot (a piece of mixed light, or a trifle of overexposure, etc.) can be fixed on all the images in a fraction of the time required to fix each one by hand.
This is where the professional (or serious amateur) encounters the problem of presentation: how does the client get to see the work? In the old days the customer made the time to come in and look at things. No longer. Put the images on a website, et voila: they can be looked at at the client’s leisure. (I have a bride who is looking at her fourth anniversary. She has prints, but the enlargements still haven’t been ordered. She’s just too busy to come in and go over them with me; had she gotten married a year later, this wouldn’t be the case. Right now it’s not a big deal. She’s not unhappy; I got prints to her in a couple of weeks, even with some problems at the printer [some hair got onto a couple of negatives; unacceptable], but she doesn’t have the album friends and family want to look at. When she decides she really wants those larger prints, she’ll make the time, and I’ll make the money.)
Now you have a problem: how to present the images. Wedding photographers’ workflow issues are worse, because a lot of them insist on making promotional packages, in the hope this will inspire the couple to plump for more expensive things; that often means making slideshows and the like. It’s a mixed blessing. The extra work required delays getting the pictures where the client can see them, which takes the bloom off the rose a bit. I know couples who are still waiting, more than a year after the wedding, to see all the pictures taken by the photographer they paid good money to do the job. Uncle Joe and Aunt Millie have become, de facto, the family’s providers of memories.
Which is why I favor a quick pass in the editor. It takes a couple of hours. The garbage gets tossed, the marginal gets set aside, and the good images get a quick massage. Then open the batch converter, select all the preliminary images and turn them into .jpgs. If your preferred app doesn’t have a batch function... buy one that does. The hassle of learning how to use one more program is more than balanced by not having to convert all the images to .jpg one, by one, by one.
The last step (in client work) is not putting things off. Right now, you probably have nothing pressing on the plate. Make the effort to spend the time going over all the photos with a fine-tooth comb. Sort them into groups by needed correction (you can play with them for editorial effect later). Where possible make batch corrections. Where needed, open them up and make the individual ones. I like to open a word processor and make notes about what mistakes I made; that way I will be less likely to make them the next time I encounter such problems. If you put it off, waiting until the client says, “I want this one, and this one, and this one, and can I see the ones you didn’t post to the website?”, you will end up being the sort of photographer who pulls all-nighters, with the concomitant failure to do one’s best, or the sort who has clients waiting years for photos from the event. Neither of those is what you want.
For “art” photography things aren’t quite so pressured. I tend to work on things which relate to each other. This isn’t a batchwork deal. Even when doing lots of shots of the same basic thing, the problems aren’t the sort which lend themselves to brute force. I might be able to apply an “action” or a “style”, but I have to see the large image to make up my mind. After I edit them, I save as a 300 dpi .tif and store it in a “pending” folder.
When the folder has enough (a rule-of-thumb call: it looks full), I start to work on it. I resize to web resolution. One of the things which tends to be more common in my nature photography is cropping. “Art” shots are often very carefully composed in the viewfinder. The same for landscapes and portraits. Event shots are usually composed pretty much full frame.
In real terms, crop is immaterial. The crop is what I want to show the world, and the final print size is going to be pretty much paper dependent. If the ratio makes a lot of white space, top and bottom, that’s part of the image. But the screen is a fixed resolution. The way the web shows a thing at higher resolutions is to make it larger. For a couple of reasons I don’t want to do that. One, it’s easier for someone to decide to take a copy and print it out, large. Two, it’s harder to appreciate at that size on the screen. If it’s compressed there can be odd edge effects. If it’s too large the screen can’t show all of it.
So I set an arbitrary number, one which is large enough to show the detail but small enough to be seen on most people’s screens. 800 pixels across is the target. For my sensor that’s 97 dpi on a full-frame image. If the crop is really strong (say, for a 4x4 square image) I’ll go up to 125 dpi, and settle for a smaller image (about 620 pixels) on the screen. I’ve done enough that I can get really close on the first try. When I save that (as a copy) it goes to a folder for uploading. When I’m done with that it goes in a folder with every .jpg I’ve ever converted.
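The arithmetic behind those numbers is simply inches = pixels / dpi, which is why 800 pixels at 97 dpi and 620 pixels at 125 dpi both land in a screen-friendly size:

```python
def print_inches(pixels, dpi):
    """Physical length of a side at a given resolution: inches = pixels / dpi."""
    return pixels / dpi
```

800 px at 97 dpi works out to about 8.2 inches across; a 620 px side at 125 dpi is just under 5 inches.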
My personal opinion on workflow: get started as soon as a batch of images is pulled off the camera. Spend two to three hours at a time. Set up the images so you can feel you’ve completed what you were working on in those hours. The most important part of it is to keep the entire process fun. Planning the photos and taking them is usually going to be fun (if it isn’t, you need to think about changing something: a new area, or a long vacation doing something else with the camera, etc.), but the back end can be tedious. If that happens you start avoiding it. This is bad for the professional; it’s terrible for the amateur. As a professional you have to go to the editor; the client, or the landlord, mandates it.
As an amateur, no matter how much you love taking pictures, having thousands to go through can be daunting. Again, just set aside a couple of hours and play with them. You can come back later, and work on the rest.
Finally, a word on naming. No one likes to title a piece _AD45901_98; it’s ugly, it’s dull and it says nothing about the photo. It is, however, a useful thing, because it relates the photo to other photos taken at the same time. When I title an image, the file has the camera number included, after the title. When I publish it, I clip that off the name. When someone says they want to buy a copy of “X”, they are referring to the .jpg they saw. I need to get to the .tif, which isn’t a big deal if I’ve not misplaced it (it’s taken a few years, and a few mistakes, to get the system I have now). If I’ve misplaced it, I go to that .jpg folder and find the image by name. Then I go to one of the storage disks and search by file number. That will get me to the edit file, which is stored with the .RAW file. From there I can remake a .tif version.
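Clipping the camera number off for publication, while keeping it for searching the storage disks, is one small string operation. The pattern below is an assumption about what the frame numbers look like (letters then digits, e.g. `_AD45901_98`); adjust it to your camera’s convention.

```python
import re

# Hypothetical pattern for a trailing camera frame number, e.g.
# "_AD45901" or "_AD45901_98" tacked onto the end of a titled file.
FRAME = re.compile(r"_([A-Z]+\d+(?:_\d+)?)$")

def split_title(stem):
    """Separate the human title from the camera number, so the clipped
    name can be published and the number kept for finding the edit
    and .RAW files on the storage disks later."""
    m = FRAME.search(stem)
    if not m:
        return stem, None
    return stem[: m.start()], m.group(1)
```

So “Crater Lake_AD45901_98” publishes as “Crater Lake”, and “AD45901_98” is what you search the storage disks for.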
I learned this the hard way. I started keeping the camera source name after I decided I wanted to print something and couldn’t find it. It adds a small amount of work when uploading to the web; other than that it matters not. I also keep the EXIF data, so I can check to see when a photo was taken, if I want to go back and look at other things from the same time frame.
So those are my thoughts on workflow. In a nutshell it’s all about figuring out the easiest (not always the fastest) way to take the image from camera to final output. What works for you is what works for you. The rest is all nibbling at the edges.