Digital Photography and Post Processing

[Photo: Red Barn in the Orchard. White pear blossoms surround a red barn at the foot of Mt. Hood.]

If you shoot JPEGs with your camera or smartphone, the little computer in the device post-processes the image into something it thinks is nice. That version of the image is no more real or honest than any other version created from the RAW image data. It is all interpretation. When you work directly from the RAW file on the computer, the resulting image is the artist's interpretation, not the camera's. The same set of digital data can be turned into a color image, a monochrome or sepia image, or 'filtered' and manipulated into something that bears little resemblance to the original scene.
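To make that concrete, here is a minimal sketch of rendering one RAW file several different ways using the rawpy library, a Python wrapper around LibRaw. The filename and the particular settings are illustrative assumptions, not my actual workflow; the point is only that every rendering, including the camera's own JPEG, is one choice among many.

```python
import numpy as np
import rawpy

# One RAW file, several interpretations; none is more "real" than another.
# "barn.NEF" is a hypothetical filename standing in for any RAW capture.
with rawpy.imread("barn.NEF") as raw:
    # A camera-like rendering: camera white balance, default brightening.
    camera_look = raw.postprocess(use_camera_wb=True)

    # A flat, artist-controlled starting point: no auto-brightening and
    # 16-bit output, leaving the interpretive decisions to the editor.
    flat_look = raw.postprocess(no_auto_bright=True, output_bps=16)

# A monochrome interpretation of the very same data: a standard
# luminance-weighted mix of the red, green, and blue channels.
mono = (camera_look @ np.array([0.2126, 0.7152, 0.0722])).astype(np.uint8)
```

Change the white balance, the tone curve, or the channel weights and the same pixels become a different photograph.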

My goal as an artist is to show as closely as possible what I saw, what I felt, and what I experienced when I made the image. Sometimes that requires no more effort than posting the image to my website. Sometimes it requires hours or even days of post-processing work in Lightroom and Photoshop. I reject the idea that taking a picture and then working on it on the computer in any way alters what I saw or felt, though I have photographer friends who still insist that "straight out of camera" is somehow more pure or honest. I think that working a RAW image back to what I saw, rather than what the camera saw, produces the more accurate image. I believe in producing an image that reflects my vision at the time it was made. That, I think, is the essence of art.

We have to consider the limitations of the cameras we use and the conditions under which we use them. Take a snapshot with your phone in good daylight and the image will be a close representation of what you see. But come back before dawn and try to photograph Mt. Hood lit by sunrise while still capturing enough foreground detail for a good image. Neither the phone nor even an expensive DSLR is going to produce an image that looks anything like what you saw. The human eye is still capable of things the camera is not. When you look up at the mountain, your pupils narrow because there is a lot of light in the sky. Look down at the dark foreground and your pupils open up to let in more light. This happens so fast you never notice it. But the camera cannot do that; everything in a single frame is exposed at the same level. That means that to recreate what your eye saw, we need to go beyond what the camera captured in that single exposure.

According to many sources, the human eye can discern 20 to 24 stops of dynamic range, the span between the darkest and brightest parts of a scene, with each stop representing a doubling of light. But it manages that only because the pupils are constantly opening and closing to adjust the amount of light we actually take in. A good DSLR can record between 10 and 12 stops in a single frame. So what do we do when a scene spans, say, 16 stops between its darkest and brightest parts? The camera cannot record that in one exposure, so we fall back on combining bracketed exposures on the computer to produce a result that looks like what our eyes saw in the first place.
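Since a stop is a doubling of light, n stops correspond to a contrast ratio of 2^n: a 16-stop scene spans roughly 65,536:1, far beyond a 12-stop sensor's 4,096:1. The toy Python below is a sketch of one simple way bracketed frames can be blended, weighting each pixel by how close it sits to mid-gray. Real tools such as Lightroom's HDR merge are far more sophisticated; the alignment of the frames and their loading into arrays are assumed here.

```python
import numpy as np

def stops(brightest, darkest):
    """Dynamic range in stops: each stop is a doubling of light."""
    return np.log2(brightest / darkest)

def blend_exposures(frames):
    """Naive exposure blend of aligned bracketed frames scaled to [0, 1].

    Each pixel is weighted by its distance from mid-gray, so blown
    highlights and crushed shadows contribute little to the result.
    """
    stack = np.stack(frames)                    # (n, H, W) or (n, H, W, 3)
    weights = 1.0 - 2.0 * np.abs(stack - 0.5)   # 1.0 at mid-gray, 0 at ends
    weights = np.clip(weights, 1e-6, None)      # keep every sum nonzero
    return (stack * weights).sum(axis=0) / weights.sum(axis=0)
```

The mid-gray weighting is the simplest defensible choice: each bracketed frame contributes the tones it exposed well, the dark frame supplying the highlights and the bright frame supplying the shadows.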

If I were a photojournalist, I would feel obligated to modify the image as little as possible from what the camera recorded; indeed, there are rules and standards for this in the photojournalism world. But I am an artist. I use a camera and the way light plays off the natural world to show my vision of what I am seeing. Sometimes that is just the way the little computer in the camera sees it as well. But most of the time my eye and the camera's computer do not see the same thing.
