Digital Analogs

Posted on March 13, 2016 by psu

2001 is the year when mass-market digital photography began. This was the first year both Nikon (with the D1X) and Canon (with the EOS-1D) introduced DSLR cameras that for the most part handled and performed essentially the same as their film counterparts. Still, it was only a relatively small audience of professional photojournalists and well-off technical nerds who had the money to buy and use these cameras. Fifteen years ago most serious photographers (still and motion) were still capturing their light pictures onto small pieces of coated plastic. When the D1 came out I remember thinking that a decent digital camera that I’d bring on a trip instead of my Nikons and Tri-X was about five to ten years away … but closer to ten. At the time I even had a darkroom in my house, pieces of which I still own. I was pretty good at doing my own film and prints, and was skeptical of the digital hardware.

But digital moved quickly. By 2004 even a relatively reluctant dilettante like me had bought two digital bodies and would basically never shoot film again. By 2009 Kodachrome was gone and Fuji had stopped making many of its most famous color slide films (and many black and white films too). These days there are probably high school kids who have never seen a piece of film, and might not even know why the word is used as a synonym for movies.

What brings me to repeat these small bits of history is a stupid article I read just after Christmas about the relationship between film and digital photography. The article in question is here: http://time.com/4003527/future-of-photography/. The paragraph that made me instantly stop reading the piece was this:

Digital capture quietly but definitively severed the optical connection with reality, that physical relationship between the object photographed and the image that differentiated lens-made imagery and defined our understanding of photography for 160 years. The digital sensor replaced the optical record of light with a computational process that substitutes a calculated reconstruction using only one third of the available photons. That’s right, two thirds of the digital image is interpolated by the processor in the conversion from RAW to JPG or TIF. It’s reality but not as we know it.

I highlight this passage because it has at its core the kind of wrong-headed nostalgia that colors so much of the discussion about film and digital photography. I think this text is saying basically two things:

  1. Film, for whatever reason, is a more direct and realistic representation of whatever scene you are pointing that camera at than digital. The notion here is that the image in the negative (or slide) has a direct physical relationship with the scene being captured. This is also usually when the word “analog” is used to describe film, as in “film is analog, just like the real world”. Never mind that at the scale that a film emulsion works (silver atoms) the world is ruled by quantum mechanics, which is weirder than any shit you can imagine.

  2. Digital images are inherently manipulated because software is involved. The notion here is that digital images never really exist; they are created by the processing software, even to the point of containing information that the computer “made up”.

The conclusion that the piece then wants you to draw is that digital pictures, since they are inherently manipulated at capture time (and tangentially, because they are so easily manipulated in post-processing), are less real than pictures captured on film and printed in a darkroom.

This is, of course, hogwash.

Let us review how film works.

  1. Coat a long, thin piece of plastic with a super thin emulsion made up of layers of gelatin and silver-based chemicals called silver halides.

  2. When light hits the emulsion a chemical change is induced in the grains of silver halide. The exact nature of this change is not completely understood, but when the exposed grains are then dunked in a chemical bath made up of basic salts, they cause that part of the film to become darker. You then run the film through a few more chemical baths to produce a permanent negative black and white image of the scene that you saw in the camera.

    Modern black and white films also incorporate various tricks to make the emulsion sensitive to the full spectrum of light rather than just blue-ish light.

  3. To make the picture a positive again, you project an image of it (usually enlarged) onto a second piece of emulsion, this one coated on paper. Usually, to get the entire range of tones from the negative onto the paper, you have to do a non-trivial amount of manipulation, because the exposure range of the paper is not as long as that of the film.

  4. If the film is a color film, then the emulsion and development are even more complicated. First, you need multiple layers of emulsion, one for each primary color that you want to capture. Second, color films incorporate dye couplers either into the film itself or into the chemical baths used to develop them. These couplers react with the oxidized developer sitting in each layer of the emulsion to form dyes in the primary colors that make up the final color image. Since denser areas of the negative will have more oxidized developer near them, you get a higher density of dye in those areas. This is how the color “knows” where to go in the picture. The silver is then washed away and you get either a negative or positive color picture made up of the multi-layer dye sandwich still trapped in the gelatin.

    The details of how color films are formulated and how the color works are in general complicated enough to make up multiple graduate level textbooks in applied chemistry. The exact details probably died with Kodak (but still live on at Fuji).

Digital imaging systems work on principles that are both simpler and more complicated. The sensors themselves are pretty complicated, but once you have the data, getting a picture out is comparatively simple.

  1. A sensor is a semiconductor chip made up of millions of imaging sites. Each site is a piece of light sensitive electronics, like a well. Photons fall into this well and a voltage leaks out the bottom (I simplify for space, and because I don’t really know how it works).

  2. Like basic film emulsions, imaging chips are black and white devices, so to make color images we put a filter array on top of the imaging sites. This array is a series of red (R), green (G) and blue (B) filters arranged in some known pattern. When we read the data out of the sensor we get a series of numbers, and we keep track of which numbers came from each kind of site. Some software then reconstructs the color using relatively well-known algorithms, plus a lot of proprietary engineering experience, to get a nice-looking picture out (a toy sketch of the simplest version of this reconstruction follows this list).

  3. The imaging pipeline in an average digital camera has a range of possible in-camera post-processing options. Some (like the iPhone) do a lot, others (like more standard SLRs) do comparatively little. Most will manipulate the raw data to deal with various issues around color balance, contrast, noise, etc. Exactly what happens is mostly proprietary.
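
To make step 2 a little more concrete, here is a toy sketch of the simplest possible color reconstruction: bilinear interpolation over an “RGGB” Bayer pattern, the filter layout most real sensors use. This is just my own illustration under those assumptions (the function name and the fake data are made up for the example); production raw converters do something much fancier, and proprietary.

```python
# A toy bilinear demosaic of an RGGB Bayer mosaic. Illustrative sketch
# only, not any camera maker's actual pipeline.
import numpy as np
from scipy.ndimage import convolve

def demosaic_rggb(mosaic):
    """mosaic: 2-D float array of raw sensor values laid out as
         R G R G ...
         G B G B ...
       Returns an (H, W, 3) RGB image via bilinear interpolation."""
    h, w = mosaic.shape
    r = np.zeros((h, w))
    g = np.zeros((h, w))
    b = np.zeros((h, w))

    # Scatter each sample into its own sparse color plane.
    r[0::2, 0::2] = mosaic[0::2, 0::2]
    g[0::2, 1::2] = mosaic[0::2, 1::2]
    g[1::2, 0::2] = mosaic[1::2, 0::2]
    b[1::2, 1::2] = mosaic[1::2, 1::2]

    # Classic bilinear kernels: green covers half the sites, red and blue
    # a quarter each, so the missing values at every pixel are filled in
    # by averaging the nearest same-color neighbors.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    return np.dstack([convolve(r, k_rb, mode='mirror'),
                      convolve(g, k_g,  mode='mirror'),
                      convolve(b, k_rb, mode='mirror')])

# A fake 4x4 "sensor readout", just to show the shapes involved.
raw = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_rggb(raw).shape)   # (4, 4, 3)
```

Note that in a Bayer array half the sites are green and a quarter each are red and blue, so every output pixel really does get two of its three color values by averaging its neighbors. That is the kernel of truth in the “two thirds interpolated” claim quoted above; what does not follow is that this makes the result any less a record of the scene.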

So, what have we learned? The truth is that both film and digital images are the result of a huge number of indirect steps between the light hitting your lens and you seeing a picture on the other side. It is in no way true that a film picture reflects any more of a “direct” connection with the original scene. What you get in the film picture depends on at least 15 different kinds of proprietary processing, any of which could change at any time, turning your pictures into something completely different. In general this did not happen, because for the most part the photochemical companies had a vested interest in making pictures look consistent and real.

But the digital imaging companies have exactly the same interest. Photography has an important cultural role in reporting the visual facts of our lives. As such, it will always be important for the default end result of any of the magic above to be something that looks like a picture of what was in front of the camera. Then it’s up to the photographer to work out how much she is willing to manipulate and change the image. In practice this can be a difficult process, especially since digital imaging systems give you so many opportunities for manipulation. But I claim that ultimately the credibility of the picture is tied more to the person making it than the tools that are involved. Just like it always was.

I will end this by pontificating about the hatred I have for the word analog in this context. In modern usage it appears that any time anyone wants to refer to a process which is not digital, that process is by default called “analog”. So we have analog photography, analog sound recording, analog sound reproduction, and so forth. If car people were stupid in the same ways as everyone else, the old mechanical cars would be called “analog” in contrast to modern vehicles that are mostly driven by computers.

But this word is a terrible way to describe the difference between traditional and digital photography. It’s terrible because both processes make an analog of the original scene. Each system takes a series of millions of little quantum mechanical interactions between light and a substrate and turns them back into an image. Both processes sample the scene, and then both use science and what a layman would call a bit of magic to reconstruct a “whole” image. Film does this with magic chemicals that tell the color where to go. Digital does this with CCDs and some clever software. While these paths are pretty different, the end result is the same: something that reminds you of an interesting thing you saw at some point in your life that you wanted to be able to see again. So you hit the button, and took the picture.

Editorial Note: I was going to include a section about image reconstruction, how humans perceive images, and why the interpolation needed to create color images isn’t really that big a deal. But, I think that would take a whole other article that I’m not qualified to write in order to get it right. In the end that would have been too much work just to troll either the idiots who seem to think that their precious images have so much detail that no compression could possibly maintain it, or the related idiots who think that their hearing, at 55, is so acute that they can actually hear a huge difference between 256K AAC and a WAV file. The answer in each case is: you are wrong, nerd. So I’ll leave it at that.

Second Editorial Note: In 2021 I updated the short discussion of dye couplers to be more accurate after I read more about how they work.