What Is Dynamic Range
Hey folks. This time I want to discuss a topic that is occasionally misunderstood and sometimes misrepresented, and that topic is Dynamic Range.
For the purposes of this conversation, we will focus on the dynamic range capability of your camera, specifically its sensor. While dynamic range can and will enter conversation in other elements of photography, in the words of Julie Andrews, let’s start at the very beginning, a very good place to start.
Camera Sensor Dynamic Range - Simplified
All camera sensors record light, and the span between the least amount of light a sensor can record and the maximum amount it can record is called the sensor's dynamic range. While the range is properly expressed in EV (exposure value), detaching it from camera exposure controls, most vendors and analysts state dynamic range in stops, where one stop equals one EV.
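A minimal sketch of the arithmetic behind that definition: dynamic range in EV is the base-2 logarithm of the ratio between the brightest and dimmest signals the sensor can record, because each stop is a doubling of light. The signal figures below are hypothetical, purely for illustration.

```python
import math

def dynamic_range_ev(max_signal: float, min_signal: float) -> float:
    """Dynamic range in EV (stops): each stop is a doubling of light."""
    return math.log2(max_signal / min_signal)

# A hypothetical sensor whose brightest usable signal is 16,384 times
# its dimmest recordable signal spans 14 stops:
print(dynamic_range_ev(16384, 1))  # 14.0
```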
Most makers recognize the marketing value of stating a sensor’s dynamic range and as a generality, more dynamic range is better than less.
Where Does This Come From
The human eyeball in its Mark 1 Mod 0 release (what we have in our heads) is a pretty amazing entity. According to ophthalmic professionals, a human eye can typically see about 20 EV of dynamic range, meaning twenty full-stop differences in light-gathering power. This is quite awesome, although it is unlikely that anyone can see all 20 EV at the same time, due to the necessary control effects of pupil dilation and contraction. Nonetheless, our eyes see more dynamic range than the top commercial sensors available so far. While we might be inclined to match pupil changes to aperture, they are actually more akin to ISO, but that is another discussion.
When we were shooting film, we benefited from a medium capable of recording up to about 6.5 EV of dynamic range in a single image. This defined what we knew as photography, though most folks understood that what the film recorded did not precisely match what their eye saw, particularly in the very bright and very dark areas.
When digital sensors were first developed, they were based on an electronic device known as a CCD, or Charge-Coupled Device. The first round of commercial CCD sensors could record between 6 EV and 7 EV of dynamic range and had a very film-like appearance. The convenience of storing more images electronically, without the cost of film and processing, heralded photography's move from chemistry and physical media to electronic storage.
CCDs, however excellent, brought their own challenges: greater noise at even median ISO values, a tendency to run hot, and high energy consumption. A few years in, we saw CCD technology give way to CMOS (Complementary Metal-Oxide-Semiconductor), which offered greater power efficiency, better digital noise control and the potential for more dynamic range.
Those first CMOS sensors offered 8 EV of dynamic range, a full stop or two more than a good CCD sensor, so the race was on to replace CCD and to drive more dynamic range out of CMOS sensors.
Today we find sensors with 13.5 to 14.5 EV of dynamic range capability. That is more than twice the dynamic range of film, so there is far more data available for creating the image.
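It is worth noting how dramatic that difference actually is. Because each additional stop doubles the recordable contrast ratio, "twice the stops of film" means far more than twice the range. A rough calculation using the figures above:

```python
# Each EV (stop) doubles the contrast ratio, so the growth is
# exponential, not linear. Figures are those quoted in the article.
film_ev = 6.5      # film's dynamic range, in stops
sensor_ev = 14.0   # a modern sensor, in stops

print(round(2 ** film_ev))    # ~91:1 contrast ratio for film
print(round(2 ** sensor_ev))  # 16384:1 for the modern sensor
```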
What Does Creating the Image Mean?
Cameras record sensor data as an ordered matrix of bit values. The camera maker takes these bit values and processes them with a company-proprietary algorithm to produce what we call a RAW file. RAW implies untouched, but all RAW files are processed. They contain metadata, typically a small embedded JPEG preview, and are also image-processed to achieve a look common to the manufacturer. You may have read about the “Nikon look” or “Canon look”. This is all in how the original bits are processed to create the RAW file. A RAW file is still not an image per se, but it has a defined structure that a RAW converter can use to produce a viewable image.
RAW files contain all the data (we hope) from the recording of the image, but this is highly dependent on the processing algorithm. A few years ago, Sony cameras earned the moniker of “star eater” because their algorithm was designed to reduce noise by dropping overly bright bits, which in a number of cases ended up removing stars in astrophotography. This is an example of how RAW is not truly “raw”. That said, RAW delivers the most data possible, and if you want maximum flexibility in your image processing, this is one of the reasons to shoot RAW.
In the context of maximizing dynamic range: if you want maximum dynamic range, do not shoot in JPEG. JPEG is an 8-bit, lossy compression format built to throw away data to keep file sizes small. In so doing, dynamic range gets tossed early and is non-recoverable.
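A simplified sketch of why that loss is non-recoverable (this ignores gamma curves and JPEG's actual DCT compression, so it is an illustration, not the real pipeline): squeezing 14-bit sensor values into 8 bits merges nearby tones, and once merged they cannot be told apart again.

```python
# Hypothetical illustration: reducing a 14-bit sensor value to the
# 8-bit precision of JPEG by dropping the 6 least significant bits.
def to_8bit(raw14: int) -> int:
    return raw14 >> 6  # 14-bit -> 8-bit

shadow_tones = [40, 70, 100]               # three distinct 14-bit tones
jpeg_tones = [to_8bit(v) for v in shadow_tones]
print(jpeg_tones)                          # [0, 1, 1] -- two tones merged
```

The second and third shadow tones collapse to the same 8-bit value, which is exactly the kind of tonal distinction you lose the ability to recover in post.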
Why Do I Care
The simple and logical reason for caring about dynamic range is that you spent thousands of dollars on your camera and want the greatest amount of processing flexibility natively. Indeed, you get more useful information out of a single 12 EV dynamic range RAW than you ever did out of a seven-shot HDR taken with a 6 EV sensor. This is one of the reasons HDR has fallen off in discussion and demand recently: the old ways of doing HDR photography are largely unnecessary with modern sensors. I will discuss that in a future article.
Having all that native dynamic range in the file allows the RAW converter, presuming it is a decent one, to convert the file to a viewable image for manipulation. If we use Adobe Lightroom Classic as an example, the Adobe RAW converter creates a DNG iteration internally that you can see and work with. DNG is simply an intermediate state in RAW conversion, although you can also save a file in this format. We can also view native RAW files through other RAW converters. There are myriad converters, and they all differ somewhat, as defined by the writers of their algorithms.
More data means more agility in the blacks, whites, shadows and highlights. Dynamic range is a luminosity scale, not a colour scale. In fact, for those of us who learned the 11-zone Zone System, our digital files today may have more dynamic range than the Zone System specified. This does not make the Zone System useless, but it does change the methodology of using it in processing, much as modern sensors have changed HDR.
Conclusions
Allow me to be blunt. If you are shooting for Instagram, Facebook or any other similar service that is going to massively compress your uploads, the dynamic range of your sensor is not going to mean much. These services only take highly compressed JPEGs as uploads and then compress them further. I suppose users should be grateful that their images don’t look like cat puke. When we consider that these images are most often viewed on tiny phone screens or in web browsers on displays of very limited pixels per inch, we should not be surprised if the image looks ok at 72 ppi. However, if you are printing yourself or with a service provider that can handle 16-bit TIFF files, dynamic range is going to make an enormous difference in your final output. Make your own decision about how much this matters to you. If you are a serious photographer looking for more than social media likes, dynamic range is going to be of great importance to you.
Do you have an idea for an article, tutorial, video or podcast? Do you have an imaging question unrelated to this article? Send me an email directly at ross@thephotovideoguy.ca or post in the comments. When you email your questions on any imaging topic, I will try to respond within a day.
If you shop with B&H Photo Video, please consider doing so through the link on thephotovideoguy.ca as this helps support my efforts and has no negative impact whatsoever on your shopping experience.
If you find the podcast, videos or articles of value, consider clicking the Donation tab in the sidebar of the website and buy me a coffee. Your donation goes to help me keep things going.
I'm Ross Chevalier, thanks for reading, watching and listening and until next time, peace.