ETTR - a technical perspective

“Expose To The Right” (ETTR) is a recommendation initiated by Michael Reichmann in 2003. The theory dictates that images should be deliberately over-exposed in camera and then adjusted back down in processing to maximize detail in the shadows. This principle is directed toward purists wishing to achieve the utmost in image quality.

But does this theory hold true, and is it a worthwhile pursuit with today’s DSLRs? I have seen many articles promoting the practice, many with bold claims but none with evidence that satisfies my doubts. Coming from a lengthy background in electronic system maintenance, I have struggled to find logical reasoning that would prove these claims true.

First off - a disclaimer. Although I have 20+ years of intimate experience in the electronic system arena, I am in no way positioning myself as an expert on this subject. Rather, I offer a viewpoint from someone coming from a world entrenched in logic and harboring a passion for photography.

I’d like to base further discussion on the following facts and beliefs regarding the camera sensor. Although it contains millions of light-gathering cells, we will concentrate on an individual cell, as all are similar in operation:

  • electronic sensors in general generate a random low level noise component (inherent noise) that is an added part of the output (with zero input, output = noise)

  • a single cell accumulates and converts the light that strikes it to an analog voltage output - that output is directly proportional to light accumulation (+ random inherent noise)

  • cells do not have ‘sensitivity’; any adjustment to ISO or exposure compensation simply applies a multiplication factor to the OUTPUT of the cell, either before or during conversion to a digital signal. It is important to note that the output of a cell is CONSTANT (plus inherent noise) for a given light capture.
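As a toy illustration of the cell model above (the proportionality constant and noise level here are invented for the example, not measured from any real sensor), a cell’s output and the ISO multiplier might be sketched like this:

```python
import random

def cell_output(light, gain=1.0, noise_level=0.5):
    """Toy photosite: output is proportional to the light accumulated,
    plus a random inherent-noise component. `gain` models the ISO
    multiplier, which is applied to the WHOLE output (signal + noise)."""
    signal = light                        # proportional to light capture
    noise = random.gauss(0, noise_level)  # inherent noise, independent of light
    return gain * (signal + noise)

# For a given light capture the underlying signal is constant;
# raising the gain (ISO) scales signal and noise together.
print(cell_output(100, gain=1.0))
print(cell_output(100, gain=2.0))
```

Note that doubling `gain` doubles the noise term along with the signal - a point we will come back to.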

As inherent noise is fairly constant, it can be said that sensor/cell noise becomes more significant in low light (let’s use a very simple signal-to-noise ratio (SNR) of 10:1 for this example) than in bright light (say an SNR of 1000:1). This is often compared to the output of audio speakers with the volume turned up - with a very low input signal, noise is clearly evident, but it diminishes in significance as the input signal rises.
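Plugging in the example numbers (a roughly constant noise floor of 1 unit against signals of 10 and 1000 units; all values are illustrative only):

```python
def snr(signal, noise=1.0):
    """Simple signal-to-noise ratio against a constant noise floor."""
    return signal / noise

def noise_share(signal, noise=1.0):
    """Fraction of the total output that is noise."""
    return noise / (signal + noise)

# Low light: the constant noise floor is a large share of a weak output.
print(f"low light:    SNR {snr(10):6.0f}:1, noise = {noise_share(10):.1%} of output")
# Bright light: the same noise floor is insignificant next to a strong signal.
print(f"bright light: SNR {snr(1000):6.0f}:1, noise = {noise_share(1000):.2%} of output")
```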

A theory that prevails with ETTR is that by increasing the exposure of an image at capture, the signal-to-noise ratio is increased, thus providing better detail (less noise) in the shadows (exposure is then adjusted in post-processing to return the shadows to what we’d expect).

While on the surface this may make sense, it really depends on how this is achieved:

  • ISO adjustment. By increasing the ISO we increase the “sensitivity”, and thus, with shutter speed and aperture being equal, we have increased the signal/exposure. However, since ISO simply applies signal amplification (as noted above), the noise component is amplified by the same degree. This provides a net gain of zero, and may explain why noise is increasingly evident when raising ISO in low-light situations.

  • Shutter speed adjustment. By slowing the shutter speed we allow the sensor/cells to accumulate more light and produce a higher output. This does increase exposure, but unfortunately the noise component is also accumulated at a similar rate, and once again the net effect is zero. This might explain the evidence of noise on very long exposures.

  • Aperture adjustment. As we open the lens to allow more light to reach the sensor, we gather more light in the same amount of time for an increased exposure. With ISO and shutter speed held constant, this seems to be the ONLY true option for increasing exposure without increasing the noise component.
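The three options above can be compared in one sketch that follows the article’s own model (ISO amplifies signal and noise together; noise accumulates with shutter time; aperture adds light without adding noise - the specific numbers are invented for illustration):

```python
def snr_after_normalizing(signal, noise):
    """Pulling exposure back down in post scales signal and noise
    together, so the SNR at capture is the SNR that survives."""
    return signal / noise

base_signal, base_noise = 10.0, 1.0   # baseline SNR of 10:1

# A one-stop (2x) exposure increase, achieved three different ways:
cases = {
    "baseline":         (base_signal,     base_noise),
    "ISO x2":           (base_signal * 2, base_noise * 2),  # gain amplifies both
    "shutter x2":       (base_signal * 2, base_noise * 2),  # noise accumulates too
    "aperture +1 stop": (base_signal * 2, base_noise),      # more light, same noise
}

for name, (s, n) in cases.items():
    print(f"{name:>16}: SNR = {snr_after_normalizing(s, n):.0f}:1")
```

Only the aperture case ends with a better ratio after the exposure is normalized.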


From all I can determine, ETTR is based on a solid argument but perhaps falls short. I agree that an increase in exposure will show less noise, but NOT if it is accomplished by ISO and/or shutter speed changes. However, I DO see the logic for a gain when the exposure increase is achieved by APERTURE adjustment - by allowing more light to hit the sensor (ISO/shutter being equal) we increase the SNR, thus decreasing the level of noise once the exposure is normalized.

The downside of following this theory needs to be stated: however it is achieved, the photographer runs the risk of losing data in the whites, and the mechanism of doing so (ISO/f-stop/shutter speed) can have a far greater impact on composition than any (debatable) improvement in shadow detail.

With the expanded dynamic range of modern cameras, I would further suggest that any gains that may have been perceived in 2003 have largely diminished. The modern-day photographer would be better served by ensuring lighting is adequate (to maximize SNR) and that a full data range is reported by the histogram - otherwise, consider exposure bracketing to achieve that goal.

This is my understanding - I would love to hear any other arguments out there that may provide logic and/or evidence that I may have overlooked.