The exposure tools your camera already has (and why they lie)
False colour, zebras, waveform: what each one is actually measuring and where they disagree. And why zone-based exposure tools changed everything.
You’ve been told to “expose to the right.” You’ve been told to “protect your highlights.” You’ve been told that false colour is the best exposure tool, or that the waveform is, or that the histogram is. Everyone has an opinion, and the opinions contradict each other.
Here’s the thing: they’re all measuring different things. And they’re all lying to you, slightly, in different ways.
What zebras actually show you
Zebras are a threshold display. They show you where pixel values exceed a number you set. That’s it. They don’t know what the scene looks like. They don’t know what’s important. They draw stripes on anything that’s bright enough, whether it’s a specular highlight you’re happy to clip or a face that’s about to lose detail.
The number they compare against depends on what signal they’re looking at. If your camera outputs Log-C to the monitor, the zebra threshold is in log code values. If it outputs a Rec. 709 conversion, the threshold is in display-referred values. Same scene, different zebra pattern. Neither is wrong. They’re measuring different things.
The common gotcha: you set your zebras at 70% because someone told you that’s correct skin exposure. On a 709 output, that might be true. On a Log-C output, 70% is nowhere near where skin sits. Log curves compress highlights, so the code value for a face is much lower than you’d expect. You’re measuring the right number in the wrong domain.
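To see how far apart the two domains are, here's a small sketch. It assumes the published ARRI LogC3 (EI 800) constants and the pure Rec. 709 OETF; a real camera's 709 output usually adds a contrast "look" on top, which pushes skin higher still.

```python
import math

def logc3_encode(x):
    """ARRI LogC3 (EI 800) encoding of linear scene reflectance."""
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809
    return c * math.log10(a * x + b) + d if x > cut else e * x + f

def rec709_oetf(x):
    """Pure Rec. 709 OETF -- no camera 'look' applied."""
    return 1.099 * x ** 0.45 - 0.099 if x >= 0.018 else 4.5 * x

def logc3_decode(v):
    """Invert the log segment of LogC3."""
    return (10 ** ((v - 0.385537) / 0.247190) - 0.052272) / 5.555556

def rec709_inverse(v):
    """Invert the Rec. 709 power segment."""
    return ((v + 0.099) / 1.099) ** (1 / 0.45)

# Middle grey (18% reflectance) lands in different places:
print(f"grey: LogC {logc3_encode(0.18):.0%}, 709 {rec709_oetf(0.18):.0%}")

# And a 70% zebra threshold corresponds to very different scene light:
for name, inv in [("LogC", logc3_decode), ("709", rec709_inverse)]:
    stops = math.log2(inv(0.70) / 0.18)
    print(f"70% zebra in {name} = {stops:+.1f} stops over middle grey")
```

The same 70% stripe marks roughly +4 stops over grey on a LogC feed but only about +1.5 stops on a pure 709 feed: one number, two meanings.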
What false colour actually shows you
False colour maps brightness ranges to colours, typically blue for shadows, green for midtones, pink for skin, red for clipping. It’s a more nuanced tool than zebras because it shows you the entire exposure distribution at once.
But false colour has the same domain problem. Is it operating on the log signal or the display-referred signal? A face that reads as “correct skin exposure” in one domain might read differently in another. And the colour mapping itself varies between manufacturers. ARRI’s false colour isn’t the same as SmallHD’s.
This is the part that trips up experienced operators. You learn one false colour scale, you internalise it, and then you switch to a different monitor or a different camera and the colours don’t mean what they used to. The tool looks the same. The readings have shifted. Nothing in the interface tells you why.
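A toy sketch makes the shift concrete. The band breakpoints below are invented for illustration (they are not any manufacturer's actual scale), but the failure mode is real: the same pixel value lands in a different band depending on which domain the map was designed for.

```python
# Hypothetical false-colour bands: (upper bound, label) pairs.
# Illustrative only; real scales (ARRI, SmallHD) publish their own.
LOG_BANDS = [(0.05, "purple: clipped black"), (0.30, "blue: shadows"),
             (0.38, "green: middle grey"), (0.46, "pink: skin"),
             (0.90, "grey: bright"), (1.01, "red: clipped white")]
BANDS_709 = [(0.05, "purple: clipped black"), (0.35, "blue: shadows"),
             (0.45, "green: middle grey"), (0.62, "pink: skin"),
             (0.90, "grey: bright"), (1.01, "red: clipped white")]

def false_colour(value, bands):
    """Return the first band whose upper bound exceeds the value."""
    for upper, label in bands:
        if value < upper:
            return label
    return bands[-1][1]

v = 0.43  # a face correctly exposed on a Log-C signal
print(false_colour(v, LOG_BANDS))   # reads as skin
print(false_colour(v, BANDS_709))   # reads as middle grey
```

Same pixel, same tool, different verdict: the map, not the image, decides what "pink" means.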
The waveform is better, but not enough
The waveform monitor is the closest thing you have to an honest exposure tool. It plots every pixel’s brightness vertically, spread across the horizontal axis of the frame, so you can see the full distribution of the image at a glance. Shadows at the bottom, highlights at the top, midtones where they fall.
It has two advantages over zebras and false colour: it shows you everything at once rather than just flagging a threshold, and it gives you a sense of spatial distribution. You can see which part of the frame is bright and which is dark.
But the waveform is still domain-dependent. A waveform displaying a log signal has a compressed shape compared to the same scene displayed in 709. Skin that sits at 42% on a Log-C waveform sits at 55–60% on a 709 waveform, and bright areas with stops of headroom left sit high enough on a log trace to look dangerous to a 709-trained eye. If you’re reading log levels with 709 instincts, you’ll stop down to “protect” highlights that were never at risk, and you’ll consistently underexpose by a stop or more. You’ll think you’re being careful. You’ll be throwing away shadow detail.
The encoding curve is why. Log encoding redistributes code values so that shadows get more room and highlights get less. On a linear or display-referred scale, midtones sit near the middle. On a log scale, they sit lower. Same light, different number. The waveform faithfully shows you whatever signal it receives, but it doesn’t tell you which signal that is.
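You can watch the redistribution happen by encoding one-stop steps around middle grey. This sketch assumes the published ARRI LogC3 (EI 800) constants; any log curve shows the same pattern.

```python
import math

def logc3_encode(x):
    """ARRI LogC3 (EI 800) encoding of linear scene reflectance."""
    cut, a, b = 0.010591, 5.555556, 0.052272
    c, d, e, f = 0.247190, 0.385537, 5.367655, 0.092809
    return c * math.log10(a * x + b) + d if x > cut else e * x + f

# Each stop doubles the linear value, but above the toe every stop
# occupies roughly the same ~7% slice of the LogC code range.
for stops in range(-3, 5):
    linear = 0.18 * 2 ** stops
    print(f"{stops:+d} stops: linear {linear:7.4f}  LogC {logc3_encode(linear):.3f}")
```

The top stop (+3 to +4) spans half the linear range but only about 7% of the code range, which is exactly why log highlights sit lower on a waveform than 709 instincts expect.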
Why zone-based tools are different
Zone-based exposure tools, inspired by Ansel Adams’ zone system, divide the exposure range into perceptually even steps and show you where things fall. The key insight is that they’re anchored to the scene, not the encoding.
Adams divided the world into eleven zones, from pure black to pure white, with each zone representing one stop of light. Zone V was middle grey. A face was Zone VI. Bright clouds were Zone VII or VIII. The system was designed around the physics of light, not the characteristics of any particular film stock.
Modern zone-based tools apply the same logic to digital cinema. They map the camera’s sensor data into perceptual zones, actual stops of light above and below middle grey, so that a face always reads as a face, regardless of whether you’re monitoring in Log-C, S-Log3, or Rec. 709. The display domain becomes irrelevant because the tool has already translated the signal into scene-referred stops.
This is a genuine step forward. You’re no longer reading a number that means different things depending on your monitoring chain. You’re reading a measurement that corresponds to the light in front of the lens. If something reads as two stops over middle grey, it’s two stops over middle grey, on any camera, in any log format, through any monitoring path.
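Here's a minimal sketch of the translation, assuming a LogC3 input; the same two steps (invert the curve, count stops from grey) work for any log format whose formula you can invert.

```python
import math

# Published ARRI LogC3 (EI 800) constants for the log segment.
A, B, C, D = 5.555556, 0.052272, 0.247190, 0.385537

def logc3_decode(v):
    """Invert LogC3 back to linear scene reflectance."""
    return (10 ** ((v - D) / C) - B) / A

def stops_over_grey(v):
    """Scene-referred reading: stops above/below middle grey (18%)."""
    return math.log2(logc3_decode(v) / 0.18)

# The code value stops mattering; the answer is in stops of light.
print(f"{stops_over_grey(0.391):+.2f}")   # middle grey: essentially 0
print(f"{stops_over_grey(0.427):+.2f}")   # skin: about half a stop over
```

A zone-based display just runs this translation before drawing anything, so the numbers it shows survive any change of camera, log format, or monitoring path.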
False colour shifts with your monitoring path. Zone-based tools anchor to the scene.
The practical upshot
You don’t need to throw away your waveform or your false colour. They’re useful tools. But you need to know which domain they’re operating in, and you need to compensate for it. If you’re monitoring in log, learn where skin sits on a log waveform instead of guessing from 709 experience. If you’re using false colour, know whose false colour scale you’re reading and what signal it’s mapped to.
And if you have access to a zone-based exposure tool, use it. Not because it’s fancier, but because it removes an entire category of error. You stop asking “what code value is this pixel?” and start asking “how many stops of light is this?” The first question depends on your encoding. The second question depends on the scene. The scene is what you’re trying to photograph.