Nobody "delogs" footage anymore (and what we do instead)
We used to drag a LUT onto the timeline and call it done. Scene-referred colour management changed everything. Here's what actually happens now, and why your old quicktime-gamma-fix.cube is a museum piece.
There was a time, not that long ago, when dealing with log footage meant finding the right LUT, dragging it onto your timeline, and hoping for the best. If the image still looked wrong, you’d try a different LUT. If QuickTime Player showed it differently from your NLE, you’d add quicktime-gamma-fix.cube to the chain and move on.
We called it “delogging.” As if the log encoding was a mistake to be corrected rather than a deliberate engineering decision.
What log actually is
A cinema sensor captures somewhere around 14 stops of dynamic range. If you encoded that linearly, mapping physical light values straight to code values, almost all your bits would go to the highlights. The bottom six stops of shadow detail would be crammed into a handful of values, with visible banding. It's the same perceptual problem that made CRT gamma a lucky accident for 8-bit video.
Log encoding solves this by distributing code values more evenly across the perceptual range. Shadows get enough bits. Highlights get enough bits. The midtones sit where you’d expect. The footage looks flat because it’s not meant to be viewed directly. It’s a storage format optimised for grading, not for looking at.
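The bit-allocation argument is easy to make concrete. Here's a toy 10-bit comparison of a pure linear mapping against a pure log2 mapping over an assumed 14-stop range; the numbers are illustrative, not any camera's actual transfer function:

```python
# Sketch: how many 10-bit code values each stop of scene light gets
# under a linear encoding versus a simple log2 encoding.
# Assumes a 14-stop range; both curves are illustrative toys.

STOPS = 14          # assumed sensor dynamic range
CODES = 1024        # 10-bit container

def linear_codes_for_stop(stop):
    """Code values spanning [2^stop, 2^(stop+1)) under a linear mapping.

    Stop 0 is the darkest stop; stop 13 ends at clip.
    """
    top = 2 ** STOPS                       # linear value at clip
    lo = (2 ** stop) / top * (CODES - 1)
    hi = (2 ** (stop + 1)) / top * (CODES - 1)
    return hi - lo

def log_codes_for_stop(stop):
    """Same interval under a pure log2 encoding: every stop equal."""
    return (CODES - 1) / STOPS

for s in (0, 6, 13):
    print(f"stop {s:2d}: linear {linear_codes_for_stop(s):8.2f} codes, "
          f"log {log_codes_for_stop(s):6.2f} codes")
```

Run it and the linear column makes the problem obvious: the top stop alone takes half the code values, while the bottom six stops together get fewer than four. The log column hands every stop the same budget.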
From “delog” to scene-referred
The old model was display-referred: get the footage to look right on this monitor, right now. The modern model is scene-referred: represent the original scene light, and apply the viewing transform as a separate, swappable step.
ACES formalised this. OCIO made it configurable. The key shift: the log curve isn’t something you “undo.” It’s one of many transforms in a pipeline that traces from camera sensor through working space through display. Each step is explicit. Each step is reversible. None of them is “fix the footage.”
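The shape of that pipeline can be sketched in a few lines. The functions below are simplified stand-ins, not ACES or OCIO implementations: the decode uses constants resembling ARRI's published LogC3 (EI 800) curve, the working-space step is a placeholder, and the two display transforms are the standard BT.709 OETF and sRGB encode. The point is structural, that each step is explicit and the display step swaps without touching anything upstream:

```python
# Sketch of the scene-referred model: camera log decode, working-space
# conversion, and display transform as separate, named, swappable steps.
# Simplified stand-ins, not ACES or OCIO implementations.

def logc3_like_decode(code):
    """Camera log -> scene-linear (LogC3-style constants, above-cut branch only)."""
    return (10 ** ((code - 0.385537) / 0.247190) - 0.052272) / 5.555556

def identity_working_space(lin):
    """Placeholder for the working-space conversion step."""
    return lin

def rec709_oetf(lin):
    """One display transform: BT.709 OETF."""
    lin = max(0.0, min(1.0, lin))
    return 4.5 * lin if lin < 0.018 else 1.099 * lin ** 0.45 - 0.099

def srgb_encode(lin):
    """An alternative display transform: sRGB encode."""
    lin = max(0.0, min(1.0, lin))
    return 12.92 * lin if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055

def view(code, display_transform):
    """Every step explicit; swapping the display changes nothing upstream."""
    scene = logc3_like_decode(code)
    working = identity_working_space(scene)
    return display_transform(working)

# Same scene-referred pixel, two viewing transforms:
print(view(0.5, rec709_oetf), view(0.5, srgb_encode))
```

None of these steps "fixes" the footage; each one interprets it, and each is replaceable on its own.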
Why Log-C3 ≠ Log-C4 (and why it matters)
ARRI updated their log curve from Log-C3 to Log-C4. Different cut point, different slope parameters, a reworked parametric equation. If your pipeline assumes Log-C3 and the footage is Log-C4, everything shifts by a fraction of a stop. Not enough to look obviously broken. Enough to throw off your exposure evaluation.
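You can measure what a curve mismatch does to exposure. In the sketch below, curve A uses ARRI's published LogC3 (EI 800) constants; curve B is a hypothetical variant with nudged constants standing in for "the wrong curve was assumed" (they are not the real Log-C4 parameters). Encode mid-grey with A, decode with B, and read off the error in stops:

```python
# Encode scene light with one log curve, decode with another, and
# measure the exposure error in stops. Curve A: published LogC3 (EI 800)
# constants, above-cut branch. Curve B: hypothetical nudged constants,
# NOT the real Log-C4 parameters.

import math

def logc3_encode(x, a=5.555556, b=0.052272, c=0.247190, d=0.385537):
    return c * math.log10(a * x + b) + d

def logc3_decode(t, a=5.555556, b=0.052272, c=0.247190, d=0.385537):
    return (10 ** ((t - d) / c) - b) / a

def curveB_decode(t):
    """Hypothetical 'curve B': same shape, nudged constants."""
    return logc3_decode(t, a=5.0, d=0.42)

x = 0.18                       # mid-grey, scene-linear
code = logc3_encode(x)         # the footage is really curve A...
wrong = curveB_decode(code)    # ...but the pipeline assumes curve B

error_stops = math.log2(wrong / x)
print(f"exposure error: {error_stops:+.2f} stops")
```

With these toy constants the error lands at a fraction of a stop: too small to scream at you from the monitor, large enough to poison a false-colour exposure check.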
This is exactly the kind of error that the old “delog and hope” workflow couldn’t catch. In a scene-referred pipeline, the interpretation chain resolves which curve to use based on camera metadata, and if it has to guess, it tells you.
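A minimal sketch of that resolution step, assuming a simple lookup keyed on clip metadata (the field names here are hypothetical, not any vendor's actual metadata schema; the camera-to-curve pairings are the publicly documented ones):

```python
# Sketch: resolve the transfer curve from clip metadata, and refuse
# to guess silently. Metadata field names are hypothetical; the
# camera/curve pairings match ARRI's public documentation.

import warnings

CURVE_BY_METADATA = {
    ("ARRI", "ALEXA Mini"): "LogC3",
    ("ARRI", "ALEXA 35"): "LogC4",
}

def resolve_log_curve(metadata):
    """Return the curve named by the clip metadata, or guess loudly."""
    key = (metadata.get("manufacturer"), metadata.get("model"))
    if key in CURVE_BY_METADATA:
        return CURVE_BY_METADATA[key]
    # No match: fall back, but say so. The old workflow failed silently here.
    warnings.warn(f"no curve registered for {key}; assuming LogC3")
    return "LogC3"

print(resolve_log_curve({"manufacturer": "ARRI", "model": "ALEXA 35"}))
```

The warning is the whole point. A guess you can see is an inconvenience; a guess you can't see is a half-stop exposure error shipped to the colourist.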
The old LUT is a museum piece
Your quicktime-gamma-fix.cube solved a real problem at the time. QuickTime and professional NLEs handled gamma differently, and the LUT papered over the disagreement. But it was a band-aid on a display-referred workflow, a transform that assumed a specific viewing condition and broke if anything in the chain changed.
Modern colour management doesn’t need it because the viewing transform is a documented, versioned, swappable step in the pipeline, not a mystery LUT taped to the side of the timeline.
The footage isn’t broken. The log isn’t a problem to fix. It’s an encoding to interpret, and your tools should be clear about how they’re interpreting it.