Everything I wish someone had told me about timecode
Timecode is just a number on a clock. It's also the single most important number in post-production, and the source of some of the most expensive mistakes you can make.
Timecode is hours, minutes, seconds, frames. Four numbers separated by colons. It tells you where you are in a recording. Simple.
Except it’s also how editorial syncs picture to sound. How the conform reconnects online media to the offline cut. How VFX shots get matched back to camera originals. How the mix lines up dialogue, foley, and music. It’s the number that holds the entire post-production pipeline together, and when it’s wrong, even by one frame, everything downstream shifts.
The clock that isn’t a clock
Timecode looks like a clock but it doesn’t measure time. It counts frames. At 24fps, timecode advances 24 units per second. At 25fps, 25 units. The “seconds” in timecode aren’t real seconds; they’re groups of frames.
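Because timecode is just a frame count in disguise, converting between the display form and an absolute frame number is plain base arithmetic. A minimal sketch at an integer frame rate (the function names are mine, not from any particular tool):

```python
def tc_to_frames(tc: str, fps: int) -> int:
    """'HH:MM:SS:FF' -> absolute frame count at an integer fps."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int) -> str:
    """Absolute frame count -> 'HH:MM:SS:FF' at an integer fps."""
    ff = frames % fps
    total_seconds = frames // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = total_seconds // 3600
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(tc_to_frames("01:00:00:00", 24))  # 86400: one hour at 24fps
print(frames_to_tc(86399, 24))          # 00:59:59:23: one frame earlier
```

Note that the "seconds" here are frame groups, exactly as described above; nothing in this arithmetic touches real time.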
This is fine until you hit drop-frame timecode at 29.97fps, where the frame count periodically skips numbers to keep the timecode display roughly aligned with wall-clock time. Drop-frame doesn’t drop frames. It drops frame numbers. The footage is continuous. The counter just skips ahead occasionally.
If you’ve ever had a one-frame sync drift that accumulated over the length of a programme, it was probably a drop-frame/non-drop-frame mismatch somewhere in the chain.
Drop-frame vs non-drop-frame: the expensive difference
Here’s the arithmetic. NTSC video runs at 29.97 frames per second, not 30. If your timecode counts exactly 30 frames per second (non-drop-frame), then after one minute of real time only 29.97 × 60 = 1798.2 frames have played, not the 1800 the counter needs to reach 00:01:00:00. Your timecode display is 1.8 frames behind the wall clock. After ten minutes, it's 18 frames behind. After an hour, 108 frames behind, nearly four seconds of drift between your timecode display and wall-clock time.
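That drift can be checked in a few lines. A back-of-envelope sketch (the names are mine):

```python
NTSC_FPS = 30000 / 1001  # exactly 29.97002997... fps

def ndf_drift_frames(wall_clock_seconds: float) -> float:
    """Frames by which an NDF counter lags real time after the given duration."""
    frames_played = NTSC_FPS * wall_clock_seconds
    frames_needed = 30 * wall_clock_seconds  # what the counter needs to match the clock
    return frames_needed - frames_played

print(ndf_drift_frames(60))    # ~1.8 frames after one minute
print(ndf_drift_frames(3600))  # ~108 frames (~3.6 s) after an hour
```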
Drop-frame timecode compensates by skipping frame numbers 00 and 01 at the start of each minute, except every tenth minute. It’s an ugly hack, and the semicolons in the display (01:00:00;00 vs 01:00:00:00) are the only visual cue that you’re looking at drop-frame. But it keeps the timecode aligned with the clock, which matters when you’re delivering to broadcast with strict programme durations.
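The skip rule can be written out as a frame-count-to-drop-frame converter. This is my own minimal implementation of the rule described above, not code from any particular NLE:

```python
def frames_to_df_tc(frame: int) -> str:
    """Actual frame count -> 29.97 drop-frame timecode string."""
    fps = 30                                  # rate the counter pretends to run at
    frames_per_min = 60 * fps - 2             # 1798 real frames in a dropped minute
    frames_per_10min = 1800 + 9 * frames_per_min  # 17982: minute 0 keeps all 1800

    blocks, rem = divmod(frame, frames_per_10min)
    if rem < 1800:
        dropped_in_block = 0                  # first minute of the block: no skip
    else:
        dropped_in_block = 2 * (1 + (rem - 1800) // frames_per_min)
    frame += 18 * blocks + dropped_in_block   # renumber past the skipped numbers

    ff = frame % fps
    ss = (frame // fps) % 60
    mm = (frame // (fps * 60)) % 60
    hh = frame // (fps * 3600)
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(frames_to_df_tc(1800))    # 00:01:00;02 (numbers 00 and 01 were skipped)
print(frames_to_df_tc(107892))  # 01:00:00;00 (one hour of real frames)
```

Note the semicolon in the output, the visual cue mentioned above.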
The danger: if you conform a show with drop-frame timecode in a sequence set to non-drop-frame (or vice versa), every cut after the first minute boundary will be off. Not by much at first. By the end of a 48-minute episode, you could be looking at cuts that are two to three seconds from where they should be. Re-conforming an entire episode because of a drop-frame mismatch is a real thing that happens to real people with real deadlines.
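To see where the "two to three seconds" comes from, you can count the skipped frame numbers directly. Another back-of-envelope sketch (the helper name is mine):

```python
def dropped_numbers(minutes: int) -> int:
    """Frame numbers skipped by 29.97 DF timecode in the first `minutes` minutes."""
    # Two numbers are skipped at the start of every minute except every tenth.
    return 2 * sum(1 for m in range(1, minutes) if m % 10 != 0)

slip = dropped_numbers(48)
print(slip, "frames, about", round(slip / 29.97, 1), "seconds")  # 86 frames, ~2.9 s
```

That 86-frame discrepancy is the size of the error a DF/NDF-confused conform accumulates by the end of the episode.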
When two clocks disagree
The camera has a timecode clock. The sound recorder has a timecode clock. They’re supposed to be synchronised via jam-sync or a shared timecode generator. But clocks drift.
Jam-sync works by connecting the recorder to the camera (or both to a master clock) at the start of the day and telling the recorder to match. From that moment on, the recorder’s internal crystal oscillator runs independently. A good crystal drifts about half a frame per eight hours. A bad one can drift a frame per hour. You won’t see the drift during the take. You’ll see it in the edit when the sync slowly slides across a long interview.
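Those drift figures translate into oscillator accuracy in parts per million. The ppm values below are my own estimates back-solved from the figures above, not manufacturer specs:

```python
def drift_frames(ppm: float, hours: float, fps: float) -> float:
    """Frames of sync drift a `ppm`-accurate clock accumulates in `hours`."""
    seconds = hours * 3600
    return seconds * (ppm / 1_000_000) * fps

# "Half a frame per eight hours" at 24 fps implies roughly a 0.7 ppm crystal:
print(drift_frames(0.7, 8, 24))  # ~0.48 frames
# A sloppy 12 ppm clock drifts about a frame per hour:
print(drift_frames(12, 1, 24))   # ~1.04 frames
```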
Genlock is more robust. It locks the devices to a continuous shared reference signal rather than a one-time sync. On a multicam studio shoot, everything is genlocked and drift isn’t a concern. On a documentary location shoot, you jam-sync at call time and hope your crystals are honest.
The practical advice: re-jam at lunch. Re-jam after every battery swap. Re-jam any time a device power-cycles. It takes fifteen seconds and saves hours of manual sync repair.
Free-run vs record-run
Free-run timecode runs continuously; the counter advances whether the camera is recording or not. Time-of-day timecode is the most common free-run mode: you jam the camera to a clock at call time and the timecode tracks wall time from there.
Record-run timecode only advances while recording. Each clip starts from where the last one stopped, so you get a continuous, gap-free counter across the day’s media.
Free-run is better for multicam sync because every camera shares the same time reference, so matching cameras to sound is trivial when they all agree on what time it is. Record-run is better for single-camera workflows where you want clip durations to be obvious and don’t need to sync against external sources.
The mistake people make is mixing the two. Camera A is free-run, camera B is record-run, and post gets an EDL where the timecodes look reasonable but don’t actually correspond to the same moments in time. This is particularly insidious because the conform will appear to succeed. It’ll find frames at those timecodes. They just won’t be the right frames.
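One way to catch the mismatch before the conform is a continuity check on the day's media. A rough heuristic sketch with assumed inputs, not a real ingest tool: in record-run, every clip starts exactly where the previous one stopped, while free-run leaves gaps between takes.

```python
def looks_like_record_run(clips: list[tuple[int, int]]) -> bool:
    """clips: [(start_frame, duration_frames), ...] in shooting order."""
    return all(
        clips[i][0] == clips[i - 1][0] + clips[i - 1][1]
        for i in range(1, len(clips))
    )

# Butt-joined counters: record-run.
print(looks_like_record_run([(0, 100), (100, 250), (350, 40)]))  # True
# Gaps between takes: free-run / time-of-day.
print(looks_like_record_run([(0, 100), (500, 250), (900, 40)]))  # False
```

If one camera's media passes this test and another's doesn't, you've found your mixed-mode shoot before it becomes a conform problem.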
LTC vs VITC: two roads, same destination
Linear Timecode (LTC) is an audio signal, a frequency-shift-keyed bitstream that encodes timecode as sound. You can record it on an audio track, send it down a cable, even hear it (it sounds like a high-pitched warble). LTC is what jam-sync uses. It’s robust, simple, and works at any speed except zero. When the tape stops, the audio stops, and you lose your position.
Vertical Interval Timecode (VITC) is embedded in the video signal itself, written into the blanking interval between fields. It’s readable at standstill and at slow speeds, which makes it the right choice for frame-accurate shuttle and jog. But it’s only available in baseband video; once you’re working with file-based media, VITC doesn’t apply.
In modern file-based workflows, timecode is metadata rather than a signal. It’s a number in the file header, not a waveform or a blanking-interval pattern. But the LTC/VITC distinction still matters when you’re on set, because the physical jam-sync cable carries LTC, and the camera’s internal timecode track is the descendant of VITC. If you’re troubleshooting a sync problem, knowing which path the timecode took (audio channel or metadata header) tells you where to look for the error.
Practical sync troubleshooting
When sync goes wrong, it’s almost always one of four things:
Frame rate mismatch. The camera shot at 23.976 but the project is set to 24.000. The rates differ by a factor of 1000/1001, so you slip roughly one frame for every thousand frames played. Invisible at first, obvious by act three.
Drop-frame / non-drop-frame confusion. The source is DF, the timeline is NDF (or the reverse). Cuts slide progressively further from where they should be, starting clean and ending wrong.
Jam-sync drift. Camera and recorder were jammed at call time but drifted over the day. Sync is perfect for the morning’s takes and half a frame off by wrap. The fix is to nudge the sound, not to re-jam after the fact.
Embedded vs external timecode disagreement. The file header says one timecode, the audio-track LTC says another, and your NLE is reading the wrong one. This happens more often than you’d think with dual-system recording, and the solution is knowing which timecode source your NLE prefers and making sure it matches what was set on the recorder.
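For the first failure mode, the slip rate is predictable, which makes it easy to recognise once you measure the sync error at two points in the timeline. A quick sketch (the function name is mine):

```python
def rate_mismatch_slip(true_fps: float, assumed_fps: float, hours: float) -> float:
    """Frames of cumulative sync slip after `hours` of programme time."""
    return (assumed_fps - true_fps) * hours * 3600

# 23.976 material conformed in a 24.000 project:
print(rate_mismatch_slip(24000 / 1001, 24.0, 1.0))  # ~86.3 frames after an hour
```

If the error you're seeing grows at about that rate, suspect the project settings; if it's constant across the reel, suspect a jam offset instead.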
The instinct when sync is wrong is to start nudging things by hand. Resist it. Find the systematic error first. If it’s a frame-rate mismatch, fix the project settings. If it’s a DF/NDF problem, re-conform. If it’s drift, apply a single offset to the entire reel. Manual nudges fix the symptom. Understanding the timecode fixes the cause.