Five timecode disasters and what they teach you
A production shoots multicam with mixed DF and NDF. The timecode labels diverge at 3.6 seconds per hour. By day two, the conform is unsalvageable.
Timecode failures don’t announce themselves. They start quiet. Everything looks fine on set. The jam-sync reads back correctly. The cameras are rolling, the sound recorder is rolling, and the little numbers on every device match. Then the footage arrives in post, and the editor discovers that nothing lines up.
The cost is always the same: post-production hours. The most expensive hours on any production, spent on work that should have been unnecessary.
Here are five ways it goes wrong.
1. Mixed drop-frame and non-drop-frame on multicam
Three cameras are covering a live event. Two are set to drop-frame timecode at 29.97 fps. The third, rented from a different house and prepped by a different crew, is running non-drop-frame at the same frame rate. All three cameras jam-sync to the same master clock at call time. The timecode values match. Everyone confirms sync. Shooting begins.
For the first 59 seconds, there is no problem. All three cameras produce identical timecode values. The divergence begins at the one-minute mark. Cameras 1 and 2, running drop-frame, skip frame numbers 00 and 01 at the start of minute one. Their count jumps from 00:00:59;29 to 00:01:00;02 (drop-frame timecode is conventionally written with a semicolon before the frames field). Camera 3, running non-drop-frame, counts straight through: 00:00:59:29 to 00:01:00:00.
Camera 3 is now two frame numbers behind the other two cameras. The actual video frames are still in real-time sync because all three cameras are running at the same 29.97 fps rate from the same jam point. But the labels on those frames disagree.
The error compounds at every minute boundary except minutes divisible by ten. By the ten-minute mark, camera 3 is 18 frame numbers behind. By one hour, the gap is 108 frame numbers. That is 3.6 seconds of timecode discrepancy on footage that is, in reality, perfectly synchronous.
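The drop-frame arithmetic above can be checked directly. A minimal sketch of the frame-number gap between a drop-frame and a non-drop-frame counter jammed at the same instant:

```python
def dropped_frames(minutes: int) -> int:
    """Frame numbers skipped by 29.97 fps drop-frame timecode after a given
    number of elapsed minutes: two per minute, except every tenth minute."""
    return 2 * (minutes - minutes // 10)

print(dropped_frames(1))   # 2 frame numbers at the one-minute mark
print(dropped_frames(10))  # 18 by the ten-minute mark
print(dropped_frames(60))  # 108 (about 3.6 seconds) after one hour
```

The gap is exactly the NDF camera's lag in frame numbers, since the actual frames remain in real-time sync.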
When the editor imports all three angles and syncs by timecode, the NLE finds frames with matching timecode values that are actually 2 to 108 frames apart in real time, depending on how far into the shoot they fall. Cuts between camera 1 and camera 3 produce visible jumps. Lip-sync drifts progressively. And the error is non-linear, because it accumulates at minute boundaries and pauses at every tenth minute. Manual correction is not just tedious; it requires a per-minute offset table.
The only reliable recovery is audio waveform sync. The timecode from the non-drop camera is useless for multi-camera alignment against the drop-frame cameras. The entire timecode workflow for that camera is discarded, and the sync is rebuilt from scratch using the audio.
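Waveform sync is, at its core, a cross-correlation search: slide one waveform against the other and keep the shift with the strongest overlap. A toy sketch using brute force over short sample lists (real syncing tools use FFT-based correlation on the amplitude envelopes of full-length clips):

```python
def best_offset(ref, other, max_lag):
    """Return the integer sample shift of `other` relative to `ref` that
    maximizes their overlap (dot product). Positive means `other` lags."""
    def score(lag):
        return sum(ref[i] * other[i + lag]
                   for i in range(len(ref)) if 0 <= i + lag < len(other))
    return max(range(-max_lag, max_lag + 1), key=score)

# `other` is `ref` delayed by one sample, so the best shift is +1:
ref = [0.0, 1.0, 0.5, 0.0]
other = [0.0, 0.0, 1.0, 0.5]
print(best_offset(ref, other, max_lag=2))  # 1
```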
The lesson is verification, not trust. Every device on the shoot must be confirmed to be in the same DF/NDF mode during jam-sync. The frame rate can match, the timecode can read identically, and the counting mode can still be wrong. It only takes one camera.
2. Free-run versus record-run confusion
A documentary shoot. Two cameras and a separate sound recorder. The sound mixer sets up a master timecode generator and jam-syncs the recorder in free-run mode, so timecode runs continuously and reflects time of day. The A camera is also free-run and jam-synced.
The B camera operator, less familiar with the timecode workflow, has their camera set to record-run. In this mode, timecode only advances while the camera is recording. It starts from wherever it was last set, which might be 00:00:00:00, or might be 01:47:32:08 from the last clip of a previous shoot.
The B camera’s timecode now bears no relationship to the timecode on the A camera or the sound recorder. The sound recorder says 14:23:45:12 for a given moment. The B camera says 01:47:32:08 for the same moment. There is no offset that can reconcile them, because the B camera’s timecode only ticks when it is recording. Every time the operator stops and starts the camera, the relationship between the B camera’s timecode and real-world time changes.
It gets worse. If the B camera is power-cycled or the recording is stopped and restarted from a reset state, record-run timecode can produce duplicate values. Two completely different moments in time labelled with the same timecode. This breaks not just sync but the fundamental assumption that timecode is unique within a given source.
The post-production impact is total for the B camera. Multi-camera sync by timecode is impossible. Sound sync by timecode is impossible. Everything from the B camera must be re-synced by audio waveform or by visual cues like clappers.
There is a subtler variant worth knowing. Even when all devices are set to free-run, confusion arises if some are set to “free-run preset” (timecode starts at a manually entered value and then runs from there) while others are set to “free-run time-of-day” (timecode follows the device’s internal clock). If the preset value does not match time of day at the moment of jam, there will be a constant offset between devices. It is correctable, but only if someone documents it.
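Unlike the record-run case, a preset-versus-time-of-day mismatch is a single constant offset, correctable in post if the jam values were logged. A minimal sketch, assuming non-drop-frame at 30 fps and hypothetical jam values:

```python
FPS = 30  # assume non-drop-frame at 30 fps for simplicity

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

# Device jammed with preset 01:00:00:00 when time of day was 09:30:00:00.
# If someone wrote both values down, post can subtract a constant:
offset = tc_to_frames("09:30:00:00") - tc_to_frames("01:00:00:00")
print(offset)  # 918000 frames, i.e. 8.5 hours at 30 fps
```

Without that documentation, the offset must be rediscovered by waveform or clapper, which defeats the point of timecode.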
3. Crystal drift over a 12-hour shoot day
A documentary crew calls at 7 AM. Two cameras and a sound recorder jam-sync at call time. The shoot runs until 7 PM. Twelve hours, run-and-gun style. There are no breaks long enough for a re-jam, or the crew simply forgets.
The cameras use standard quartz crystal oscillators, typical of mid-range cinema cameras. A standard quartz oscillator with 20 parts-per-million accuracy drifts by approximately 0.07 seconds per hour, roughly 2 frames at 30 fps.
Over 12 hours, a single device can drift approximately 0.86 seconds from its jam point. That is 26 frames at 30 fps. But the number that matters is not the absolute drift of any one device. It is the relative drift between two devices, which depends on the difference in their crystal frequencies. If two cameras drift in opposite directions, the worst case is double: about 1.7 seconds, or roughly 52 frames.
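The drift figures fall out of simple arithmetic. A sketch, assuming 20 ppm oscillators and 30 fps:

```python
def drift_seconds(ppm: float, hours: float) -> float:
    """Worst-case drift of one oscillator with the given frequency error."""
    return ppm * 1e-6 * hours * 3600

single = drift_seconds(20, 12)   # about 0.86 s for one device over 12 hours
relative = 2 * single            # about 1.73 s if two devices drift oppositely
frames = relative * 30           # roughly 52 frames at 30 fps
print(round(single, 2), round(relative, 2), round(frames))  # 0.86 1.73 52
```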
In realistic conditions, with cameras at different temperatures (one shooting exteriors in the sun, one in a cool interior), a relative drift of 5 to 10 frames over 12 hours is common. At 30 fps, 5 frames is 167 milliseconds. That is clearly visible lip-sync error. Audiences notice sync problems above about 45 milliseconds.
The failure mode is insidious because it is gradual. Early footage from the morning syncs perfectly. It was just jam-synced. By mid-afternoon, dialogue is noticeably out of sync with picture. By late afternoon, it is worse. The drift is approximately linear, which means it is theoretically correctable with a time-stretch applied per clip. But detecting the drift rate, calculating the correction, and applying it to every clip in the timeline is hours of forensic work that nobody budgeted for.
The mitigation is well-established. Re-jam at every break. The industry standard is at least twice per day: once at lunch, once at any other extended pause. Better still, use temperature-compensated crystal oscillators (TCXOs), which are standard in dedicated timecode generators like the Tentacle Sync E or Ambient ACN-CL. These devices drift less than 1 frame per 24 hours. A few hundred dollars per unit eliminates the crystal drift problem entirely.
4. Wrong start-of-day timecode and midnight rollover
A broadcast production sets the master timecode generator to 01:00:00:00, which is a common convention. The hour of headroom below the programme start provides space for pre-roll, bars, and tone. But someone enters 10:00:00:00 instead of 01:00:00:00. Or the generator was not reset from a previous show and still reads 23:58:00:00.
If the master timecode is wrong, every device jam-synced to it inherits the error. All footage is recorded with incorrect time-of-day references. When this footage arrives at the post facility, the timecode values do not match the production logs, the script supervisor’s notes, or the sound reports. Any workflow that relies on timecode to correlate picture and sound, or to locate specific takes, is broken at the source.
The midnight rollover variant is more subtle and more destructive. A multi-day shoot uses time-of-day timecode without resetting at midnight. Footage shot at 11:55 PM on day one carries timecode approaching 23:59:59:29. Footage shot five minutes later at 12:05 AM on day two starts at 00:05:00:00. The timecode has wrapped past midnight.
EDLs and conform tools assume timecode is monotonically increasing within a programme. When timecode wraps from 23:59:59:29 to 00:00:00:00, some tools interpret this as a backward jump. Depending on the software, the result is rejected edits, incorrect relinking, or outright crashes. The midnight wrap is documented often enough in broadcast engineering practice to count as a well-known pitfall.
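Unwrapping the rollover is mechanically simple once you know it happened: restore monotonic order by adding a day's worth of frames at each backward jump. A sketch, assuming non-drop-frame at 30 fps:

```python
FPS = 30  # assume non-drop-frame at 30 fps

def tc_to_frames(tc: str) -> int:
    """Convert HH:MM:SS:FF timecode to an absolute frame count."""
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def unwrap(frame_counts: list[int]) -> list[int]:
    """Make a chronologically ordered frame-count list monotonic by adding
    24 hours of frames whenever the raw timecode jumps backward."""
    day = 24 * 3600 * FPS
    out, offset, prev = [], 0, None
    for cur in frame_counts:
        if prev is not None and cur < prev:
            offset += day  # crossed midnight
        out.append(cur + offset)
        prev = cur
    return out

clips = [tc_to_frames(t) for t in ("23:55:00:00", "23:59:00:00", "00:05:00:00")]
print(unwrap(clips)[-1] > unwrap(clips)[0])  # True: order restored
```

The catch is that this only works when clips carry reliable shooting-order metadata; without it, a wrapped timeline is ambiguous.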
The conform failure is where the cost lands. The online editor receives an EDL or AAF referencing timecode values that do not match the source media. Every clip fails to relink. The conform must be done manually, which on a complex programme can take days. The audio post house receives split tracks with timecode that does not match picture. Every take must be synced by waveform or clapper.
If the wrong start timecode happens to overlap with footage from a different reel or day, timecode values become ambiguous. A timecode of 10:23:15:07 could refer to two completely different frames from two different recordings, and no automated tool can resolve the ambiguity.
Prevention is simple in principle: verify the master timecode generator’s output before jamming any devices. Use a standardised start-of-day convention and document it. For multi-day shoots, use date-stamped user bits or embed date information in iXML metadata to disambiguate days.
5. LTC bleed into production audio
A production uses LTC (Linear Timecode) fed via audio cable to cameras and recorders. The LTC signal is a biphase mark-coded audio signal whose frequency shifts between roughly 1,200 and 2,400 Hz, and it runs through audio infrastructure alongside production sound.
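The 1,200 to 2,400 Hz figure follows from the LTC format itself: per SMPTE 12M, each frame carries an 80-bit word encoded as biphase mark, so at 30 fps the fundamental sits at 1,200 Hz for zero bits and 2,400 Hz for one bits. The arithmetic:

```python
BITS_PER_FRAME = 80              # SMPTE 12M LTC word length
fps = 30

bit_rate = BITS_PER_FRAME * fps  # 2400 bits per second
f_zero = bit_rate // 2           # 1200 Hz: one transition per bit cell
f_one = bit_rate                 # 2400 Hz: an extra mid-cell transition
print(f_zero, f_one)             # 1200 2400
```

At 25 or 24 fps the band shifts slightly lower, but it still lands squarely in the speech range.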
The bleed can happen in several ways. LTC recorded on a track of the production sound recorder crosstalks into adjacent audio tracks. A timecode generator connected via a splitter cable feeds both a timecode input and an audio input on a camera, with insufficient isolation between channels. An LTC cable runs in the same bundle as microphone cables, inductively coupling the timecode signal into the mic lines. Or a camera with a combined mic/timecode input on a 3.5mm jack gets crosstalk between the timecode and audio channels inside the camera body itself.
The sound of LTC bleed is distinctive and immediately recognizable: a rapid, high-pitched ticking or buzzing, like a data modem compressed into the middle of the frequency spectrum. It is present continuously for the entire duration of recording. It does not come and go. It does not change character. It is relentless.
Here is why it is nearly impossible to fix. The LTC frequency range, 1,200 to 2,400 Hz, sits directly on top of the fundamental frequencies and harmonics of human speech. That overlap is no accident of physics: LTC was designed to travel through ordinary audio equipment, which performs best in exactly the band where speech lives. Any filter aggressive enough to remove the timecode signal will also remove or severely damage the dialogue it overlaps.
LTC is not a steady-state tone. It is a frequency-modulated signal whose frequency shifts constantly as the encoded data changes. A standard notch filter, which removes a narrow frequency band, cannot track it. You would need an adaptive filter that follows the signal’s modulation, and even then the overlap with speech harmonics makes clean separation extremely difficult.
Modern machine-learning noise reduction tools like iZotope RX and Cedar can sometimes reduce the bleed, but rarely eliminate it completely without introducing artifacts. The processing trades one kind of damage for another: less timecode buzz, more metallic or hollow-sounding dialogue.
The prevention is entirely about signal routing. Record LTC on a guard track, with at least one empty track between the timecode track and the nearest production audio track. Keep LTC cable levels at approximately -10 dBu (-20 dBFS), high enough to decode reliably but low enough to minimize crosstalk energy. Never run timecode cables in the same bundle as microphone cables. Use dedicated timecode inputs rather than splitter cables. And where possible, avoid recording LTC on production audio tracks entirely. Modern wireless timecode devices can write timecode directly to camera metadata without passing through the audio path.
The common thread
Every one of these failures shares a root cause. Timecode is a system of assumptions. It assumes all devices agree on the frame rate. It assumes they agree on the counting mode. It assumes all clocks are synchronized. It assumes the starting value is correct. It assumes the signal path is clean.
When any of these assumptions is violated (and in the field, under pressure, with rented equipment and mixed crews, they are violated constantly), the result is not graceful degradation. It is binary failure. The timecode either matches and sync works, or it does not and you are rebuilding from scratch.
The unifying lesson is that timecode problems are verification problems. Every disaster on this list could have been prevented by a check that takes less than thirty seconds: confirm DF/NDF mode, confirm free-run, re-jam at lunch, verify the start value, listen to the audio tracks. The thirty seconds you skip on set become thirty hours in post.
References
- Alister Chapman, Notes on Timecode and Timecode Sync for Cinematographers
- Frame.io, Timecode and Frame Rates: Everything You Need to Know
- Frame.io, Video Post-Production Workflow Guide: Timecode
- Tentacle Sync, Sync E specifications
- B&H Photo, Timecode versus Sync: How They Differ and Why it Matters
- Production Expert, Timecode Part 4: Practical Applications
- Production Expert, Why Does Timecode Not Start At Zero?
- JW Sound Group, Audio Bleeding: Timecode Systems Ultrasync One Splitter Cable
- Tentacle Sync Forum, Audio Bleed on Camera Media
- Creative COW, Using timecode, LTC, SMPTE on a film set