Why the frame is green
You decoded a 10-bit ProRes frame and got a wall of green. The file isn't broken. Your texture format is.
You open a 10-bit ProRes file in your custom playback tool and the viewer fills with green. Not a little green tint. Full, saturated, Kermit green. The file plays fine in QuickTime. Resolve shows it correctly. Your tool shows a green rectangle with maybe a faint ghost of detail if you squint.
This is one of the most common bugs in video decoder integration, and it has a single, specific cause. The pixel values are being normalised against the wrong range.
What you’re actually looking at
That green frame is not random. It’s your actual image data, but every channel is being read as a value very close to zero.
A 10-bit video signal uses code values from 0 to 1023. When you upload those values to a GPU texture as 16-bit unsigned normalised (R16Unorm in Metal, VK_FORMAT_R16_UNORM in Vulkan, DXGI_FORMAT_R16_UNORM in DirectX), the GPU normalises against the full 16-bit range: 0 to 65535. Your brightest pixel, code value 1023, becomes 1023 / 65535, which is approximately 0.0156. Your shader sees a value of 0.016 where it expected 1.0.
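The arithmetic is worth seeing once. A minimal sketch in plain Swift that reproduces the GPU's unorm conversion by hand (no API calls, just the division the hardware performs):

```swift
// What an R16Unorm sample returns for bottom-packed 10-bit code values.
// The GPU divides by 65535 no matter how many of the bits are real.
let peakWhite: Double = 1023     // 10-bit peak white
let neutralChroma: Double = 512  // 10-bit neutral chroma

let sampledY = peakWhite / 65535.0      // ≈ 0.0156, where the shader expected 1.0
let sampledCb = neutralChroma / 65535.0 // ≈ 0.0078, where the shader expected 0.5
print(sampledY, sampledCb)
```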
The result: luma is nearly zero and both chroma channels are nearly zero. And zero chroma in YCbCr space has a very specific colour.
Why zero means green
Video is almost never stored as RGB. It’s stored as YCbCr, where Y is luma (brightness) and Cb/Cr are colour difference channels. The neutral point for chroma is not zero. In a normalised 0.0 to 1.0 range, neutral chroma sits at 0.5 (corresponding to code value 512 in 10-bit, or 128 in 8-bit). A Cb of 0.5 and a Cr of 0.5 means “no colour.” Grey. Neutral.
When both Cb and Cr are at 0.0 instead of 0.5, the YCbCr-to-RGB conversion matrix produces a strong green bias. The BT.709 conversion matrix defines luma as 0.2126R + 0.7152G + 0.0722B. Green dominates the luma coefficient because human vision is most sensitive to green. When you invert that relationship with zero chroma offset, the math pushes everything toward the green primary.
Specifically, with Y near zero and Cb/Cr at their minimum instead of their midpoint, the conversion produces negative R and B values (which clamp to zero) and a small positive G value. Green channel survives. Red and blue do not. The result is a green frame.
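You can watch the green fall out of the matrix in a few lines. This is a sketch using the standard BT.709 full-range conversion (the video-range variant adds scaling but fails the same way), fed the values the broken pipeline actually delivers:

```swift
// BT.709 YCbCr -> RGB, full-range matrix, with display clamping.
func bt709ToRGB(y: Double, cb: Double, cr: Double) -> (r: Double, g: Double, b: Double) {
    func clamp(_ v: Double) -> Double { min(max(v, 0), 1) }
    let r = y + 1.5748 * (cr - 0.5)
    let g = y - 0.1873 * (cb - 0.5) - 0.4681 * (cr - 0.5)
    let b = y + 1.8556 * (cb - 0.5)
    return (clamp(r), clamp(g), clamp(b))
}

// The values the R16Unorm bug delivers: everything crushed toward zero.
let broken = bt709ToRGB(y: 0.016, cb: 0.008, cr: 0.008)
// broken ≈ (r: 0.0, g: 0.34, b: 0.0): red and blue clamp, green survives.
```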
This is not a coincidence or a quirk of a particular implementation. It falls directly out of the matrix maths. Any YCbCr system built on BT.601, BT.709, or BT.2020 coefficients will produce green when chroma channels are biased toward zero. The green primary always carries the most luma weight, so it’s the last channel standing when everything else clips.
The R16Unorm trap
The core problem is a mismatch between the data’s actual bit depth and the GPU texture format’s normalisation range.
When a decoder hands you a 10-bit Y plane, the values sit in the range 0 to 1023. But they’re stored in 16-bit words, typically left-shifted to the top of the 16-bit range (values 0 to 65472 in steps of 64) or packed at the bottom (values 0 to 1023 with the upper 6 bits zero).
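In Swift terms, the two conventions for the same peak-white sample look like this:

```swift
let code: UInt16 = 1023       // 10-bit peak white

let bottomPacked = code       // 0x03FF: upper 6 bits zero
let topPacked = code << 6     // 0xFFC0 = 65472: values in steps of 64
```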
If the decoder packs values at the bottom of the word and you use R16Unorm, the GPU divides every value by 65535. Your 10 bits of real data occupy less than 2% of the normalised range. The image is there, technically, but it’s crushed into a sliver near zero.
The fix depends on where the bits sit. For high-bit-packed data, the correct format is one that knows only the top 10 bits are significant: Vulkan's VK_FORMAT_R10X6_UNORM_PACK16. It normalises the 10-bit component against 1023 instead of 65535, so your peak white becomes 1.0, as it should. Note the direction: R10X6 expects the significant bits at the top of the word, the same layout as P010.

If your API doesn't offer an R10X6 format (Metal has no standalone equivalent), there are two workable fixes for bottom-packed data. Multiply by 65535.0 / 1023.0 (approximately 64.0) in your shader after sampling. Or left-shift the data to the top of the 16-bit word before upload, which also works: a value of 1023 becomes 65472, which normalises to 0.9990 under R16Unorm. Close enough for display, though not bit-exact.
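A minimal sketch of both fixes, assuming bottom-packed data; the shader-side multiply is written here as host-side Swift because the arithmetic is identical:

```swift
// Fix 1: rescale after sampling. In an actual shader this is
// sample * (65535.0 / 1023.0); (1023 / 65535) * (65535 / 1023) == 1.0.
func rescale(_ r16UnormSample: Float) -> Float {
    r16UnormSample * (65535.0 / 1023.0)
}

// Fix 2: left-shift to the top of the word before upload, so
// R16Unorm's divide-by-65535 lands within 0.1% of the right answer.
func shiftToHighBits(_ plane: inout [UInt16]) {
    for i in plane.indices {
        plane[i] <<= 6        // 1023 becomes 65472, normalising to 0.9990
    }
}
```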
Why ProRes, DNxHR, and H.265 trigger this
This bug appears disproportionately with ProRes, DNxHR, and H.265 10-bit content because these are the formats most likely to deliver 10-bit planar YCbCr data to your application.
ProRes always decodes to planar YCbCr. The Apple VideoToolbox decoder on macOS hands you a CVPixelBuffer with a pixel format like kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange or kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange. The "10BiPlanar" in the name means 10-bit biplanar: one Y plane plus one interleaved CbCr plane, each using 16-bit words with the 10-bit values in the high bits.
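If you want to confirm what the decoder handed you before picking a texture format, the check is short. A sketch, assuming `pixelBuffer` comes from your VideoToolbox decode callback (the helper name is mine):

```swift
import CoreVideo

// Returns true when the buffer is one of the 10-bit biplanar formats,
// i.e. 16-bit words with the 10-bit values in the high bits.
func isTenBitBiplanar(_ pixelBuffer: CVPixelBuffer) -> Bool {
    switch CVPixelBufferGetPixelFormatType(pixelBuffer) {
    case kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange,
         kCVPixelFormatType_422YpCbCr10BiPlanarVideoRange:
        return true
    default:
        return false
    }
}
```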
DNxHR follows the same pattern: FFmpeg's DNxHR decoder, for example, delivers yuv422p10-style planar buffers, which pack the 10-bit values at the bottom of the word. H.265 10-bit (common in mirrorless camera output, screen recordings, and streaming deliveries) decodes to the same family of 10-bit planar formats.
8-bit content rarely triggers this bug because R8Unorm normalises against 255, and 8-bit data fills the full 0 to 255 range. The normalisation is correct by default. It’s the jump to 10-bit in a 16-bit container that creates the mismatch.
This is also why the bug tends to appear when developers move from 8-bit to 10-bit support. The 8-bit path worked perfectly. The 10-bit path uses the same upload logic, same texture format, and suddenly everything is green.
The partial green variant
Sometimes you don’t get a solid green frame. You get an image that’s recognisable but with a heavy green cast and crushed contrast. This usually means one of three things:
Only the chroma planes are wrong. The Y plane is being normalised correctly (perhaps it’s being handled as a different texture format or the decoder left-shifted it) but the CbCr plane is still being divided by 65535 instead of 1023. Luma looks roughly correct, so you see a dim, low-contrast, monochrome-ish image with a green push because the chroma offset is wrong.
The UV offset is missing. Even with correct normalisation, if your shader doesn't subtract 0.5 from the Cb and Cr channels before the matrix multiply, you get a colour bias. The neutral point for chroma is 0.5 in normalised space, not 0.0. Skipping the offset shifts every pixel's colour. Depending on the content, this can produce a green or magenta tint across the whole frame; the sketch after this list shows the shift on a neutral grey.
Mixed bit depths across planes. Some decoders output the Y plane at one bit depth and chroma at another (or with different packing). If you’re treating all planes identically and they aren’t identical, one set of channels will be wrong while the other is right.
Each of these produces a different-looking image from the same file, and the common thread is that the pixel data is fine. The interpretation is wrong.
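Of the three, the missing UV offset is the easiest to demonstrate. A sketch of a hypothetical shader that forgot the subtraction, fed a neutral mid-grey pixel:

```swift
// BT.709 conversion with the 0.5 chroma offset missing. Note that
// cb and cr are used raw where (cb - 0.5) and (cr - 0.5) belong.
func bt709MissingOffset(y: Double, cb: Double, cr: Double) -> (r: Double, g: Double, b: Double) {
    func clamp(_ v: Double) -> Double { min(max(v, 0), 1) }
    let r = y + 1.5748 * cr
    let g = y - 0.1873 * cb - 0.4681 * cr
    let b = y + 1.8556 * cb
    return (clamp(r), clamp(g), clamp(b))
}

// Neutral mid-grey: Y = 0.5, Cb = Cr = 0.5 ("no colour").
let grey = bt709MissingOffset(y: 0.5, cb: 0.5, cr: 0.5)
// grey ≈ (r: 1.0, g: 0.17, b: 1.0): neutral grey renders as saturated magenta.
```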
How to diagnose it
If you’re seeing green, work through this checklist:
Check your texture format. Is the GPU texture format’s normalisation range appropriate for the actual bit depth of the data? For 10-bit data in 16-bit words, R16Unorm is almost certainly wrong unless the data is left-shifted to fill the 16-bit range.
Check the decoder's packing convention. Does your decoder put 10-bit values in the high bits or the low bits of the 16-bit word? VideoToolbox on Apple platforms uses high-bit packing. FFmpeg's p010 format uses high-bit packing, but its yuv420p10 family packs at the bottom of the word. Other decoders may differ. Read the documentation, then verify by inspecting the raw buffer values for a known test pattern; a sketch of that check follows this checklist.
Check your normalisation. After the GPU normalises the texture sample, does a peak-white pixel produce a value near 1.0 in your shader? If it produces 0.016, you have the R16Unorm problem. Multiply by 65535.0 / 1023.0, or switch to a format that understands 10-bit data.
Check your UV offset. After normalisation, are you subtracting 0.5 from Cb and Cr before applying the colour conversion matrix? If not, your neutral axis is wrong and everything will have a colour cast.
Check your matrix coefficients. Make sure the YCbCr-to-RGB matrix matches the source content. BT.709 for HD. BT.2020 for UHD. BT.601 for SD. The wrong matrix won’t produce green, but it will produce subtly wrong colour that’s hard to debug on top of a normalisation issue.
Render a test frame. Upload a known pattern (say, 10-bit values for 75% colour bars) and verify every patch. If your white patch comes out as very dark green, the normalisation is wrong. If the patches are approximately right but hue-shifted, the UV offset or matrix is wrong.
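Here is a sketch of that raw-buffer check for the VideoToolbox path: scan the Y plane of a decoded frame that contains peak white and see where the maximum lands. The helper name is mine, and it assumes a biplanar 16-bit-per-sample layout:

```swift
import CoreVideo

// Report the largest raw Y-plane code value in a decoded frame.
// For a frame containing video-range peak white (code 940 in 10-bit),
// a maximum near 940 means bottom-packed data; a maximum near
// 940 << 6 = 60160 means high-bit-packed data.
func maxLumaCode(in pixelBuffer: CVPixelBuffer) -> UInt16 {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0) else { return 0 }
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)
    let stride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)

    var maxValue: UInt16 = 0
    for row in 0..<height {
        let rowPtr = (base + row * stride).assumingMemoryBound(to: UInt16.self)
        for x in 0..<width {
            maxValue = max(maxValue, rowPtr[x])
        }
    }
    return maxValue
}
```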
The real lesson
The green frame is actually a useful diagnostic. It tells you something very specific: your values are being normalised against a range they don’t fill. Once you know that, the fix is mechanical. Match the texture format to the data’s actual bit depth, or rescale in the shader.
The harder bugs are the ones that produce a plausible-looking image that’s subtly wrong. A frame that’s slightly dim. Colours that are almost right but shifted by a degree or two on the vectorscope. Those errors come from the same family of causes (wrong normalisation, wrong offset, wrong matrix) but at a scale that doesn’t trigger the obvious green alarm.
When your frame is green, the pipeline is telling you exactly where to look.
References
- ITU-R BT.709-6: Parameter values for the HDTV standards for production and international programme exchange
- ITU-R BT.2020: Parameter values for ultra-high definition television systems for production and international programme exchange
- Apple Developer Documentation: CVPixelBuffer pixel format types and biplanar YCbCr formats
- Vulkan Specification 1.3: VK_FORMAT_R10X6_UNORM_PACK16 and related multi-bit packed formats
- Metal Feature Set Tables: Pixel format capabilities for R16Unorm and packed 10-bit formats
- Microsoft DirectX: DXGI_FORMAT enumeration and video format support
- Charles Poynton, Digital Video and HD: Algorithms and Interfaces (2012), chapters on YCbCr encoding and normalisation