The following article was originally published on NoFilmSchool.
Recently, the Society of Motion Picture and Television Engineers (SMPTE) organized a meeting to review the standardization for Ultra High-Definition Television (UHDTV). The need for standards is especially important since shipments of ultra high-def TV sets are expected to reach four million units by 2017.
Before attending the meeting, I reviewed the committee’s thorough report on their findings thus far. One of the more impressive facts is that the range of colors UHDTV can display encompasses nearly “twice the colors perceived by humans or that can be captured by a camera.”
Two standards are actually under development. Known simply as UHDTV1 and UHDTV2, they are most easily distinguished by their frame dimensions: UHDTV1 would have a 4K resolution of 3,840 x 2,160 pixels, whereas UHDTV2 would have a whopping 8K resolution of 7,680 x 4,320 pixels. Both standards would support 10- and 12-bit depth, with chroma subsampling options of 4:4:4, 4:2:2, and 4:2:0; 8-bit depth, interlacing, and fractional framerates would all be discarded. The likely base framerate would be 120 frames per second, due in part to the fact that 120 is evenly divisible by popular framerates such as 24, 30, and 60. At such a high framerate, image flicker would be greatly reduced, since the refresh rate sits well above the “flicker fusion threshold,” the rate at which a flickering light appears steady to the eye.
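The numbers above hang together neatly; a quick back-of-envelope sketch (my own arithmetic, not from the SMPTE report) shows how the two resolutions relate to today's HD and why 120 fps is a convenient common base:

```python
# Pixel counts for 1080p HD and the two proposed UHDTV formats.
HD = 1920 * 1080        # 1080p frame
UHDTV1 = 3840 * 2160    # 4K
UHDTV2 = 7680 * 4320    # 8K

# Each step up quadruples the pixel count.
assert UHDTV1 == 4 * HD
assert UHDTV2 == 16 * HD

# 120 divides evenly by the established integer framerates, so
# 24, 30, and 60 fps material maps cleanly onto a 120 fps base.
for fps in (24, 30, 60):
    assert 120 % fps == 0
```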
This all seems like a great step forward. However, at the meeting I attended, it was clear there are numerous issues that confront the emerging technology. I spoke with John “Pliny” Eremic, an active member of the SMPTE Standards Community who now works at HBO. As former post-production manager at Offhollywood and co-owner of the first two shipping RED cameras, he’s been poised at the cutting edge of the video frontier for some time. Pliny says:
UHD is about more than spatial resolution. The areas where [the Standards Community is] looking to push the image are dynamic range, peak luminance, wider color gamut, temporal resolution meaning framerate, and spatial resolution.
To Pliny, the most important of these is dynamic range, and I tend to agree. Increasing resolution alone does little to improve the image unless those other aspects improve alongside it, a detail consumer TV and camera manufacturers often seem to forget. Pliny goes on:
If you want to display more colors, there are certain colors you can’t hit unless you have a higher peak brightness. If you have higher peak brightness overall, the flicker fusion threshold actually changes. So an image that looks constantly illuminated when you are at 100 nits [a unit of measure for luminance], if you crank it up high enough, suddenly that same image looks flickery. Now you have to increase your refresh rate just to maintain the status quo of appearing constantly illuminated. If you have wider dynamic range on the display you’re going to need more bits to cover it to not get banding in things like skies and gradients. So all these things need to move in unison.
Besides considerations relating to image quality, other issues pertain to the physical cabling that carries the signals. As it stands, a single 6G-SDI cable cannot transport a 4K video signal at 60 frames per second with 12-bit depth and 4:4:4 chroma sampling; even two of them can't. As a stopgap, more cables would need to be added to the pipeline, something SMPTE board member Bill Miller considers unsustainable.
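A rough calculation makes the shortfall concrete. The sketch below is my own back-of-envelope math, ignoring SDI blanking and protocol overhead, using a nominal 6 Gbit/s per 6G-SDI link:

```python
# Uncompressed video payload in gigabits per second.
# 4:4:4 sampling means three full-resolution samples per pixel.
def uncompressed_gbps(width, height, fps, bit_depth, samples_per_pixel=3):
    return width * height * fps * bit_depth * samples_per_pixel / 1e9

rate = uncompressed_gbps(3840, 2160, 60, 12)
print(f"4K/60/12-bit/4:4:4 ≈ {rate:.1f} Gbit/s")  # roughly 17.9 Gbit/s

# One 6G-SDI link carries about 6 Gbit/s; even two together fall short.
assert rate > 2 * 6
```

By this estimate the signal needs roughly three times what a single 6G-SDI link provides, which is why simply stacking cables scales so poorly.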
During his presentation at the SMPTE meeting, Miller delved into further detail to clarify points from the report, stating that we need either new SDI technology capable of greater data throughput or improvements in image compression. Higher framerates are necessary, he said, illustrating the point with a high-motion subject shot at 100 frames per second and the same subject shot at 50 frames per second.
The 100-frames-per-second image is crisp; text in the frame is even legible, thanks to the shorter shutter time the higher framerate allows. The 50-frames-per-second image shows the same motion blur we're accustomed to seeing in a movie clip. Miller's argument: if increasing the resolution doesn't also yield a crisper image, what's the point?
More frames mean more data, and with 8K cameras capturing up to 72 gigabits per second, data management quickly becomes a serious challenge. The clock is ticking for countries like Japan, which wants to broadcast the 2020 Olympics in 8K.
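To get a feel for where a figure in that range comes from, and what it means for storage, here is one plausible reconstruction (assumed parameters: uncompressed 8K at 60 fps, 12-bit, 4:4:4; these are my assumptions, not the article's):

```python
# Uncompressed 8K data rate and the storage it implies per minute.
width, height, fps, bit_depth = 7680, 4320, 60, 12
bits_per_second = width * height * fps * bit_depth * 3  # 4:4:4 = 3 samples/pixel

gbps = bits_per_second / 1e9               # ~71.7 Gbit/s
gb_per_minute = bits_per_second / 8 / 1e9 * 60  # convert to gigabytes per minute

print(f"{gbps:.1f} Gbit/s -> ~{gb_per_minute:.0f} GB of storage per minute")
```

Under these assumptions the rate lands at roughly 71.7 Gbit/s, consistent with the ~72 Gbit/s figure above, and works out to over 500 GB of storage for every minute of footage.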
As insurmountable as these issues seem, it’s prudent to consider them now to establish solid standards before hardware is developed and built. It will help ensure that technology is properly implemented and systems are integrated with a complete production pipeline that ends with a greatly enhanced viewing experience.