The Future of Ultra-HD: A Recent SMPTE Meeting Update

The following article was originally published on NoFilmSchool.

Recently, the Society of Motion Picture and Television Engineers (SMPTE) organized a meeting to review standardization efforts for Ultra High-Definition Television (UHDTV). The need for standards is especially pressing, since shipments of ultra high-def TV sets are expected to reach four million units by 2017.

Before attending the meeting, I reviewed the committee’s thorough report on their findings thus far. One of the more impressive facts is that the range of colors UHDTV can display encompasses nearly “twice the colors perceived by humans or that can be captured by a camera.”

Two standards are actually being developed. Simply called UHDTV1 and UHDTV2, the easiest way to distinguish them is by their frame dimensions: UHDTV1 would have a 4k resolution of 3,840 x 2,160 pixels, whereas UHDTV2 would have a whopping 8k resolution of 7,680 x 4,320 pixels. The standards would support 10- and 12-bit depth, with chroma subsampling options of 4:4:4, 4:2:2 and 4:2:0. Eight-bit color, interlacing, and fractional framerates would all be discarded. The likely base framerate would be 120 frames per second, due in part to the fact that 120 is evenly divisible by popular framerates such as 24, 30, and 60. At such a high framerate, visible flicker would be greatly reduced, since the refresh rate would sit comfortably above the eye’s “flicker fusion threshold,” the point at which intermittent light appears steady.
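The divisibility point is easy to verify. A trivial Python sketch (25 fps is included purely as a counterexample; the report names 24, 30, and 60):

```python
# Legacy framerates that divide 120 evenly map to a whole number of
# repeated frames per legacy frame; 25 fps is shown as a counterexample.
for legacy in (24, 30, 60, 25):
    if 120 % legacy == 0:
        print(f"{legacy} fps -> each frame repeats {120 // legacy}x at 120 fps")
    else:
        print(f"{legacy} fps -> no whole-number mapping into 120 fps")
```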

This all seems like a great step forward. However, at the meeting I attended, it was clear there are numerous issues that confront the emerging technology. I spoke with John “Pliny” Eremic, an active member of the SMPTE Standards Community who now works at HBO. As former post-production manager at Offhollywood and co-owner of the first two shipping RED cameras, he’s been poised at the cutting edge of the video frontier for some time. Pliny says:

UHD is about more than spatial resolution. The areas where [the Standards Community is] looking to push the image are dynamic range, peak luminance, wider color gamut, temporal resolution meaning framerate, and spatial resolution.

To Pliny, the most important of these is dynamic range, and I tend to agree. Increasing resolution alone does little to improve the image unless the other attributes improve along with it, a detail consumer TV and camera manufacturers often seem to forget. Pliny goes on:

If you want to display more colors, there are certain colors you can’t hit unless you have a higher peak brightness. If you have higher peak brightness overall, the flicker fusion threshold actually changes. So an image that looks constantly illuminated when you are at 100 nits [a unit of measure for luminance], if you crank it up high enough, suddenly that same image looks flickery. Now you have to increase your refresh rate just to maintain the status quo of appearing constantly illuminated. If you have wider dynamic range on the display you’re going to need more bits to cover it to not get banding in things like skies and gradients. So all these things need to move in unison.

Besides considerations of image quality, other issues pertain to the physical cabling that carries the signals. As of now, a single 6G-SDI cable cannot transport a 4k video signal running at 60 frames per second at 12 bit in 4:4:4 color space. Even two of them can’t. As a stopgap, more cables would need to be added to the pipeline, something that SMPTE board member Bill Miller considers unsustainable.
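The back-of-the-envelope arithmetic bears this out. A rough Python sketch of the uncompressed payload (ignoring SDI blanking and ancillary data, so the real requirement is higher still):

```python
# Uncompressed video payload in Gbit/s for a given picture format.
def payload_gbps(width, height, fps, bits_per_sample, samples_per_pixel):
    return width * height * fps * bits_per_sample * samples_per_pixel / 1e9

# 4k (UHDTV1) at 60 fps, 12-bit, 4:4:4 (three full samples per pixel):
print(payload_gbps(3840, 2160, 60, 12, 3))  # ~17.9 Gbit/s
# A single 6G-SDI link carries roughly 6 Gbit/s; even two, roughly 12, fall short.
```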

During his presentation at the SMPTE meeting, Miller delves further into some of the points in the report, stating that we need either new SDI technology capable of greater data throughput or an improvement in image compression technology. Higher framerates are necessary, he says, and he illustrates the point visually with a high-motion subject shot at 100 frames per second and the same subject shot at 50 frames per second.

The 100 frames per second image is crisp; there is even text in the frame that can be read, thanks to the higher shutter speed. The 50 frames per second image looks like the same motion-blurred image we’re accustomed to seeing in a movie clip. Miller’s point: if increasing the resolution doesn’t ultimately yield a crisper image, why bother?

More frames mean more data, and with 8k cameras shooting up to 72 gigabits per second, data management quickly becomes a serious challenge. The race is on for countries like Japan, which wants to broadcast the 2020 Olympics in 8k.
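Reusing the payload_gbps sketch from above, the 72-gigabit figure is consistent with 8k at 60 frames per second in 12-bit 4:4:4:

```python
# 8k (UHDTV2) at 60 fps, 12-bit, 4:4:4:
print(payload_gbps(7680, 4320, 60, 12, 3))  # ~71.7 Gbit/s
```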

As insurmountable as these issues seem, it’s prudent to consider them now to establish solid standards before hardware is developed and built. It will help ensure that technology is properly implemented and systems are integrated with a complete production pipeline that ends with a greatly enhanced viewing experience.

2013 International Drug Policy Reform Conference Highlights Increased Support For Deregulation

This article was originally published in the International Digital Times.

I’m packed inside a crowded elevator at the Sheraton Downtown Denver, Colorado’s largest hotel. As the car races down and settles onto the concourse level, someone in front of me remarks, “These drug guys are really serious.”

He’s referring to the attendees of the four-day Reform Conference, which gathers advocates from all corners of the globe to discuss sensible drug policy reform.

Speaking to thousands gathered in a cavernous conference hall, the opening speaker remarks, “Colorado was always the Mile-High City…even before marijuana reform.” The Centennial State, at the forefront of recent marijuana legalization, is a fitting place to hold the summit. Beginning in January of next year, it will be legal to purchase marijuana for recreational purposes in Colorado and Washington under Amendment 64 and Initiative 502, respectively.

Ethan Nadelmann, Executive Director of the Drug Policy Alliance (DPA), which organized the conference, takes the stage and delivers a rousing speech with such conviction that cheers burst from the audience. “We’ve hit the tipping point on marijuana. Fifty-eight percent of our citizens say it’s time to legalize it,” he says, citing a recent Gallup poll showing the plant has gained increasingly widespread acceptance since 2001, when only a third supported legalization. Our relationship to cannabis has changed even from five years ago, as evidenced by the recent boom in medical marijuana.

“There are three types of people here,” says Nadelmann, referring to the assembly gathered for the speech. “There are those who use drugs responsibly. Then there are the people who hate drugs, who’ve seen the horrors of drug addiction firsthand. And then there are those who don’t give a damn about drugs, but don’t want a war permeated by racism and swelling prison populations.”

A conference built around advocacy groups and public outreach needs this kind of fire if it’s going to effect real change. The reform movement isn’t about legalizing drugs so we can sit zonked out on the couch, unable to move. It’s about fewer incarcerations for minor drug offenses. It’s about giving partygoers information about illicit substances so those substances can be taken more responsibly. It’s about providing clean needles to addicts so they are less at risk of contracting or spreading HIV.

One might expect the conference to be composed of Burning Man burnouts and acid casualties left over from the nineties rave scene. Rather, the turnout has brought together professors from Ivy League institutions, law enforcement officials against drug prohibition, concerned mothers against the drug war, and libertarians and conservatives who maintain they should have the freedom to do what they want with their bodies. It’s an enormous cross-section of people, perhaps because the issue cuts across so many lifestyles, with basic human rights at the center of it all, rights that have been steadily eroded by the pointless slog of Nixon’s War on Drugs.

The conference addresses the complex issue of how we interact with illicit substances through intellectual discourse. Topics range from the politics of drug research and the challenges of dealing with the United Nations to harm reduction and why the legalization of psychedelics may be a very different journey than the one for marijuana. A vendor area outside the panel rooms hosts a plethora of advocacy groups, many offering lengthy reports free for the taking. Alongside staples of the scene such as the National Organization for the Reform of Marijuana Laws (NORML) and the American Civil Liberties Union (ACLU) are lesser-known groups like Law Enforcement Against Prohibition (LEAP), the Harborside Health Center cannabis dispensary, and Good Chemistry, so there is a lot of information to soak in. There are several film screenings, as well as Alcoholics Anonymous (AA) and Narcotics Anonymous (NA) sessions, provided in tandem with the conference.

I attended a similar conference in Oakland earlier this year organized by the Multidisciplinary Association for Psychedelic Studies (MAPS), an organization at the forefront of developing legitimate experiments on various psychoactive compounds, sanctioned by both the Drug Enforcement Administration (DEA) and the Food and Drug Administration (FDA). That conference was more research-oriented than Reform, centered on individual speakers explaining recent findings from their clinical trials. One such line of study concerns MDMA, or Ecstasy, which has drawn recent concern in the news as the party drug of the moment. MAPS researchers are finding the substance, which began as a therapeutic drug, has enormous value in a clinical setting for helping war veterans deal with post-traumatic stress disorder. LSD (“acid”) and psilocybin (“magic mushrooms”) have been administered to terminally ill cancer patients to help ease their end-of-life anxiety. Ayahuasca and ibogaine have been used to help pack-a-day smokers quit, and most recently, concerned parents petitioned for permission to use a marijuana tincture to treat their five-year-old son’s seizures.

This research suggests drugs may have legitimate uses given the appropriate context. Substances traditionally thought of as harmful could instead be seen as potential tools for understanding ourselves and for treating a range of maladies. Prohibiting alcohol during the 1920s and ’30s led to an increase in mafia activity and an underground criminal market; it is no historical secret that alcohol continued to be consumed in large quantities. Similarly, drug prohibition has not been shown to curb usage. The lack of information and safety, compounded by an uncontrolled black market fueled by existing consumer demand, leads to more indirect victims of the drug war. This was most recently seen in the tragic deaths at the Electric Zoo Festival in New York City, deaths that might have been prevented had the users known the composition of the substances they were taking. Decriminalization and regulation of scheduled substances point to a better way of interacting with narcotics. The Transform Drug Policy Foundation has helped launch this conversation by publishing an exhaustive report titled “After the War on Drugs: Blueprint for Regulation,” which discusses five possible regulatory scenarios of varying restrictiveness that could provide better alternatives to the systems currently in place. These are modeled after regulatory systems for more societally acceptable substances, such as alcohol, tobacco, and prescription medications.

Though marijuana distribution will soon become a reality for the denizens of Colorado, there is still a lot of work ahead in improving public policy and generating public awareness. During the conference, a victory march for Amendment 64 was organized through the 16th Street Mall, the main strip of retail stores and restaurants in downtown Denver. As conference attendees chanted “No more drug war,” I was reminded that while progress had been made, the battle was far from over. Still, the amendment is a testament to how social mores can change when the right message, coupled with accurate information, is put behind it. The elevator observer’s comment about “drug guys” being serious is true, because these are issues worth being serious about.

On the final day of the conference, Nadelmann again rallies his troops by showing a video shot by the Hungarian Civil Liberties Union covering the conference. I’m surprised by how quickly the film came together; it’s surreal to be watching an event I’m still attending.

The conference concludes with Ira Glasser, the Board President of the DPA and former Executive Director of the ACLU. As if echoing my sentiment from the march, he states, “We haven’t won yet. This isn’t because it’s only two out of fifty states, but because it’s one issue of many.” Speaking in a slow, thoughtful voice, he continues, “This isn’t the beginning of the end, but it just may be the end of the beginning.”

Colorist/Editor profile on PostPerspective

Originally published on PostPerspective.

Meet the Colorist/Editor: Tristan Kneschke

NAME: Tristan Kneschke

COMPANY: New York City-based Exit Editorial (www.exitedit.com)

CAN YOU DESCRIBE YOUR COMPANY?
My one-man company provides offline editorial services and specializes in color grading. I have a fully mobile DaVinci Resolve kit that I messenger around town in cases for clients without the resources to put together their own system.

I set up at their facility as a way of simplifying the workflow so that editorial changes can happen in tandem, or my client can jump into a meeting nearby while I continue to work.

WHAT’S YOUR JOB TITLE?
CEO, Founding Editor and Colorist

WHAT DOES THAT ENTAIL?
In addition to performing the work of an editor and colorist, I act as my own producer, which means there’s lots of schedule balancing every week. Since my setup is mobile, I coordinate pick-ups and drop-offs with messengers. If I need to set up for a job, I arrive at the facility to get everything arranged the day before, so that in the morning the client and I can focus on the work without any technical issues.

WHAT WOULD SURPRISE PEOPLE THE MOST ABOUT WHAT FALLS UNDER THAT TITLE?
I regularly meet people who don’t understand what a colorist does. I tell them it’s similar to Photoshop retouching, but for the moving image. If I have the opportunity to demo the system in front of them, I love watching them discover the magic of what’s possible.

A recent Nissan spot.

WHAT’S YOUR FAVORITE PART OF THE JOB?
The most exciting aspect is a job I’m really psyched about. I love working with great creatives who exhibit good taste and improve the project. Some days can be what I call a “battle for the reel,” but every once in a while all the stars align and the work becomes more than the sum of its parts.

WHAT’S YOUR LEAST FAVORITE?
I spend more time than I’d like hounding late payments.

WHAT IS YOUR FAVORITE TIME OF THE DAY?
I think mysterious things happen at night. While I love my job and wouldn’t trade it for anything, I also have a life outside work, and it’s great to reconnect with the people that enrich my life on a weekly basis.

IF YOU DIDN’T HAVE THIS JOB, WHAT WOULD YOU BE DOING INSTEAD?
I would likely be writing or be involved with music. I do both on the side now.

WHY DID YOU CHOOSE THIS PROFESSION? HOW EARLY ON DID YOU KNOW THIS WOULD BE YOUR PATH?
My mother is a photographer and my dad is an electrical engineer, so being a colorist combines the two traits I’ve inherited from my parents: artistic attention to detail coupled with a technical-minded way of studying things. I pursued the film industry because it balances these two characteristics perfectly.

I originally went to school for music, but then found I enjoyed directing films. When I got my first internship at a post house I realized I preferred that, and color grading developed out of a desire to improve the work I was cutting at the time.

Colgate with Kelly Ripa

CAN YOU NAME SOME RECENT PROJECTS YOU HAVE WORKED ON?
I’ve been working on jobs for Holiday Inn, Amazon, Target, Victoria’s Secret, Royal Caribbean and Colgate.

NAME THREE PIECES OF TECHNOLOGY YOU CAN’T LIVE WITHOUT.
My control surface (the JL Cooper Eclipse) for coloring is a must; it makes me much faster and helps prevent carpal tunnel! A Wacom tablet provides quicker responses than a mouse, and my Nord synthesizer makes the sickest sounds. Sometimes it’s just fun to see what kind of crazy stuff it can make.

WHAT SOCIAL MEDIA CHANNELS DO YOU FOLLOW?
In addition to postPerspective, I regularly follow news on Feedly and another industry blog, NoFilmSchool.

A Color Grading Case Study of the New Music Video from Hip-Hop Artist Killer Mike

This article was originally published on film industry resource No Film School.

Even with the growing prominence of cheaper color correction systems, the craft of color grading remains mysterious to many, including those who work in post-production. I’m often asked how I approach specific projects or achieve particular looks, so I thought it would be helpful to illustrate some of my methodology with a music video for the rap artist Killer Mike. Beyond nerding out on Resolve, I hope the reader will start to see that a lot happens outside the software.

Here is the music video. There is some NSFW language:

I had worked with director Ben Dickinson and cinematographer Adam Newport-Berra on several projects by the time we worked on this one, so I already had a sense of how they worked. A positive pre-existing relationship helps the work tremendously: you know where the other creatives may want to take things aesthetically, where their sensibilities lie, and which elements in the frame they are likely to be drawn to. Ben and Adam don’t rely on one signature style; they approach the grade from an angle that makes sense for the specific project, rather than always going for super-crushed blacks or popped colors.


The facilities at Pleasant Post also provided a great environment in which to grade, particularly because their color suite houses an OLED monitor. That was a huge help on this video: the monitor’s beautiful blacks let me dial in subtle details in the black chasm the talent inhabits. And because Pleasant edits and grades a large amount of Red work, they invested in a Red Rocket, so render times didn’t take an eternity.

I always ask for a rough cut before I step onto a project, not only to begin thinking about the appropriate palette but also to diagnose potential issues before they occur, from separating graphical plates, to juggling multiple camera formats, to spotting unrealistic client expectations (for example, bringing back overexposed skintones or changing an element to a radically different color). When I received the rough cut I was excited: it was a very different type of rap video, and care had been given to the mise-en-scène.

The look’s general direction consisted of desaturation with warmth, a nod to Renaissance paintings.

We scheduled four hours to grade on the day of the session, which I thought would be enough time since the edit was not very complicated and some of the shots were repeated setups, though I still needed to check those shots to make sure an exposure or color change hadn’t occurred across their duration. Ben and Adam needed to shoot a pickup (the opening shot with the hourglass), so we discussed the grade beforehand and I worked while they got the shot. They had culled a ton of references from Renaissance paintings and organized them into a PDF, which we used as a conversational starting point. They were drawn to paintings that popped one color element, usually someone’s clothing, as a way to guide the eye. Whenever possible I made a note to achieve this (one good example is the black Madonna with her child). Blacks were to be black, as opposed to a tinted black, and not overly crushed. The image was to have a golden look to mimic the era’s palette, with relative desaturation for a more natural feel. Skintones can actually be quite desaturated without feeling processed; it just depends on the creative’s sensibilities.

I actually prefer to have the edit project instead of an XML generated from it, as many editors don’t know how to prep timelines adequately for Resolve beyond getting all the clips down to one video track. There is much more to the way I prep the timeline to help me later in the session. In the timeline I can easily diagnose things I know will trip up Resolve, such as multiclip camera angles or odd effects left on the clips. I can get rid of edit points within the same clips that were added as part of the editor’s process. I can also see which shots are blown up, sped up, or flopped. I use the multiple video layers of XMLs as a way to flag clips to myself visually. For example, sometimes I put every blown-up shot on the second video layer, or on a multiple-format project I may put Alexa footage on V1, Red footage on V2, and 5D/7D footage on V3. This makes it easy to answer a client’s question about which format the current shot was acquired on, and to know which shots need resizing.

It’s also from the project file that I create my reference picture to make sure my XML came in correctly. Sometimes editors make H264s that lose timecode, forcing me back into the edit program to re-export anyway.

I manually loaded all the raw Red footage, which I had requested to have on the drive, and loaded the XML without selecting “automatically import source clips into media pool,” since the video was cut using ProRes QuickTimes, the most common Red-to-Final Cut workflow.

Resolve can link a rough cut with your XML after you bring it in, so the first thing I do after loading the footage is load the rough cut and compare all the shots. Even with cinema cameras such as the Reds and the Alexa, which embed timecode and metadata, some clips on this project came in one clip off from the correct shot. Having the Final Cut project made it easy to find the correct shot and force conform to the correct clip. This is an essential preparatory step: there is nothing worse than loading a cut with the wrong shot and having the client call you out on it. Neglecting this step is an immediate way to lose credibility.

Resolve automatically groups shots that come from the same dailies, so you only need to grade a shot once for the grade to apply to all of those setups. I went through and grouped other shots that were clearly the same setup but not part of the same dailies, to make the process even more automatic.

On to the actual grading. I began by crushing the blacks and building in a healthy bit of contrast, spending no more than thirty seconds per shot on an initial correction before moving on. I had no idea when Ben and Adam would get to the session, and it would be better to show them a work-in-progress of most of the cut than a finely tuned first shot they might change completely.

The black Madonna’s red tunic was hue-shifted to get exactly the right tone of red the creatives wanted. Vignettes were added to the actress and baby to bring out their faces.

Most of the images in the video ended up far below the upper legal limit of the scopes, probably around 80 IRE, resulting in images that were not too bright but by no means dim either. I desaturated the shots while pumping a golden yellow mostly into the mids to give the overall image an old-world feel.
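For readers less used to scopes, here is roughly where 80 IRE lands in 10-bit code values, assuming SMPTE legal-range levels (0 IRE at code 64, 100 IRE at code 940); a minimal Python sketch:

```python
# Map an IRE level to a 10-bit legal-range code value (SMPTE levels assumed).
def ire_to_10bit(ire):
    return round(64 + (ire / 100.0) * (940 - 64))

print(ire_to_10bit(80))   # 765 on the 0-1023 scale
print(ire_to_10bit(100))  # 940, the upper legal limit
```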

I quickly realized that the guest artist’s skintone was very different from Killer Mike’s. I inched them closer together and made a mental note that I’d have to do so throughout the whole video, particularly in shots where they both appeared.

Some shots featured extreme relighting, like this one of Killer Mike’s head on a plate. It is normal in a session for the client to keep refining the image into double-digit nodes, and Resolve is great at keeping up with the speed of the creatives. Vignettes were added to affect multiple portions of the frame, and in node #7 a vignette was used to darken the bottom of the shot to give the sense that Mike’s head was severed from his body.

I continued blasting through the setups, trying to accent at least one element in each scene, whether the subtleties in the clothing or the heavenly halos around the artists. When the creatives arrived they were happy with the progress so far, and made some aesthetic changes to intentionally leave certain shots less consistent with one another. We spent time putting the Malcolm X setups into their own world, and let the closeups of both artists edge into warmer, more saturated territory to accent their candlelit faces.

We let these setups go a little warmer to draw attention to the candlelight illuminating the talent’s faces.

Whenever possible we warmed up the smoke with vignettes, as I knew it wouldn’t separate cleanly with secondaries. We directed the viewer’s eye by accentuating the shafts of light in the baseball bat scene, and we reframed certain shots to add to the story, for example centering Mike in frame when his head was on a platter, or shrinking the very last shot, where he is meant to be killing himself.

I used vignettes to accentuate lighting that was already there but didn’t come through as pronounced as the director and DP intended. Here, a heavy vignette frames the shot to create the sense that a higher presence is watching Mike.

The project was rendered at 1080p and round-tripped back to Final Cut via XML where Ben applied further effects before shipping the video for an exclusive premiere on Pitchfork.

A personal favorite shot of mine occurs during a cut from a wider shot of Scar to a closer one. In the closeup, I qualified his eyes with a secondary and brightened them, giving the shot an ethereal, heightened feel, perhaps made more unsettling by its short duration.

A Colorist’s Perspective: Practical Comparisons of DaVinci Resolve and Apple Color

This article was originally published on film industry resource No Film School.

With the release of Apple Color several years ago, the once-niche field of high-end color grading trickled down to the average user. When Blackmagic released DaVinci Resolve on the Mac, it became even more obvious that color grading was the next big wave. Having been grading professionally with Color since shortly after its release, I quickly decided to invest in a traveling DaVinci Resolve Mac Pro tower. The client demand for color grading in general, and a traveling station in particular, has grown my business at a rate I never thought possible. Now, with Resolve 9 nearing its official, non-beta release, Blackmagic has separated itself even further from Apple’s discontinued product.

One of my biggest challenges outside of sessions is explaining the value of this system to new or potential clients. Most of them are still holding onto Color much as some editors are holding onto Final Cut 7, with an attitude of “if it ain’t broke, don’t fix it.” Though one should be able to achieve the same grading results on both platforms, I can attest that my work is better in Resolve, even if that can’t be easily measured. What can be assessed is the speed at which those results coalesce. As a working colorist frequently in a time crunch (let’s face it, every job), features that shave a few seconds off a given action add up in a big way, even in a session that runs just a few hours. I want to highlight some of the features that save me the most time in a typical session.

The tracker. When I demo Resolve, even seasoned graphics guys are stunned at just how well it can track. This completely changes the way I grade: I can key more aggressively, knowing the keys can be constrained with a tracked matte. Resolve picks tracking points automatically, which means I rarely need to redo a track. The tracker in Resolve 9 has been improved further still; you can select just the part of a track that went wrong and retrack that section, or modify it manually. I hardly used the tracker in Color, partly because it was a manual tracker and also because it was painfully slow. We’re talking one frame per second. I used it as a last-ditch effort, usually opting for keyframing instead in the interest of speed.

Resolve’s tracker automatically picks points for you after you stick a vignette on what you want to track.

Color uses a manual tracker which is often extremely slow.

The still store. With just a few clicks I can store and recall a still extremely quickly on my control surface, then wipe and reposition it to compare with the current shot I’m working on. This is a great way for clients to evaluate, say, a medium and a wide shot to check for matching skintones. Color handles this clunkily: the stills live in a completely different room, and the transition wipe is often painfully slow, making it nearly impossible to use in a serious client session.

It’s incredibly easy to save stills and call them up immediately, and pan, tilt or zoom the images as needed so you can focus on matching specific parts of the scene.

Color stores its stills in a separate room, away from the coloring, forcing the user to toggle back and forth to call them up. Panning the shot you’re on requires heading to a different room as well. The “transition” slider here is what controls the wipe. For some reason, it tends to lag when using a control surface.

Nodes. Resolve’s corrections work as a set of nodes which can be arranged in serial or parallel. You can also easily adjust the mix on the nodes when the client asks you to “split the difference.” You’d be surprised how often that one comes up.
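Since “split the difference” comes up so often, it is worth spelling out what it amounts to: a linear blend between a node’s input and its corrected output. A minimal Python sketch of the idea (the helper names are my own; in Resolve this lives in a per-node control, not code):

```python
import numpy as np

def apply_node(image, correction, mix=1.0):
    """Blend a node's correction into its input; mix=0.5 splits the difference."""
    return (1.0 - mix) * image + mix * correction(image)

# Example node: warm the image slightly by rebalancing the RGB channels.
warm = lambda img: np.clip(img * np.array([1.10, 1.00, 0.95]), 0.0, 1.0)

shot = np.random.rand(1080, 1920, 3).astype(np.float32)
half_strength = apply_node(shot, warm, mix=0.5)  # the "split the difference" ask
```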

One of Color’s big limitations is its cap of eight secondaries per shot. For some jobs, that would be more than enough; for a typical commercial job with a tweaker client, it’s simply not. Some shots need a lot of keys pulled and vignettes added, but I also add nodes based on the manner in which a client makes requests. Say they’re firing a bunch of commands at you. I sometimes execute each small change in its own node, then enable and disable those nodes to show each change. That way the client can evaluate the image in small increments, and if they don’t like a change, you simply delete the node. If they do like it, sometimes they’ll ask you to apply it to the sequence as a whole; since you’re making small changes throughout, it’s easy to grab just the last node and append it to the end of the node tree.

Compare that to doing a ton of things within a single node and then having to show the client by hitting undo and redo several times; it’s just less immediate for the client. The point is that you’re never conservatively worried about running out of nodes. Apple Color also has only one level of undo, whereas Resolve keeps multiple levels of undo per shot, not just for the overall timeline, so I can tweak a medium shot, adjust a closeup, and then go back and undo the changes I made to the medium shot.

Recalling some shots from a previous job: the center shot held my personal record for nodes on a single shot (21!), next to a “simpler” shot involving 15 nodes. I averaged 13 nodes per shot on that job.

Color can hold a maximum of eight secondaries per shot in addition to two overall primary corrections, usually not enough for a typical commercial job. You can also store only four different versions per shot.

The HSL key. Color, like all grading platforms, contains a hue-saturation-luminance qualifier. I actually really liked how it pulled keys, softening the edges nicely, whereas Resolve starts with a harder edge. With Color’s qualifiers, I would adjust the keys by control-clicking on each side of a parameter, changing only one side at a time and giving me control at both ends. Because Resolve works nodally, you can balance the shot in one node and pull the key downstream from the balanced image, which yields better keys. In Color, keys are always pulled from the unbalanced source image, so if you had a shot with a nasty DSLR orange color cast you wanted to get rid of, it was much harder to extract a good skintone key, even if you had performed a preliminary balance first.

A basic skintone key. If I didn’t want to alter the left side of the frame I could use a window to matte them out.
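For anyone wondering what a qualifier actually computes, here is a bare-bones Python sketch of an HSL key: convert each pixel to hue, saturation, and luma, then matte the pixels that fall inside all three ranges. (Illustrative only; the function names and skintone ranges are my own placeholders, and real qualifiers add per-edge softness and rolloff.)

```python
import numpy as np

def rgb_to_hsl_ish(img):
    # Quick hue/sat/luma approximations, adequate for a qualifier sketch.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(axis=-1), img.min(axis=-1)
    c = mx - mn
    hue = np.zeros_like(mx)
    m = (mx == r) & (c > 0)
    hue[m] = ((g - b)[m] / c[m]) % 6
    m = (mx == g) & (c > 0)
    hue[m] = (b - r)[m] / c[m] + 2
    m = (mx == b) & (c > 0)
    hue[m] = (r - g)[m] / c[m] + 4
    hue *= 60.0                                   # degrees, 0-360
    sat = np.where(mx > 0, c / np.maximum(mx, 1e-6), 0.0)
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Rec. 709 luma weights
    return hue, sat, luma

def hsl_qualify(img, hue_rng, sat_rng, luma_rng):
    hue, sat, luma = rgb_to_hsl_ish(img)
    matte = ((hue_rng[0] <= hue) & (hue <= hue_rng[1]) &
             (sat_rng[0] <= sat) & (sat <= sat_rng[1]) &
             (luma_rng[0] <= luma) & (luma <= luma_rng[1]))
    return matte.astype(np.float32)  # 1.0 where the pixel qualifies

img = np.random.rand(720, 1280, 3).astype(np.float32)
skin_matte = hsl_qualify(img, (10, 50), (0.15, 0.60), (0.20, 0.80))
```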

Outputting. Color must output to filenames that read like “1_g1.mov,” corresponding to the shot and grade number. This created problems in the past with graphic artists, who like to receive QuickTimes referencing the original filenames they’ve been working with. It is also nearly impossible to work with Flame or Smoke artists, who prefer DPX image sequences. Roundtripping back to Final Cut was frequently buggy too, with inaccurate frames and misinterpreted speed changes; forget about modifying your XML and getting it back to Final Cut without issues. Color also cannot work with footage from the Scarlet and Epic cameras. Resolve outputs to more formats than Color does, including Avid codecs, and can organize its output into folders. I have experienced fewer issues roundtripping back to Final Cut and Avid, even when dealing with speed-ramp effects.

Resolve allows you to render to a large variety of formats, including Avid’s DNxHD codecs, shown here.

In Color, not many render options are supported, and files are rendered as “1_g2,” where “1” is the numbered shot in the timeline and “2” is the grade number. This naming scheme is difficult to manage when working with graphic artists.

Those are just some of the features in the widening gap between what’s present in Resolve and absent from Color. Using Resolve is less about imposing what I’m comfortable and faster with, and more about having the right tools for the job. I’ve been in situations where the job was underestimated and Color was forced on me, only to have client demands go beyond what the program was capable of, demands that would have been simple to meet in more expensive DI rooms. Smaller shops want to compete with the big boys, but they need to realize they won’t stand a chance with a program that, let’s face it, the industry considers a failure.

I actually think Color would have been a really great program if Apple chose to develop it further. DaVinci definitely has had a head start as an industry standard, and many of the above features have taken time to develop. The Blackmagic team is insanely fast with updates, from quickly implementing a 3-way color corrector for those working without a control surface to lifting the $500 Avid tax to work natively with DNxHD footage. Apple is much more ambiguous as to where it stands with its pro market.

Helping clients understand the value of a Resolve system is a task I don’t mind shouldering as someone with specific knowledge of this niche of the industry. In fact, so many of them have been burned by Color’s inadequacies that I believe they are already predisposed to wanting something better. The number of rooms running Color is slowly dwindling, opening the market for much more robust color grading systems to forge ahead.

Don’t Judge a Book by its Trailer

This article was originally published in the International Business Times.

The book trailer is just beginning to find its voice. As the name suggests, the book trailer is a video that promotes a book, much as a movie trailer generates hype for its feature film. Two years ago, a clip was released for Thomas Pynchon’s eagerly awaited Inherent Vice. In the trailer, notably voiced by the reclusive author himself, shaky video footage of barren California beaches and a soundtrack reminiscent of Pink Floyd give the narration a nostalgic feel. At the end, Pynchon all but tells you to stop watching: maybe you’ll just want to read the book, before balking at its retail price.

Or take the recent trailer supporting Ben Marcus’ newest release, The Flame Alphabet. If you didn’t know the trailer was pushing a book, you might think the unsettling animation was for Contagion. The video is eerily effective and will no doubt help digital as well as physical sales of the book. The use of voiceover, regarded as something of a narrative crutch in feature films, is right at home here.

Marcus met Erin Cosgrove, the animator of his book trailer, through Creative Capital, an organization that funds artists. Though he thinks it sad that books now need a visual component to make them more appealing, he at least advocates for something more artistic. It’s become clear that advertorials with large text floating through the sky don’t work, he says. A trailer is more of an oblique sidecar in that it is not explicit praise.

The Flame Alphabet shows there is a chance to create something that can stand as its own artistic work.

Though the book trailer has been around for nearly a decade, the form has not gained much traction. But readers are experiencing novels in, well, novel ways. As e-book sales continue to rise and printed book sales correspondingly decline, we will likely see the form take off, and the trailer will be a key component in hooking younger readers.

Cary Murnion, who runs the creative agency Honest, approaches trailers from a cinematic standpoint. “We think trailers work the same way as in the movies: We don’t want to spell out the narrative, but give the reader a sense of the narrative and themes of the book.”

Murnion’s team is responsible for a host of trailers, including a series for Michael Crichton’s novel Next and Chuck Palahniuk’s Snuff, and recently a video game-inspired clip for Ready Player One.

“Sometimes we do a series of trailers, where each one is almost a puzzle piece to the book,” Murnion said. “We don’t give away too much; we just want the viewer to get the feeling of the book.”

Jeff Yamaguchi of Knopf Doubleday, which produced The Flame Alphabet trailer, worked with Murnion on the Next campaign in 2006 when YouTube was just starting to take off.

“The way the Web works, it’s very fast-moving. Video is a nice way to work into that fray. I think what will evolve is having a lot of video,” Yamaguchi said. “You can’t just do one. With Ben Marcus, we have that one awesome video [produced by Erin Cosgrove]. But you can’t ask her to make four more of those. So we also have an interview with Ben, and we filmed him giving some writing advice.”

Yamaguchi is talking about Knopf’s Writers on Writing series. Though these videos are not trailers, they’re more media the publisher can use to push the novel and build a greater connection with readers, a practice he calls “feeding the beast.”

Professional video editors could soon add publishing editors to their list of clients as they’re asked to cut trailers for books much as they are for feature films. Robert Ludlum in the style of Jerry Bruckheimer; a Stephen King book cut like Hostel. What would a Don DeLillo trailer be like? Jeffrey Eugenides? Murakami? Capturing these authors’ voices on video might seem like a fool’s errand; that doesn’t mean publishers won’t try, but it is wise to tread carefully.

“With books being turned into movies, a lot of them already have a cinematic feel, so they lend themselves to a trailer,” Murnion says. “But one of our jobs is not to force anything. We read the book first, always, and we advise the client on the best way to approach the trailer, or just how to launch the book. It has to fit with the content.”

The relationship between books and movies has existed as long as film has, and is never more evident than when there is a tie-in, as in last year’s Ryan Gosling vehicle Drive. A book cover tempts the casual browser in a store, but video, a medium built for the Internet, will entice the online shopper.

Authors and publishers have started to realize that video is a pretty powerful promotional tool. Authors aren’t selling out; they’re entering the playing field. Much as reading a book and watching its film adaptation are wholly different experiences, the literary trailer is a means of generating enthusiasm for the book without usurping it.

Murnion is optimistic: “I think trailers will just get better. The audience we attract, the more used to trailers they are, the more chances we can take to push the creative.”

Yamaguchi adds, “Something I’m hopeful for, people see it less as commercials, and more as a creative short film. That’s what people want to see, that’s the kind of thing that gets shared. And the medium of video online, there’s no shortage of it, but boy is there a huge opportunity to do amazing stuff.”

The ACES Color Space: The Gamut to End All Gamuts

This article was originally published in the International Business Times.

A development in the field of color science called ACES will let video professionals work with footage less destructively at the post-production end of the pipeline. This has immediate ramifications for shooters, colorists, and visual effects houses, and it will be accomplished by utilizing a much wider gamut than the current specification for high-definition video allows.

First, some context. What the heck is a gamut anyway?

A gamut is simply the range of colors that a device can display. Displays are based on a three-color primary system of red, green and blue. In the image that accompanies this article, you’ll see a chromaticity diagram: a visual blob that represents the entire range of colors the human eye can see.

The blob gets its shape from the numbers running along its curved edge, which correspond to each color’s wavelength. The triangle inside the blob represents the result of our three-color system: the colors inside the triangle can be displayed by a device built to that specification, in this case the HD video standard. (D65 is a fancy way of saying, here’s where white is.)

As one can plainly see, there are a lot of colors the display cannot show, greens for example, though a lot of those values are pretty similar once you get near the edges of the graph. Still, video professionals will want the ability to show the full range of color data as footage acquisition becomes more sophisticated each year, and the push toward affordable 4k displays will demand pristine picture quality.
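To make the triangle concrete, here is a minimal Python sketch that tests whether an xy chromaticity falls inside the HD (Rec. 709) triangle. The primaries and D65 white point are the published Rec. 709 values; the 520 nm coordinate is an approximate spectral-locus value:

```python
# Same-side point-in-triangle test over CIE xy chromaticity coordinates.
def sign(p, a, b):
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(p, primaries):
    r, g, b = primaries
    d1, d2, d3 = sign(p, r, g), sign(p, g, b), sign(p, b, r)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # inside when all signs agree

REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B primaries in xy
print(in_gamut((0.3127, 0.3290), REC709))  # D65 white point -> True
print(in_gamut((0.0743, 0.8338), REC709))  # ~520 nm laser green -> False
```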

I sat in on a lecture given by Michael Chenery, the senior color scientist at THX. Chenery’s lecture was informative enough to give you a nosebleed; there’s no doubt the guy knows his stuff, and you could easily listen to it several times and glean new information each time.

In the lecture he explained that one answer to limited gamuts is the ACES (Academy Color Encoding Specification) color space, which uses imaginary primary colors to encompass the entire range of colors shown in the diagram. Imagine a triangular lasso that ensnares every value on the blob. To do that, the colors defining the three-color triangle have to lie outside the colors humans can see; they are imaginary in that sense, but mathematically usable.
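Pointing the in_gamut sketch from above at the ACES primaries (the AP0 set) shows the lasso at work; note that two of the published xy coordinates are not realizable colors at all:

```python
# ACES AP0 primaries in CIE xy: the green sits at (0.0, 1.0) and the blue
# has a negative y, imaginary primaries chosen to enclose the whole locus.
ACES_AP0 = [(0.7347, 0.2653), (0.0000, 1.0000), (0.0001, -0.0770)]

# The saturated ~520 nm green that fell outside Rec. 709 fits inside AP0:
print(in_gamut((0.0743, 0.8338), ACES_AP0))  # True
```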

Imaginary colors may sound like science fiction, but this is only the tip of the iceberg for ACES. Monitors capable of wider gamuts are slowly being adopted in post-production; artists can switch their systems to work in the wider, higher-precision ACES space, then convert back to HD so the material can be shown as intended.

ACES will eventually replace the HD standard as displays move beyond native 1080p resolution; even this year’s Consumer Electronics Show saw growing demand for 4k monitors.