A Color Grading Case Study of the New Music Video from Hip-Hop Artist Killer Mike

This article was originally published on the film industry resource No Film School.

Even with the growing prominence of cheaper color correction systems, the craft of color grading is still mysterious to many, including those who work in post-production. I’m often asked how I approach specific projects or how I achieve particular looks, so I thought it would be helpful to illustrate some of my methodology with a music video for the rap artist Killer Mike. Beyond nerding out on Resolve, I hope the reader will start to see that a lot of the work happens outside the software.

Here is the music video. There is some NSFW language:

I had worked with director Ben Dickinson and cinematographer Adam Newport-Berra on several projects by the time we worked on this one, so I already had a sense of how they worked. A positive pre-existing relationship helps the work tremendously: you know where the other creatives may want to take things, where their sensibilities lie, and what elements in the frame they are likely to be drawn to. Ben and Adam don’t rely on one signature style; they approach each grade from an angle that makes sense for that specific project, so it isn’t always super-crushed blacks or popped colors.


The facilities at Pleasant Post also provided a great environment in which to grade, particularly because their color suite houses an OLED monitor. That was a huge help on this video: the monitor displayed beautiful blacks, letting me dial in subtle details in the black chasm the talent inhabits. Because of the large amount of Red work Pleasant edits and grades, they invested in a Red Rocket so render times didn’t take an eternity.

I always ask for a rough cut before I start a project, not only to begin thinking about the appropriate palette but also to diagnose potential issues before they occur: graphical plates that need separating, multiple camera formats, or a client with unrealistic expectations of what can be achieved (for example, bringing back overexposed skintones or changing an element to a radically different color). When I received the rough cut I was excited, as it was a very different type of rap video, and care had been given to the mise-en-scène.

The look’s general direction consisted of desaturation with warmth, a nod to Renaissance paintings.

We scheduled four hours of grading on the day of the session, which I thought would be enough time: the edit was not very complicated and some of the shots were repeated setups, though I still needed to check those shots to make sure an exposure or color change hadn’t occurred across their duration. Ben and Adam needed to shoot a pickup (the opening shot with the hourglass), so we discussed the grade beforehand and I worked while they got the shot. They had a ton of references culled from Renaissance paintings and organized into a PDF, which we used as a conversational starting point. They were drawn to paintings that pop one color element, usually someone’s clothing, as a way to guide the eye, and whenever possible I made a note to achieve this (one good example is the black Madonna with her child). Blacks were to be black, as opposed to a tinted black, and not overly crushed. The image was to have a golden look to mimic the era’s palette, with relative desaturation for a more natural feel. Skintones can actually be very desaturated without feeling processed; it just depends on the creative’s sensibilities.

I actually prefer to have the edit project rather than an XML generated from it, as many editors don’t know how to prep timelines adequately for Resolve beyond getting all the clips down to one video track. There is much more to the way I prep the timeline to help me later in the session. In the timeline I can easily diagnose things I know will trip up Resolve, such as multiclip camera angles or odd effects left on the clips. I can get rid of edit points added within the same clip as part of the editor’s process. I can also see which shots are blown up, sped up, or flopped. I use the multiple video layers of an XML as a way to flag clips to myself visually: sometimes I put every blown-up shot on the second video layer, or on a multi-format project I may put Alexa footage on V1, Red footage on V2, and 5D/7D footage on V3. This makes it easy to answer the client’s question of which format the current shot was acquired on, and to know which shots need resizing.
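
As an illustration of how those track flags pay off later, here is a minimal sketch in Python (not a tool I actually used on this job) that walks a Final Cut Pro 7 XML (xmeml) and reports which video track, and therefore which flagged format, each clip sits on. The track-to-format mapping is just the example convention above, and the filename is hypothetical.

```python
import xml.etree.ElementTree as ET

# Assumed convention from the prep described above: V1 = Alexa,
# V2 = Red, V3 = 5D/7D. Adjust per project.
FORMAT_BY_TRACK = {1: "Alexa", 2: "Red", 3: "5D/7D"}

def clips_by_track(xml_path):
    """List every clip in an FCP7 (xmeml) sequence by video track."""
    tree = ET.parse(xml_path)
    for sequence in tree.iter("sequence"):
        video = sequence.find("./media/video")
        if video is None:
            continue
        for track_num, track in enumerate(video.findall("track"), start=1):
            label = FORMAT_BY_TRACK.get(track_num, "unflagged")
            for clip in track.findall("clipitem"):
                name = clip.findtext("name", default="(unnamed)")
                print(f"V{track_num} ({label}): {name}")

clips_by_track("killer_mike_rough.xml")  # hypothetical filename
```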

It’s also from the project file that I create my reference picture, to make sure my XML came in correctly. Sometimes editors make H.264s that lose timecode, forcing me into the edit program to re-export anyway.

I manually loaded all the raw Red footage, which I had requested to have on the drive, then loaded the XML without selecting “Automatically import source clips into media pool,” since the video was cut using ProRes QuickTimes, the most common Red-to-Final Cut workflow.

Resolve has the ability to link a rough cut with your XML after you bring it in. After loading the footage, the first thing I do is load the rough cut and compare all the shots. Even with cinema cameras such as the Reds and the Alexa, which carry timecode and metadata, some clips on this project came in one clip off from the correct shot. Having the Final Cut project that was given to me made it easy to find the correct shot and force conform the right clip. This is an essential preparatory step: there is nothing worse than loading a cut with the wrong shot and having the client call you out on it. Neglecting this step is an immediate way to lose credibility.
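
The logic of that conform check, reduced to a toy sketch: confirm that each timeline event’s source range actually falls inside the clip it was linked to, and if not, report the clip that does contain it. Frame numbers stand in for timecode here, and the data structures and names are hypothetical.

```python
from collections import namedtuple

Clip = namedtuple("Clip", "name reel start end")           # media pool entry
Event = namedtuple("Event", "reel src_in src_out linked")  # timeline entry

def reconform(events, pool):
    for ev in events:
        matches = [c for c in pool
                   if c.reel == ev.reel
                   and c.start <= ev.src_in and ev.src_out <= c.end]
        if not matches:
            print(f"UNRESOLVED: no clip covers {ev.reel} {ev.src_in}-{ev.src_out}")
        elif matches[0].name != ev.linked:
            print(f"MISCONFORM: {ev.linked} should be {matches[0].name}")

pool = [Clip("A001_C003", "A001", 0, 500), Clip("A001_C004", "A001", 500, 900)]
events = [Event("A001", 520, 560, linked="A001_C003")]  # linked one clip off
reconform(events, pool)  # -> MISCONFORM: A001_C003 should be A001_C004
```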

Resolve automatically groups shots that share the same dailies, so you only need to grade a shot once for the grade to apply to all of those setups. I went through and grouped other shots that were clearly the same setup but not part of the same dailies, to make this process even more automatic.
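
In miniature, the grouping amounts to bucketing shots by a shared key: the source clip for the automatic groups, or a hand-assigned setup label for the manual ones. A sketch with made-up names:

```python
from collections import defaultdict

# (shot, key) pairs; the key is the source clip for automatic groups,
# or a hand-assigned setup label for manually grouped shots.
shots = [("sh01", "A001_C003"), ("sh04", "A001_C003"),
         ("sh02", "A002_C001"), ("sh07", "A002_C001")]

groups = defaultdict(list)
for shot, key in shots:
    groups[key].append(shot)

for key, members in groups.items():
    print(f"{key}: grade once, applied to {members}")
```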

On to the actual grading. I began by crushing the blacks in each shot and building in a healthy bit of contrast, spending no more than thirty seconds on each shot to accomplish an initial correction before moving on to the next. I had no idea when Ben and Adam would get to the session, but it seemed better to show them a work-in-progress of most of the cut than a finely tuned first shot they might change completely.
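
That thirty-second first pass boils down to something like a lift/gamma/gain adjustment. This is the generic textbook formulation, not Resolve’s internal math: crushing blacks means pulling lift down, and contrast comes from raising gain against the lowered lift.

```python
import numpy as np

def lgg(rgb, lift=0.0, gamma=1.0, gain=1.0):
    """Lift/gamma/gain on normalized RGB: (rgb * gain + lift) ** (1/gamma)."""
    out = np.clip(rgb * gain + lift, 0.0, 1.0)  # negative lift crushes blacks
    return out ** (1.0 / gamma)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in frame
first_pass = lgg(frame, lift=-0.05, gain=1.10)  # crushed blacks, more contrast
```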

The black Madonna’s red tunic was hue-shifted to get exactly the right tone of red the creatives wanted. Vignettes were added to the actress and baby to bring out their faces.

Most of the images in the video ended up far below the upper legal limit of the scopes, probably around 80 IRE, resulting in images that were not too bright but by no means dim either. I desaturated the shots while pumping in a golden yellow, mostly in the mids, to add an old-world feel to the overall image.
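
Conceptually, that move is luma-preserving desaturation plus a warm offset weighted toward the midtones. A rough sketch with illustrative numbers, not the grade’s actual values:

```python
import numpy as np

REC709_LUMA = np.array([0.2126, 0.7152, 0.0722])  # standard Rec.709 weights
GOLD = np.array([0.04, 0.02, -0.02])              # push toward yellow-orange

def desat_warm_mids(rgb, sat=0.7):
    luma = (rgb * REC709_LUMA).sum(axis=-1, keepdims=True)
    desat = luma + sat * (rgb - luma)    # scale saturation around luma
    mids = 4.0 * luma * (1.0 - luma)     # peaks at mid-gray, 0 at black/white
    return np.clip(desat + mids * GOLD, 0.0, 1.0)
```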

I quickly realized that the guest artist’s skintone was very different from Killer Mike’s. I inched them closer together and made a mental note that I’d have to do so throughout the whole video, particularly in shots where they both appeared.

Some shots featured extreme relighting, like this one of Killer Mike’s head on a plate. It is normal in a session for the client to keep refining the image into double-digit nodes, and Resolve is great at keeping up with the speed of the creatives. Vignettes were added to affect multiple portions of the frame, and in node #7 a vignette was used to darken the bottom of the shot to give the sense that Mike’s head was severed from his body.

I continued blasting through the setups, trying to accent at least one element in each scene, whether it was the subtleties in the clothing or the heavenly halos around the artists. When the creatives arrived they were happy with the progress thus far, and made some aesthetic changes to intentionally leave certain shots less consistent with each other. We spent time putting the Malcolm X setups into their own world, and let the closeups of both artists edge into a warmer, more saturated territory to accent their candlelit faces.

We let these setups go a little warmer to draw attention to the candlelight that illuminates the talent’s faces.

Whenever possible we warmed up the smoke by adding vignettes, as I knew it wouldn’t separate cleanly with secondaries. We directed the viewer’s eye by accentuating the shafts of light in the baseball bat scene, and reframed certain shots to add to the story: centering Mike in frame when his head was on a platter, for example, or shrinking the very last shot, where he is meant to be killing himself.

I used vignettes to accentuate lighting that was already there but did not come through as pronounced as the director and DP intended. Here, a heavy vignette frames the shot to create the sense that a higher presence is watching Mike.

The project was rendered at 1080p and round-tripped back to Final Cut via XML, where Ben applied further effects before shipping the video for an exclusive premiere on Pitchfork.

A personal favorite shot of mine occurs during a cut from a wider shot of Scar to a closer one. In the closeup, I qualified his eyes with a secondary and brightened them, giving an ethereal, heightened feel to the shot, perhaps further unsettling due to the shot’s short duration.

A Colorist’s Perspective: Practical Comparisons of DaVinci Resolve and Apple Color

This article was originally published on the film industry resource No Film School.

With the release of Apple Color several years ago, the once-niche field of high-end color grading trickled down to the average user. When Blackmagic released DaVinci Resolve on the Mac, it became even more obvious that color grading was the next big wave. Having graded professionally with Color since shortly after it was released, I quickly decided to invest in a traveling DaVinci Resolve Mac Pro tower. The client demand for color grading in general, and a traveling station in particular, has grown my business at a rate I never thought possible. Now, with Resolve 9 nearing its official, non-beta release, Blackmagic has separated itself even further from Apple’s discontinued product.

One of my biggest challenges outside of sessions is explaining the value of this system to new or potential clients. Most of them are still holding onto Color much as some editors are holding onto Final Cut 7, with an attitude of “if it ain’t broke, don’t fix it.” Though one should be able to achieve the same grading results on both platforms, I’d attest that my work is better in Resolve, even if that can’t be easily measured. What can be assessed is the speed at which those results coalesce. As a working colorist frequently in a time crunch (let’s face it, every job), features that shave a few seconds from a given action add up in a big way, even in a session that runs just a few hours. I want to highlight some of the biggest time-savers in most of my sessions.

The tracker. When I demo Resolve, even seasoned graphics guys are stunned at just how well Resolve can track. This completely changes the way I grade, as I can key more aggressively knowing that the keys can be constrained with a tracked matte. Resolve picks points automatically, which means I rarely need to redo a track, and the tracker in Resolve 9 has improved even more: you can select just the part of the track that went wrong and retrack that section, or modify it manually. I hardly used the tracker in Color, partly because it was a manual tracker and also because it was painfully slow. We’re talking one frame per second. I used it as a last-ditch effort, usually opting for keyframing instead in the interest of speed.
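
For the curious, the flavor of automatic point tracking can be reproduced with OpenCV: pick strong features inside the region under the window, then follow them with pyramidal Lucas-Kanade optical flow. This is emphatically not Resolve’s implementation, just a minimal sketch of the same idea; the filename and region coordinates are made up.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("shot.mov")  # hypothetical source clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

mask = np.zeros_like(prev_gray)
mask[400:700, 800:1200] = 255  # region under the vignette (made-up coords)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                              qualityLevel=0.01, minDistance=8, mask=mask)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    good_new, good_old = new_pts[status == 1], pts[status == 1]
    if len(good_new) == 0:
        break  # lost the track entirely
    dx, dy = (good_new - good_old).mean(axis=0)  # average window offset
    print(f"move window by ({dx:+.2f}, {dy:+.2f})")
    prev_gray, pts = gray, good_new.reshape(-1, 1, 2)
```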

Resolve’s tracker automatically picks points for you after you stick a vignette on what you want to track.

Color uses a manual tracker which is often extremely slow.

The still store. With just a few clicks I can store and recall a still extremely quickly from my control surface, then wipe and reposition it to compare against the current shot. This is a great way for clients to evaluate, say, a medium and a wide shot for matching skintones. Color handles this with extreme clunkiness: the stills live in a completely different room of the interface, and the transition wipe is frequently so slow that it is nearly impossible to use in a serious client session.
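
The wipe itself is trivial, which is exactly why its speed matters. In essence it is just this, where `position` slides the split between the stored reference and the working shot:

```python
import numpy as np

def wipe(current, still, position=0.5):
    """Reference still on the left of the split, working shot on the right."""
    split = int(current.shape[1] * position)
    out = current.copy()
    out[:, :split] = still[:, :split]
    return out
```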

It’s incredibly easy to save stills and call them up immediately, and pan, tilt or zoom the images as needed so you can focus on matching specific parts of the scene.

Color stores its stills in a separate room, away from the coloring, forcing the user to toggle back and forth to call them up. Panning the shot you’re on requires heading to a different room as well. The “transition” slider here is what controls the wipe. For some reason, it tends to lag when using a control surface.

Nodes. Resolve’s corrections work as a set of nodes that can be arranged in serial or parallel. You can also easily adjust the mix on a node when the client asks you to “split the difference.” You’d be surprised how often that one comes up.
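
Thinking of nodes as functions makes the arrangements concrete. In this sketch (my own analogy, not Blackmagic’s math), serial nodes compose, parallel nodes average their outputs, and “splitting the difference” is a blend between a node’s input and its output:

```python
import numpy as np

def serial(image, *nodes):
    for node in nodes:          # each node feeds the next
        image = node(image)
    return image

def parallel(image, *nodes):
    return np.mean([node(image) for node in nodes], axis=0)

def mixed(image, node, mix=0.5):
    # mix=0.5 is "split the difference" between input and correction
    return (1.0 - mix) * image + mix * node(image)

warm = lambda img: np.clip(img + [0.03, 0.0, -0.03], 0, 1)
crush = lambda img: np.clip(img * 1.1 - 0.05, 0, 1)
frame = np.random.rand(4, 4, 3)  # stand-in frame
graded = serial(frame, crush, lambda i: mixed(i, warm, mix=0.5))
```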

One of Color’s big limitations is that it caps you at eight secondaries. For some jobs that would be more than enough, but for a typical commercial job with a tweaker client, it’s simply not. Some shots need a lot of keys pulled and vignettes added, but I also add nodes based on the way a client makes requests. Say they’re firing a bunch of commands at you: I sometimes execute each small change in its own node, then enable and disable those nodes to show each change. That way the client can evaluate the image in small increments, and if they don’t like a change, you simply delete the node. If they do like it, sometimes they’ll ask you to apply it to the sequence as a whole, and since you’re making small changes throughout, it’s easy to grab just the last node and append it to the end of each node tree.

Compare that to doing a ton of things within a single node and then having to show the client by hitting undo and redo several times; it’s just less immediate for the client. The point is that you’re never conservatively worried about running out of nodes. Apple Color also has only one level of undo, whereas Resolve keeps multiple levels of undo per shot, not just for the overall timeline, so I can tweak a medium shot, adjust a closeup, and then go back and undo the changes I made to the medium shot.

Recalling some shots from a previous job, the center shows my personal record for the number of nodes on a single shot (21!), alongside a “simpler” shot involving 15 nodes. I averaged 13 nodes per shot on that job.

Color can hold a maximum of eight secondaries per shot, in addition to two overall primary corrections, usually not enough for a typical commercial job. You can also store only four different versions per shot.

The HSL key. Color, like all grading platforms, contains a hue-saturation-luminance qualifier. I actually really liked how it pulled keys, as it softened the edges nicely, whereas Resolve starts with a harder edge. With Color’s qualifiers I would always adjust the keys by control-clicking on each side of a parameter, changing only that side and giving me control at both ends. Because Resolve works nodally, a preliminary balance of the shot before pulling a key yields better keying results. In Color, keys are always pulled from the unbalanced source image, so if you had a shot with a nasty DSLR orange color cast you wanted to remove, it was much harder to extract a good skintone key, even if you had performed a preliminary balance first.
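
A bare-bones HSL-style qualifier, sketched with OpenCV: threshold in HSV (a close cousin of HSL) to get a hard matte, then blur to soften the edge, roughly the softening Color gives you up front. The ranges are a generic warm-skintone guess, not values from any real session, and the filename is hypothetical.

```python
import cv2
import numpy as np

def hsl_key(bgr_frame, lo=(5, 40, 60), hi=(25, 180, 255), soften=9):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    matte = cv2.inRange(hsv, np.array(lo), np.array(hi))  # hard-edged key
    return cv2.GaussianBlur(matte, (soften, soften), 0)   # soften the edge

frame = cv2.imread("still.png")  # hypothetical frame grab
matte = hsl_key(frame)
# Any correction can now be limited to the matte, e.g. lift the skintones:
lifted = np.clip(frame.astype(np.float32) * 1.15, 0, 255).astype(np.uint8)
out = np.where(matte[..., None] > 128, lifted, frame)
```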

A basic skintone key. If I didn’t want to alter the left side of the frame I could use a window to matte them out.

Outputting. Color must output to filenames that read like “1_g1.mov,” corresponding to the shot and grade number. This created problems in the past when working with graphic artists, who like to receive QuickTimes that reference the original filenames they’ve been working with. It is also nearly impossible to work with Flame or Smoke artists, who prefer DPX image sequences. Roundtripping back to Final Cut was also frequently buggy, with inaccurate frames and misinterpreted speed changes; forget about modifying your XML and getting it back to Final Cut without issues. Color also cannot work with the Scarlet and Epic cameras. Resolve outputs to more formats than Color, including Avid codecs, and can organize its output into folders. I have experienced fewer issues roundtripping back to Final Cut and Avid, even when dealing with speed-ramp effects.
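
One workaround I’d sketch for the naming problem (not a Color feature, just a post-render script): map each shot number back to the original clip name recorded at conform time, then rename the renders before handing them to the graphics team. The mapping and paths here are hypothetical.

```python
import os
import re

# Shot number -> original clip name, recorded at conform time (made up).
ORIGINAL_NAMES = {1: "A001_C003_0502RG", 2: "A001_C007_0502H2"}

for filename in os.listdir("renders"):
    m = re.match(r"(\d+)_g(\d+)\.mov$", filename)
    if not m:
        continue
    shot, grade = int(m.group(1)), m.group(2)
    new_name = f"{ORIGINAL_NAMES[shot]}_g{grade}.mov"
    os.rename(os.path.join("renders", filename),
              os.path.join("renders", new_name))
```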

Resolve allows you to render to a large variety of formats, including Avid’s DNxHD codecs, shown here.

Color supports few render options, and files are rendered as “1_g2,” where “1” is the shot’s number in the timeline and “2” is the grade number. This naming scheme is difficult to work with when handing off to graphic artists.

Those are just some of the features present in Resolve and absent from Color, and the gap is widening. Using Resolve is less about imposing what I’m comfortable and faster with, and more about having the right tools for the job. I’ve been in situations where the job was underestimated and Color was forced on me, only for client demands to go beyond what the program was capable of, demands that would have been simple to meet in more expensive DI rooms. Smaller shops want to compete with the big boys, but they need to realize they won’t stand a chance with a program that is, let’s face it, considered an abortion.

I actually think Color would have been a really great program had Apple chosen to develop it further. DaVinci definitely had a head start as an industry standard, and many of the features above took time to develop. The Blackmagic team is insanely fast with updates, from quickly implementing a software 3-way color corrector for those working without a control surface to lifting the $500 “Avid tax” to work natively with DNxHD footage. Apple is much more ambiguous about where it stands with its pro market.

Helping clients understand the value of a Resolve system is a task I don’t mind taking on as someone with specific knowledge of this niche of the industry. In fact, so many clients have been burned by Color’s inadequacies that I believe they are already predisposed to wanting something better. The number of rooms running Color is slowly shrinking, opening the market for much more robust color grading systems to forge ahead.

Don’t Judge a Book by its Trailer

This article was originally published in the International Business Times.


The book trailer is just beginning to find its voice. As the name suggests, a book trailer is a video that promotes a book, much as a movie trailer generates hype for its feature film. Two years ago, a clip was released for Thomas Pynchon’s eagerly awaited Inherent Vice. In the trailer, notably voiced by the reclusive author himself, shaky footage of barren California beaches and a soundtrack reminiscent of Pink Floyd give the narration a nostalgic feel. At the end, Pynchon all but tells you to stop watching, suggesting maybe you’ll just want to read the book, before balking at its retail price.

Or take the recent trailer supporting Ben Marcus’ newest release, The Flame Alphabet. If you didn’t know the trailer was pushing a book, you might think the unsettling animation was for Contagion. The video is eerily effective, and will no doubt help digital as well as physical sales of the book. The use of voiceover, regarded as something of a narrative crutch in feature films, is right at home here.

Marcus met Erin Cosgrove, the animator of his book trailer, through Creative Capital, an organization that funds artists. Though he thinks it sad that books now need a visual component to make them more appealing, he at least advocates for something more artistic. It’s become clear that advertorials with large text floating through the sky don’t work, he says; a trailer works more as an oblique sidecar, in that it is not explicit praise.

The Flame Alphabet trailer shows there is a chance to create something that stands as its own artistic work.

Though the book trailer has been around for nearly a decade, the form has not gained much traction. But readers are experiencing novels in, well, novel ways. As e-book sales continue to rise and printed book sales decline, we will likely see the form take off, and the trailer will be a key component in hooking younger readers.

Cary Murnion, who runs the creative agency Honest, approaches trailers from a cinematic standpoint. “We think trailers work the same way as in the movies: We don’t want to spell out the narrative, but give the reader a sense of the narrative and themes of the book.”

Murnion’s team is responsible for a host of trailers, including a series for Michael Crichton’s novel Next and Chuck Palahniuk’s Snuff, and, more recently, a video-game-inspired clip for Ready Player One.

“Sometimes we do a series of trailers, where each one is almost a puzzle piece to the book,” Murnion said. “We don’t give away too much; we just want the viewer to get the feeling of the book.”

Jeff Yamaguchi of Knopf Doubleday, which produced The Flame Alphabet trailer, worked with Murnion on the Next campaign in 2006 when YouTube was just starting to take off.

“The way the Web works, it’s very fast-moving. Video is a nice way to work into that fray. I think what will evolve is having a lot of video,” Yamaguchi said. “You can’t just do one. With Ben Marcus, we have that one awesome video [produced by Erin Cosgrove]. But you can’t ask her to make four more of those. So we also have an interview with Ben, and we filmed him giving some writing advice.”

Yamaguchi is talking about Knopf’s Writers on Writing series. Though these videos are not trailers, they are more media the publisher can use to push the novel and build a greater connection with readers, a practice he calls “feeding the beast.”

Professional video editors could soon add publishing editors to their list of clients as they’re asked to cut trailers for books much as they are for feature films. Robert Ludlum in the style of Jerry Bruckheimer; a Stephen King book cut like Hostel. What would a Don DeLillo trailer be like? Jeffrey Eugenides? Murakami? Capturing the voices of these authors in video might seem like a fool’s errand; that doesn’t mean publishers won’t try, but it is wise to tread carefully.

“With books being turned into movies, a lot of them already have a cinematic feel, so they lend themselves to a trailer,” Murnion says. “But one of our jobs is not to force anything. We read the book first always, and we advise the client on the best way to approach the trailer, or just how to launch the book. It has to fit with the content.”

The relationship between books and movies is as old as film itself, and it is never more evident than when there is a tie-in, as with last year’s Ryan Gosling vehicle Drive. A book cover tempts the casual browser in a store, but video, a medium built for the Internet, will entice the online shopper.

Authors and publishers have started to realize that video is a powerful promotional tool. Authors aren’t selling out; they’re entering the playing field. Much as reading a book and watching its film adaptation are wholly different experiences, the literary trailer is a means of generating enthusiasm for a book without usurping it.

Murnion is optimistic: “I think trailers will just get better. The more used to trailers our audience is, the more chances we can take to push the creative.”

Yamaguchi adds, “Something I’m hopeful for is that people see these less as commercials and more as creative short films. That’s what people want to see; that’s the kind of thing that gets shared. And with video online, there’s no shortage of it, but boy is there a huge opportunity to do amazing stuff.”

The ACES Color Space: The Gamut to End All Gamuts

This article was originally published in the International Business Times.


A development in the field of color science called ACES will let video professionals work with footage less destructively at the post-production end of the pipeline. It has immediate ramifications for shooters, colorists, and visual effects houses, and it is accomplished by utilizing a much wider gamut than the current specification for high-definition video.

First, some context. What the heck is a gamut anyway?

A gamut is simply the range of colors that a device can display. Displays are based on a three-color primary system of red, green and blue. In the image that accompanies this article you’ll see a chromaticity diagram, a visual blob that represents the entire range of colors that the human eye can see.

The blob gets its shape from the numbers running along its edge, which correspond to the wavelengths of pure spectral colors. The triangle inside the blob represents the reach of our three-color system: any color inside the triangle can be displayed by a device built to that specification, in this case the HD video standard. (D65 is a fancy way of saying, here’s where white is.)
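
For reference, the corners of that HD triangle are standardized as CIE xy chromaticity coordinates. These are the published Rec.709 values:

```latex
\[
\begin{aligned}
R &= (0.640,\ 0.330)\\
G &= (0.300,\ 0.600)\\
B &= (0.150,\ 0.060)\\
\text{D65 white} &= (0.3127,\ 0.3290)
\end{aligned}
\]
```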

As one can plainly see, there are many colors the display cannot reproduce, greens for example, though a lot of those values look quite similar once you get near the edges of the graph. Still, as acquisition becomes more sophisticated each year, video professionals will want the ability to show the full range of captured color, and the push toward affordable 4K displays will demand pristine picture quality.

I sat in on a lecture given by Michael Chenery, the senior color scientist at THX. Chenery’s lecture was informative enough to give you a nosebleed; there’s no doubt the guy knows his stuff, and you could easily listen to it several times and glean new information each time.

In the lecture he explained that one answer to limited gamuts is the ACES (Academy Color Encoding Specification) color space, which uses imaginary primary colors to enclose the entire range of colors shown in the diagram. Imagine a triangular lasso that ensnares every value on the blob: to do that, the primaries defining the triangle have to sit outside the colors humans can see. They are imaginary in that sense, yet mathematically usable.
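
Concretely, the published ACES (AP0) primaries sit at these xy coordinates. Red lands essentially on the spectral locus near 700 nm, while green and blue fall outside the visible region entirely, green above the locus and blue with a negative y value:

```latex
\[
\begin{aligned}
R &= (0.7347,\ 0.2653)\\
G &= (0.0000,\ 1.0000)\\
B &= (0.0001,\ -0.0770)\\
\text{white} &\approx (0.32168,\ 0.33767)
\end{aligned}
\]
```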

Imaginary colors may sound like science fiction, but this is only the tip of the iceberg for ACES. Wide-gamut monitors that approach this color space are slowly making their way into post-production, and artists can switch their systems to work in ACES’s wider-gamut, higher-precision format, then convert back to HD so the result can be shown as intended.

ACES will eventually replace the HD standard as displays move beyond native 1080p resolution; even this year’s Consumer Electronics Show saw growing demand for 4K monitors.