
OpenEXR 2.0

Great to hear that OpenEXR 2.0 was released yesterday.  From the press release:

  1. Deep Data support - Pixels can now store a variable-length list of samples. The main rationale behind deep images is to enable the storage of multiple values at different depths for each pixel. OpenEXR 2.0 supports both hard-surface and volumetric representations for Deep Compositing workflows.
  2. Multi-part Image Files - With OpenEXR 2.0, files can now contain a number of separate, but related, data parts in one file. Access to any part is independent of the others, pixels from parts that are not required in the current operation don't need to be accessed, resulting in quicker read times when accessing only a subset of channels. The multipart interface also incorporates support for Stereo images where views are stored in separate parts. This makes stereo OpenEXR 2.0 files significantly faster to work with than the previous multiview support in OpenEXR.
  3. Optimized pixel reading - decoding RGB(A) scanline images has been accelerated on SSE processors providing a significant speedup when reading both old and new format images, including multipart and multiview files.
  4. Namespacing - The library introduces versioned namespaces to avoid conflicts between packages compiled with different versions of the library.

I've been looking forward to this because of numbers 1 and 2 on that list.  

A big reason the studios I've worked at haven't adopted multi-channel EXRs is that all the channels in a file are stored interleaved.  If you want to read just the diffuse channel, the application has to decode twenty other channels before it can display it, so you take a pretty big performance hit.  With multi-part files, you only read the part you're actually calling on, which should speed things up a great deal.
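As a rough sketch of why that helps: with the OpenEXR 2.0 C++ API you can open a single part of a multi-part file and read only its channels, and the other parts are never decoded.  The file path and the "diffuse" part name below are made up for illustration; this is the shape of the code, not production code.

    #include <ImfMultiPartInputFile.h>
    #include <ImfInputPart.h>
    #include <ImfFrameBuffer.h>
    #include <ImfHeader.h>
    #include <ImathBox.h>
    #include <half.h>
    #include <vector>

    // Read the red channel of the "diffuse" part only; the remaining
    // parts of the file are never touched.
    std::vector<half> readDiffuseRed(const char *path)
    {
        Imf::MultiPartInputFile file(path);

        // Multi-part headers carry a name attribute; find our part.
        int partIndex = 0;
        for (int i = 0; i < file.parts(); ++i)
            if (file.header(i).hasName() && file.header(i).name() == "diffuse")
                partIndex = i;

        Imf::InputPart part(file, partIndex);
        Imath::Box2i dw = part.header().dataWindow();
        int width  = dw.max.x - dw.min.x + 1;
        int height = dw.max.y - dw.min.y + 1;

        std::vector<half> red((size_t)width * height);

        // Offset the base pointer so the data window's origin lands on red[0].
        Imf::FrameBuffer fb;
        fb.insert("R", Imf::Slice(Imf::HALF,
                                  (char *)(red.data() - dw.min.x - (long)dw.min.y * width),
                                  sizeof(half),
                                  sizeof(half) * width));
        part.setFrameBuffer(fb);
        part.readPixels(dw.min.y, dw.max.y);   // decodes only this part's scanlines
        return red;
    }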

It also means that Deep Compositing will soon be available to everyone, not just PRMan users.  I believe most of the renderers were just waiting for the EXR 2.0 standard to be published, so that they'd all have a consistent way of writing the data out.

I'm very interested in what 'Optimized pixel reading' will mean in real-world situations.  Anything that speeds up I/O is very welcome.

A bit confusingly, the press release also says:
The Foundry has built OpenEXR 2.0 support into its Nuke Compositing application as the base for the Deep Compositing workflows.
Does that mean that it's already included in Nuke?

New Tutorial: Lens Distortion in Nuke


I put out a new tutorial today, all about lens distortion in Nuke.  There are a lot of tutorials about how to use the LensDistortion node out on the internet already, but this one (I hope) is a bit different.  The main focus is not how to remove or add distortion, but a look at the overall workflow: when do you remove distortion, how do you add it back in, and how do you apply it to your renders?  Those are some of the questions I try to answer.

You can check it out here.
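For a taste of the math underneath: lens distortion is usually modelled as a radial remapping of pixel positions around the lens centre.  The one-parameter model below is a toy assumption for illustration, not what Nuke's LensDistortion node actually implements, but the workflow the tutorial covers is about when to apply a mapping like this and when to apply its inverse.

    // Toy one-parameter radial distortion model (illustrative only).
    // Coordinates are normalized so that (0, 0) is the lens centre.
    void distort(float xu, float yu, float k, float &xd, float &yd)
    {
        float r2 = xu * xu + yu * yu;   // squared distance from the centre
        float s  = 1.0f + k * r2;       // k > 0 bulges outward, k < 0 pinches in
        xd = xu * s;
        yd = yu * s;
    }

    // The usual round trip:
    //   1. undistort the plate (apply the inverse of distort),
    //   2. track, comp and render in undistorted space,
    //   3. re-apply distort() so the result matches the original plate.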

Colour Correction

When I started compositing, colour correction seemed like a black art.  I would push and pull different controls until I eventually got something that looked somewhat like what I wanted.  When I had spare time, I would open other artists’ scripts and marvel at how they colour corrected a shot.  Every artist’s technique was different - some would exclusively use curves, others would use levels and histograms, while others would type in numeric values.  Looking at other artists’ scripts allowed me to understand their technique, but I was still clueless about their thought process - about why they did what they did.

I fumbled around with colour correction over my first few years of compositing.  The first step towards understanding was reading Steve Wright’s and, later, Ron Brinkmann’s compositing books.  They explained basic things like matching black and white levels and checking the individual colour channels.  These details now seem obvious, but at the time, gaining this understanding was like lifting a heavy veil from my eyes.
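The arithmetic behind matching black and white levels is just a linear remap from one range to another.  A minimal sketch, with function and parameter names of my own choosing rather than from any particular book or node:

    // Remap values so that blackIn -> blackOut and whiteIn -> whiteOut.
    float matchLevels(float in,
                      float blackIn, float whiteIn,
                      float blackOut, float whiteOut)
    {
        float t = (in - blackIn) / (whiteIn - blackIn); // normalize the source range
        return blackOut + t * (whiteOut - blackOut);    // map into the target range
    }

For example, matchLevels(in, 0.02f, 0.95f, 0.0f, 1.0f) would pull a slightly milky element's blacks down to 0 and its whites up to 1.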

Understanding the different tools took time.  I had to learn that ‘gain’ really meant ‘multiply’, and ‘offset’ meant ‘add’.  I had to learn how these controls differed and how each affected an image.  It took me some time to grasp that when you multiply an image, the highlights are affected more than the shadows.  Colour correction was one of those things that just took practice.  It was here that the idea that ‘compositing is really visual math’ started to click for me.
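A few lines of code make the difference concrete: gain multiplies, so each pixel moves in proportion to its own value, while offset adds the same amount to every pixel.

    #include <cstdio>

    int main()
    {
        float shadow = 0.1f, highlight = 0.8f;

        // Gain of 2.0: the shadow moves by 0.1, the highlight by 0.8.
        std::printf("gain:   %.2f -> %.2f, %.2f -> %.2f\n",
                    shadow, shadow * 2.0f, highlight, highlight * 2.0f);

        // Offset of 0.1: both move by exactly 0.1.
        std::printf("offset: %.2f -> %.2f, %.2f -> %.2f\n",
                    shadow, shadow + 0.1f, highlight, highlight + 0.1f);
        return 0;
    }

That's why a multiply reads as a highlight adjustment, while an add lifts the whole image, shadows included.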

Even now, when I submit a shot for review, I still get notes asking for a colour correction.  There is definitely a certain level of subjectivity to colour correction: supervisors often have a different idea than I do about how a shot should look.  However, I’m now usually much closer to the target than I used to be, and it only takes one or two iterations before everyone is satisfied.


The books below contain great advice about how to approach colour correction.  Wright’s and Brinkmann’s books are software-independent, while Christiansen’s book is specific to After Effects.

Colour correction resources:

Digital Compositing for Film and Video (Steve Wright)
The Art and Science of Digital Compositing (Ron Brinkmann)
After Effects Studio Techniques (Mark Christiansen)

Visual Math

“Joe, compositing is like visual math.”
I think my colleague meant that to sound comforting. Instead, his words sent a chill down my spine because I really wasn’t very good at math. It was at this moment that I realized that I sucked as a compositor.  The year was 2003.  I was in Quebec, working on Spy Kids 3D at Hybride Technologies.

Let me back up a bit.  I had graduated from a media arts program at Sheridan College a few years earlier.  The program was mostly about broadcasting and film production, but I took an intro to visual effects course that consisted of basic After Effects compositing.  From there I managed to get an internship at a local post house in Toronto, and then at a small visual effects shop.  After working on some low-budget TV movies, work had dried up (at least for me) in Toronto, so I went off to Quebec, where I was hired as a compositor on Spy Kids 3D.

At the time, I thought I really knew my stuff.  I was at that point in my learning curve where I was so ignorant, I didn’t know what I didn’t know.

Hybride at the time was a major Discreet house.  They had a close relationship with Discreet, and had several Infernos and Flames.  For those of you unfamiliar with Inferno and Flame, they were (and still are) dedicated compositing systems that were extremely expensive.  Although desktop programs like Shake were becoming more and more prevalent in FX studios, the name "Inferno" still carried a lot of mystique and weight.

At the Toronto effects shop where I had worked, they had one Inferno with two Flames and two Flints.  At Hybride they had something like six Infernos and six more Flames.  It blew my mind that they had so much stuff.

They also had several compers who had worked at Discreet, so they knew those systems inside and out.  They were compositing ninjas.  They would talk about compositing at a much higher level than anything I knew.  Things like ‘you should divide your RGB by your alpha before you color correct’ sounded like an alien language.  I was a caveman rubbing two sticks together while they had flamethrowers.
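Since that sentence sounded so alien to me at the time, here it is in code form.  Premultiplied pixels store colour already multiplied by alpha, so colour correcting them directly also scales the semi-transparent edge pixels and causes fringing; you divide the alpha out, correct, then multiply it back in.  A minimal sketch, with struct and function names of my own:

    struct Rgba { float r, g, b, a; };

    // Divide colour by alpha so edge pixels hold their true colour.
    Rgba unpremult(Rgba p)
    {
        if (p.a > 0.0f) { p.r /= p.a; p.g /= p.a; p.b /= p.a; }
        return p;
    }

    // Multiply the alpha back in once the correction is done.
    Rgba premult(Rgba p)
    {
        p.r *= p.a; p.g *= p.a; p.b *= p.a;
        return p;
    }

    // For any per-pixel colour correction gradeFn:
    //   Rgba graded = premult(gradeFn(unpremult(pixel)));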

My supervisors quickly realized how little I knew, so I was put on prep duty for most of the show.  But from time to time, some of the senior compers would show me what they were doing.  That was when I was told ‘Digital compositing is like visual math’, which freaked me out even more than I already was.  They would show me things, and most of it flew over my head, but it greatly opened my mind.

I left Quebec when Spy Kids was over.  I had arrived thinking I knew a lot about compositing, but I really knew very little.  Before Quebec, when a technical issue popped up in one of my shots, I would flail around, trying every button and redoing things from scratch - anything I could do to solve the problem.

At Hybride, I learned that things happen inside a comp for a reason.  Compositing isn’t magic, it’s a system based on math (sometimes surprisingly basic math).  I saw compositors who understood why things happen - they weren’t just button pushers.  They were able to elegantly solve problems that would have had me hack together clumsy solutions.  More importantly, I saw them approach things logically, avoiding many of the problems that I often ran into.

Before I left for Quebec, I had ordered Steve Wright’s book, Digital Compositing for Film and Video.  I read it before the trip and only understood about a third of it.  When I came back, I re-read it.  I don’t want to overstate this, but the second reading was a personal turning point.  It was like a bubble had popped, or a gear had turned in my brain.  The exposure to what I saw at Hybride, combined with Wright’s book, gave me a new understanding of compositing.  Not only did I understand Wright’s book this time, but, thinking back, I finally understood what those senior compositors had been trying to explain to me.

This newfound understanding was a big deal.  It fundamentally changed how I approached shots, especially when it came to keying and color correction.  It also gave me a boost in self-esteem: the experience in Quebec had left me doubting myself for months, but I now felt that maybe I could really wrap my head around compositing and, who knows, maybe one day be as good as those Hybride ninjas.