Deep Compositing

OpenEXR 2.0

Great to hear that OpenEXR 2.0 was released yesterday.  From the press release:

  1. Deep Data support - Pixels can now store a variable-length list of samples. The main rationale behind deep images is to enable the storage of multiple values at different depths for each pixel. OpenEXR 2.0 supports both hard-surface and volumetric representations for Deep Compositing workflows.
  2. Multi-part Image Files - With OpenEXR 2.0, files can now contain a number of separate, but related, data parts in one file. Access to any part is independent of the others; pixels from parts that are not required in the current operation don't need to be accessed, resulting in quicker read times when accessing only a subset of channels. The multipart interface also incorporates support for Stereo images where views are stored in separate parts. This makes stereo OpenEXR 2.0 files significantly faster to work with than the previous multiview support in OpenEXR.
  3. Optimized pixel reading - decoding RGB(A) scanline images has been accelerated on SSE processors providing a significant speedup when reading both old and new format images, including multipart and multiview files.
  4. Namespacing - The library introduces versioned namespaces to avoid conflicts between packages compiled with different versions of the library.

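Item 1 is easiest to picture with a toy example.  A deep pixel is just a variable-length list of samples, each carrying a depth, a color, and an alpha; flattening it back into an ordinary pixel means sorting the samples near-to-far and compositing them with the "over" operator.  Here's a minimal sketch in plain Python (no OpenEXR dependency; the sample values and the `flatten` helper are made up for illustration):

```python
def flatten(samples):
    """Flatten a deep pixel to one (r, g, b, a) value by sorting its
    samples near-to-far and compositing front-to-back with 'over'.
    Each sample is (depth, (r, g, b), alpha), colors premultiplied."""
    r = g = b = a = 0.0
    for depth, (sr, sg, sb), sa in sorted(samples, key=lambda s: s[0]):
        # 'over': what has accumulated so far occludes the new sample
        # in proportion to the accumulated alpha.
        r += (1.0 - a) * sr
        g += (1.0 - a) * sg
        b += (1.0 - a) * sb
        a += (1.0 - a) * sa
    return (r, g, b, a)

# One pixel with two hard-surface samples at different depths:
deep_pixel = [
    (12.0, (0.0, 0.2, 0.0), 0.5),  # far: half-opaque green
    (3.0,  (0.4, 0.0, 0.0), 0.4),  # near: 40%-opaque red
]

print(flatten(deep_pixel))  # roughly (0.4, 0.12, 0.0, 0.7)
```

The point of storing the samples rather than the flattened result is that another element (say, a smoke pass) can be merged in at the correct depth *after* rendering, instead of being stuck in front of or behind the whole pixel.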
I've been looking forward to this because of numbers 1 and 2 on that list.  

A big reason the studios I've worked at haven't adopted multi-channel EXRs is that all the channels are interleaved with each other.  If you want to read just the diffuse channel, the reader has to decode twenty other channels before it can display it, so you take a pretty big performance hit.  With multi-part files, you only read the part you're calling on, which should speed things up a great deal.

It also means that Deep Compositing will soon be available to everyone, not just PRMan users.  I believe most of the renderers were just waiting for the OpenEXR 2.0 standard to be published, so they would all have a consistent way of writing the data out.

I'm very interested in what the 'Optimized pixel reading' will mean in real-world situations.  Anything that speeds up I/O is very welcome.

A bit confusingly, the press release also says:
  The Foundry has built OpenEXR 2.0 support into its Nuke Compositing application as the base for the Deep Compositing workflows.
Does that mean that it's already included in Nuke?