VFX

New Tutorial - Nuke Basics: Shuffle and ShuffleCopy

I put out a new tutorial today on how to use the Shuffle and ShuffleCopy nodes in Nuke.  I often see beginning compositors have trouble with these nodes because of the (at first) confusing UI, but once you start using them, they're pretty easy to wrap your head around.  You can check it out here:
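As a quick taste of what the tutorial covers, here's a minimal sketch of the same ideas in Nuke's Python API. The 'depth' layer and the node names are just examples, and the knob names follow the classic Shuffle UI, so double-check them in your version:

```python
import nuke  # only available inside Nuke's Script Editor or a Nuke terminal session

# Shuffle: rearranges channels from a SINGLE input.
# Here we route a 'depth' layer (assuming the stream has one) into rgba
# so it can be viewed directly.
shuffle = nuke.nodes.Shuffle(name='View_depth')
shuffle['in'].setValue('depth')   # layer to read from
shuffle['out'].setValue('rgba')   # layer to write into

# ShuffleCopy: copies channels from a SECOND input over the first.
# A classic use: push a matte's red channel into the alpha of the main plate.
scopy = nuke.nodes.ShuffleCopy(name='Matte_to_alpha')
scopy['in'].setValue('rgba')      # layer read from the copy-source input
scopy['out'].setValue('rgba')     # layer written on the output
scopy['alpha'].setValue('red')    # source red becomes the output alpha
```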

OpenEXR 2.0

Great to hear that OpenEXR 2.0 was released yesterday.  From the press release:

  1. Deep Data support - Pixels can now store a variable-length list of samples. The main rationale behind deep images is to enable the storage of multiple values at different depths for each pixel. OpenEXR 2.0 supports both hard-surface and volumetric representations for Deep Compositing workflows.
  2. Multi-part Image Files - With OpenEXR 2.0, files can now contain a number of separate, but related, data parts in one file. Access to any part is independent of the others, pixels from parts that are not required in the current operation don't need to be accessed, resulting in quicker read times when accessing only a subset of channels. The multipart interface also incorporates support for Stereo images where views are stored in separate parts. This makes stereo OpenEXR 2.0 files significantly faster to work with than the previous multiview support in OpenEXR.
  3. Optimized pixel reading - decoding RGB(A) scanline images has been accelerated on SSE processors providing a significant speedup when reading both old and new format images, including multipart and multiview files.
  4. Namespacing - The library introduces versioned namespaces to avoid conflicts between packages compiled with different versions of the library.

I've been looking forward to this because of numbers 1 and 2 on that list.  

A big reason the studios I've worked at haven't adopted multi-channel EXRs is that all the channels are interleaved with each other.  If you want to read just the diffuse layer, the reader still has to decode twenty other channels before it can display it, so you take a pretty big performance hit.  With multi-part files, you only read the part you're actually calling on, which should speed things up a great deal.
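To make that concrete, here's a rough sketch using the OpenEXR Python bindings (the file path and layer names are hypothetical). In a single-part file, every layer shares one channel list, so even a "just give me diffuse" read pays for decoding the scanline blocks that hold all the other channels too:

```python
import OpenEXR
import Imath

exr = OpenEXR.InputFile('/shots/sh010/beauty.exr')  # hypothetical path
header = exr.header()

# In a single-part, multi-channel file ALL layers share one channel list:
# 'R', 'G', 'B', 'A', 'diffuse.R', 'diffuse.G', 'specular.R', and so on.
print(sorted(header['channels'].keys()))

# Pulling out just the diffuse layer still decodes the scanline blocks
# that contain every other channel along the way.
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
diffuse = [exr.channel(name, pixel_type)
           for name in ('diffuse.R', 'diffuse.G', 'diffuse.B')]

# In an EXR 2.0 multi-part file, each layer can live in its own part with
# its own channel list, so a reader seeks straight to the diffuse part and
# skips decoding everything else entirely.
```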

The release also means that Deep Compositing will soon be available to everyone, not just PRMan users.  I believe most of the renderers were just waiting for the EXR 2.0 standard to be published so they'd all have a consistent way of writing the data out.
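On the compositing side, this is the workflow the Deep nodes in NukeX already expose. A minimal sketch of that graph in Python (the file paths are hypothetical, and it assumes a NukeX build since the Deep nodes require one):

```python
import nuke  # run inside NukeX; the Deep nodes need a NukeX license

# Read two deep renders (hypothetical paths) and merge them by depth sample,
# instead of relying on holdout mattes rendered into each element.
fg = nuke.nodes.DeepRead(file='/shots/sh010/char_deep.exr')
bg = nuke.nodes.DeepRead(file='/shots/sh010/env_deep.exr')
merged = nuke.nodes.DeepMerge(inputs=[fg, bg])

# Flatten the deep samples back into a regular 2D image for normal comp work.
flat = nuke.nodes.DeepToImage(inputs=[merged])
```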

I'm very interested in what the 'Optimized pixel reading' will mean in real-world situations.  Anything that speeds up I/O is very welcome.

A bit confusingly, the press release also says:
The Foundry has built OpenEXR 2.0 support into its Nuke Compositing application as the base for the Deep Compositing workflows.
Does that mean that it's already included in Nuke?

Nuke news from the FXGuide live stream

FXGuide is running an awesome live stream from the floor of NAB.  The first thing they showed was a demonstration of a Nuke-Hiero workflow.  Some of my notes:

  • No timelines on any of this; it's a tech demo.
  • Jon Wadelton started in Hiero.
  • Embedded Nuke in Hiero.  This was unbelievable: basically you had Hiero's timeline in Nuke (or vice versa).
  • Drag and drop footage from Hiero bins to Nuke DAG
  • No rendering, just a live link from one to the other.  Make a change in Nuke and see it right away in the Hiero timeline.
  • It looks like one program, the Nuke DAG is just a tab in Hiero, tight integration.
  • You can now type text in the viewer.
  • There are new color pickers.
  • There are now per-character text controls.
  • Switch versions of Nuke comp in Hiero.  So you could be looking at v001 and then switch to the v002 version of a Nuke comp, all in the Hiero timeline.
  • Nuke comps are just external comps, not bundled in a weird Hiero file.
  • fTrack asset management built in.
  • Make notes in Hiero and see them in Nuke; it's all web based, so it would be ideal for remote workers.
  • New color matcher tool
  • UV editor in ModelBuilder.
  • Color tools improved in general; UI polishing.
  • Working on vectorscopes.
  • Houdini and V-Ray are working on being able to render deep images.

You can check the live feed at FXGuide.

NAB Nuke news slowly leaking out

If you follow the right people on Twitter, you may have found out some interesting Nuke news yesterday.  Twitter user NEO_AMIGA (Henrik Cednert) had some great info:

Brad Peebler posted a video of the Nuke/modo/Hiero integration here:

Very interesting times, especially considering the Adobe/C4D announcement last week. This sort of deep integration is exactly what I was hoping for; I'm crossing my fingers that the modo renderer will be included in Nuke...  FXGuide is doing a ton of live broadcasting from NAB today, so here's hoping that more details leak out (with release dates).