Sat 21st Jan 2012, by Paul Hellard | Production
Weta Digital is on a roll this year, producing an impressive variety of films. One of those films, The Adventures of Tintin, leads the pack in this year’s VES nominations. Here CGSociety discusses some of the challenges of lighting such a gorgeous project.
Tintin was one of the more difficult characters to work with on the show. “The difficulty of working with Tintin is he’s such a simply constructed character,” said Miller. “He’s essentially an oval with two dots for eyes and there’s not a lot to work with. We spent years working through iteration after iteration trying to bring the character to life in a way that stayed true to Hergé’s representation while still making him visually interesting and easy to light. We went through hundreds of iterations on the Tintin character just exploring.” The reason is that it’s difficult to light such a simple shape with no surface variation. Since Tintin is a young boy, he doesn’t have wrinkles, scars or other details to draw the eye, unlike Captain Haddock or the Thompson twins, whose big noses and jowls gave the team interesting facial features to work with. The solution required enhancements to Tintin’s jaw, cheekbones and other facial features.
The sea plane’s flight through the storm presented some specific lighting challenges too. Weta developed a tool to generate lightning geometry, but the bolts still had to integrate into the clouds. “We studied a lot of reference,” said Miller. “There is some very interesting play between the lightning bolts, the illumination they throw, and the clouds. Getting that lighting to feel like it was embedded in the clouds was a bit challenging.” To accomplish this, Weta added new features to its in-house cloud software, a volumetric modeling and rendering system integrated into the pipeline through both Maya and RenderMan. “It’s intuitive, with interactive lighting preview and deep alpha compositing, so the TDs can get faster iterations.”
Fast approximate multiple scattering was another ability recently added to Weta’s bag of tricks, used to embed the lightning convincingly in the clouds. Software developer Antoine Bouthors added support for fast approximate multiple scattering (not true multiple scattering), achieved by running multiple single-scattering passes with different parameters. “This approach allows you to get most of the way towards true multiple scattering with very little computation,” explained Miller. “We also generated caches directly from our lightning bolt geometry, then used that to light the clouds. You get that hot core around the bolt and the wide blooming effect. All that came together really nicely.” Weta used the cloud software on a variety of things, not just clouds: it also handled fog, god rays and torch beams.
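The article doesn’t spell out Weta’s exact formulation, but the idea of summing several single-scattering evaluations with different parameters can be sketched roughly as follows. All function names, the octave count and the falloff constants here are illustrative assumptions, not Weta’s actual shader:

```python
import math

def single_scatter(optical_depth, phase, extinction_scale, contribution_scale):
    """One single-scattering lobe: transmittance attenuated by a scaled extinction.
    (Hypothetical helper; not from the article.)"""
    transmittance = math.exp(-extinction_scale * optical_depth)
    return contribution_scale * phase * transmittance

def approx_multiple_scatter(optical_depth, phase, octaves=4, a=0.5, b=0.5, c=0.5):
    """Approximate multiple scattering by summing several single-scattering
    passes with progressively reduced extinction (a**i), reduced contribution
    (b**i) and a phase function flattened toward isotropic (c**i).
    Each extra octave stands in for one more bounce of light in the cloud."""
    total = 0.0
    for i in range(octaves):
        flattened_phase = 1.0 + (phase - 1.0) * (c ** i)
        total += single_scatter(optical_depth, flattened_phase, a ** i, b ** i)
    return total
```

Because later octaves see a weaker extinction, light appears to penetrate deeper into the volume than a single scattering pass allows, which is what produces the soft “blooming” look around an embedded lightning bolt at a fraction of the cost of true multiple scattering.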
Weta also made improvements to its indirect lighting pipeline, integrating multi-bounce indirect diffuse and introducing a new indirect specular pipeline, which allowed approximate glossy reflections from the indirect lighting caches. Improvements to the Spherical Harmonics (SH) pipeline focused both on how the data was used and on how it was generated and stored, the latter thanks to the use of PantaRay for generating the SH. “We were able to run some of those harmonics passes an order of magnitude more quickly than we could on Avatar, which was important because we’re dealing with really large data sets.”
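The article doesn’t describe Weta’s SH representation in detail, but as background, a cached SH lighting solution is typically evaluated per shading point by a small dot product of coefficients against basis functions. A minimal sketch for the first two SH bands (the constants are the standard real SH normalization factors; the function name is an assumption):

```python
# Normalization constants for the first two real SH bands.
C0 = 0.282095  # Y_0^0
C1 = 0.488603  # Y_1^{-1}, Y_1^0, Y_1^1

def eval_sh_lighting(coeffs, normal):
    """Evaluate low-order SH lighting cached at a vertex in direction `normal`.
    `coeffs` holds 4 projected lighting coefficients (1 band-0 + 3 band-1)."""
    nx, ny, nz = normal
    return (coeffs[0] * C0
            + coeffs[1] * C1 * ny
            + coeffs[2] * C1 * nz
            + coeffs[3] * C1 * nx)
```

Production pipelines store many more bands per vertex than this, which is why generating and shuffling the coefficient caches for very large scenes is the expensive part that PantaRay accelerated.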
Coming off Avatar, Weta’s raytracer was already fast and able to handle enormous amounts of geometry comfortably. What was missing were routines to compute shadow information, plus some clever way to store all that information so it could conveniently be reused during rendering. PantaRay runs before the beauty pass (it’s a precomputation engine) and it needs to store a very large amount of data for all the shadows, which is then read back by the renderer during the beauty pass. Every light source stores its shadowing data on each vertex in the scene, creating caches that in many cases run to multiple gigabytes per frame, and Weta needed to constantly shuffle these files about as part of its workflow. The algorithm is different from traditional shadow maps, as explained by Fascione: “If you have a traditional shadow map for a solid object, and compare it to a ray traced shadow, you are essentially talking about one ray per lit pixel. When you move to shadow maps for stacks of very thin or transparent things, such as fur or fog, you could move up to maybe ten to a hundred rays per pixel, depending on how deep your fur coat is, or how thick your fog gets.” This is a technology called Deep Shadows, invented for Monsters, Inc. at Pixar and used since then on many movies at pretty much every studio.
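A deep shadow map, as Fascione describes it, stores per pixel not a single depth but a visibility function: how much light survives past each depth through the stack of fur or fog. A minimal sketch of the lookup side, assuming a piecewise-constant representation (the data layout and function name are illustrative, not Pixar’s or Weta’s actual format):

```python
import bisect

def deep_shadow_lookup(samples, depth):
    """Look up transmittance at `depth` in one deep-shadow pixel.
    `samples` is a depth-sorted list of (depth, transmittance) pairs
    describing how much light survives past each occluder layer."""
    depths = [d for d, _ in samples]
    i = bisect.bisect_right(depths, depth)
    if i == 0:
        return 1.0  # in front of all occluders: fully lit
    return samples[i - 1][1]  # transmittance after the last layer crossed
```

For example, a pixel covering three layers of fur might store `[(1.0, 0.8), (2.0, 0.5), (3.0, 0.1)]`: a shading point at depth 1.5 sits behind the first layer and receives 80% of the light.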
“There is also another, different technique called Soft Shadows, in which you compute a shadow map from each of the four corners of a standard area light, so that now you have four rays per lit pixel. You can naturally combine the two techniques to obtain a Soft Deep Shadow. Now compare this to what PantaRay has to do: fully raytrace all the shadows,” said Fascione. That means on the order of several hundred to a few thousand rays per scene vertex, so the amount of work to be carried out is ten to a hundred times more than what is commonly done, even with Soft Deep Shadows. Using PantaRay, Weta managed to do all this in about twice the time of a heavy deep shadow render, with even greater gains when running on GPGPU hardware in CUDA. “As we had the luxury of a very specialized tool for doing exactly one very specific thing, we could optimize it down to the metal and make sure we’re driving it as hot as we can. It’s still slightly slower from a turnaround perspective, compared to the old process, but it results in the essential improvement to the quality of light and shadows the supervisors were looking for to match the visual tone of the film.”
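The soft-shadow technique Fascione describes amounts to averaging depth tests against shadow maps rendered from the four corners of the area light. A minimal sketch, assuming each map is a simple pixel-to-depth table (the names and the depth bias are illustrative assumptions):

```python
def soft_shadow(point_depth, corner_maps, pixel, bias=1e-4):
    """Fractional visibility of a shading point under an area light,
    approximated by averaging binary shadow tests against maps rendered
    from the light's four corners. `corner_maps` maps pixel -> occluder depth."""
    visible = 0
    for shadow_map in corner_maps:
        occluder_depth = shadow_map[pixel]
        if point_depth <= occluder_depth + bias:  # nothing between point and this corner
            visible += 1
    return visible / len(corner_maps)
```

A point occluded from two of the four corners gets visibility 0.5, producing a penumbra; PantaRay instead traces hundreds to thousands of rays per vertex toward the light, which is why its shadows stay soft and correct where the four-corner approximation breaks down.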
One place where this is particularly noticeable is inside Tintin’s apartment, a fairly simple box-shaped room with windows at either end. “We put in some sheer net curtains and treated it as if you were on a sound stage shooting an interior of an office,” said Stables. “You end up with big soft diffused area light sources coming through the window and get soft natural light. That was where, had we tried to do things with shadow maps, it would have all started to fall apart fairly quickly. If you want those big soft shadows you just have to start raytracing them.”
As always, disc space was a major consideration. Many shots were big drama shots hundreds of frames long. If you store all that shadow data for every light per frame and put it all onto one disc allocation, even with Weta’s processing power and network configuration there is a real risk of saturating the network. “We started allocating data per frame,” said Stables. “So if you have a one-thousand-frame shot, the data for every frame was in a separate disc allocation, which meant it went off to different servers and balanced out the network flow. Because we were precomputing all this information and storing vast amounts of it, we really had to optimize our infrastructure to make that work.”
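The per-frame allocation Stables describes can be sketched as a deterministic mapping from frame number to file server, so each frame’s multi-gigabyte cache lands on a different allocation and the network load spreads out. Everything here, from the path layout to the server names, is a hypothetical illustration, not Weta’s actual infrastructure:

```python
import hashlib

def allocation_for_frame(shot, frame, servers):
    """Deterministically spread per-frame shadow caches across file servers.
    Hashing shot+frame keeps the mapping stable across runs while scattering
    consecutive frames of a long shot over different allocations."""
    key = f"{shot}:{frame:04d}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    server = servers[digest % len(servers)]
    return f"//{server}/caches/{shot}/frame_{frame:04d}"
```

With this scheme a thousand-frame shot reads its caches back from many servers in parallel during the beauty pass instead of hammering a single volume.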
PantaRay was running on both CPUs and GPUs; on Avatar the CPU code path was fast but comparatively simple. “We managed to speed it up enormously towards the end of Tintin,” said Fascione, “up to 20 to 50 times in the most favorable cases. With the new generation of NVIDIA cards we also vastly improved our GPGPU code path speed, maybe five to ten times faster, which of course led to substantial savings. In practice it meant we could take a five- to ten-fold increase in scene size without too much of a slowdown in shot turnaround times. As is most often the case, we tend to use the technology to improve the look of the movies rather than to turn the same thing around faster.”
“I think it’s really good that we are at a stage where it is practical for us to raytrace things,” said Stables. “Tintin was the most raytracing we had ever done on a film by far, and the fact we had technology like PantaRay that allowed us to do so is something we can build on in the future, whether it’s shadows or indirect lighting or anything else we want to raytrace and bounce rays around for. The ability to use PantaRay on Tintin to really push a lot of these ideas and this technology forward is fantastic for us.”
The comic book version of Snowy is basically an outline of a dog. “This is a dog-like cloud sort of thing,” said Revelant. “There is no shading inside the silhouette, and he’s very stylized. It was difficult to figure out where the fur started and where the dog finished, where the skin was. White fur is always tricky because you have a lot of scattering happening, a lot of problems with shaders that tend to be too deep and dark. It’s very difficult to get a nice sheen on white. The groom and the shader go together, even more than on dark hair or fur.”
Luckily, Barbershop lets artists work topology-independently. “We were able to put the fur on the character, see if the general proportions were correct, and if they weren’t, try to shorten the fur or push in the skin,” said Revelant. “We could do both and see which looked better.” Weta came up with interactive shading inside Maya that took advantage of GPU power. “Using the same algorithm we use for the dual scattering shader at render time, we were able to have lighting and shading in real time, and we were also able to control the representation of the hair width. Usually every strand of fur is represented in the viewport as one pixel wide, but with our system we could use anti-aliasing to accurately visualize the hair width, showing the different widths at the root and the tip, so what you have in the Maya viewport is very close to what you will have in the render.” Being able to see how the groom was working with the white fur inside the Maya viewport made it possible to judge how Snowy was looking without rendering every time.