CGSociety :: Production Focus

    25 June 2012, by Paul Hellard


    Everybody is familiar, I think, with the fable of Snow White and the Seven Dwarfs: the Wicked Witch, the apple, the prince. When the fairy tale is reworked to bring in morphing mirrors, trolls and the creepiest black forest I've ever seen, a huge amount of VFX talent is called upon. CGSociety talked to VFX Supervisor Nicolas 'Nico' Hernandez of The Mill, called to arms for the work on Universal’s Snow White and the Huntsman.

    The Mill's forty shots spanned two major sequences. These were pretty much full-screen, ‘nowhere to hide’ shots as well. The birth of the Mirror Man is where Charlize Theron’s character, Queen Ravenna, talks to the Mirror and the fluid effects begin. The Mirror Man is a full-CG character, but the sequences are dialogues with Queen Ravenna; because they are not action sequences and there are multiple close-ups, there is nowhere for the CG to hide. The liquid oozes down the stairs, lifts itself up and forms into the golden statue that begins to talk to her. The crew started with a live-action physical element shoot, upending buckets of paint on dummies at 1,000 fps to see what happens in the real world. They devised custom software and solutions to control cloth and forces, which involved multiple tests with fluid and cloth. As the effect evolved, it became clear that cloth simulation was the look that would work best.

    “We were approached at the beginning by Overall VFX Supervisor Cedric Nicolas-Troyan and Producer Linda Thompson from Universal Pictures, wanting us to work on the Mirror Man sequence,” said Nicolas Hernandez. “They wanted something elegant that would work as a main character and stand up to filling the screen.” The Mill team started with a lot of technical tests to fabricate the liquid look of the mirror. The crew organized a studio shoot to film a range of liquids and fluids in slow motion, to emulate the glass of the mirror as it oozes out of the frame.

    A lot of practical effects were used just for the R&D, in an effort to have the film greenlit. “There was a lot of thinking outside of the square, because we really were looking for an effect that looked almost impossible,” says Hernandez. Director Rupert Sanders and Overall VFX Supervisor Nicolas-Troyan even filmed a small tin of black paint in slow motion to get the same effect.


    “It had to stand up doing almost nothing and fill the screen, so we knew a high level of polish was going to be needed, as well as subtle, experimental and detailed animation for his performance. His animation drove the simulations on top of him, so there was a lot of trial and error. The Mirror Man is also as much about Theron’s performance being reflected in him as anything else, and working on the look and integrating her reflection was a key part of the work. We started with Maya and Houdini, as well as Softimage, as a small team of five people at The Mill. Most packages have certain kinds of tools in the box, ready, and we experimented a lot,” says Hernandez. “In Maya, we focused on nParticles and nCloth with a combination of thread and cloth simulation, and in Houdini we did a lot of particles, meshing and soft-body tests. In Softimage we used ICE to combine Syflex and particles,” he says. “We were all over testing tools in packages, and then every shot was composited in NUKE.”
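
    For readers curious what one of those nCloth tests might look like, here is a minimal sketch using Maya's Python interface. The mesh resolution, attribute values and node names are illustrative guesses, not The Mill's actual setup.

        # Minimal nCloth test rig; run inside Maya's script editor.
        from maya import cmds, mel

        # A finely subdivided plane stands in for the sheet of liquid mirror.
        plane = cmds.polyPlane(name='mirrorSheet', width=10, height=10,
                               subdivisionsX=60, subdivisionsY=60)[0]

        # Convert it to nCloth (the same MEL call the FX menu uses).
        cmds.select(plane)
        cloth_shape = mel.eval('createNCloth 0;')[0]

        # Bias the material toward a heavy, metallic drape rather than fabric.
        cmds.setAttr(cloth_shape + '.stretchResistance', 60)
        cmds.setAttr(cloth_shape + '.bendResistance', 5)
        cmds.setAttr(cloth_shape + '.pointMass', 10)

        # Stronger gravity on the solver exaggerates the molten 'pour'
        # ('nucleus1' assumes a fresh scene).
        cmds.setAttr('nucleus1.gravity', 20)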

    The Snow White project was a ten-month job for The Mill, and another five months was spent on R&D for the Mirror Man alone. Many props for the shoot had to be specially designed and constructed. One particular construction was required to capture the full reflection of the room's surrounds. “This prop needed to be the same height as the mirror, with a RED camera hidden inside it, so we had the scene captured from the Mirror’s point of view: a face-on view of Charlize Theron as she delivered her lines. It had to be very clear, with no deformation,” explains Hernandez. “We ended up building the entire set in 3D for the room’s environment. Instead of matting out crew in a reflection, or just using what we had, we created it so it could be art directed and made exactly how we wanted it to be.”

    “We then used that footage, fully reversed, colored and finalized, for what the Wicked Witch looked like from the mirror’s position. This also allowed us to then deform the image and light the golden mirror as it appears in the room as a shiny golden statue,” adds Nico.



    There was also wide-ranging technical previz for the mirror itself, to decide on the amount of folding and the speed of the melt. "Thankfully," says Hernandez, "there was a call for a static camera, which helped for tracking this orb of molten mirror, but we wanted to be able to put some emotion into him, even though he was fully CG: very subtle movements, speaking to an on-set actor, close up." There were 20 shots over two sequences, one being the birth of the Mirror Man. The team consisted, at any one time, of five people: basically one animator, with Nico supervising and doing some of the TD work. "It was a long time for these shots, but a really tight, small team," Hernandez confirms. "At The Mill, we want to have small teams because it gives artists ownership of the effects at a personal level. We didn't want to be too big, but we wanted to dig right in. It really was a dream project for us."



    OptiTrack

    Nvizage was called upon with the OptiTrack Insight VCS, and the crew found themselves working on over 320 previz shots for Snow White and the Huntsman.

    “Working at Pinewood with VFX Supervisor Cedric Nicolas-Troyan, the Virtual Camera System was required for the many wide, mid and close-up action and fight sequences. Whatever film we work on, our objective is the same,” said Nvizage co-founder Martin Chamney. “We want the audiences to feel like they are in the middle of the action. The Insight VCS helps us find a realistic perspective in real-time, so our scenes can quickly jump from interesting to immersive.”

    The OptiTrack system is an optical motion-capture system whose cameras track rigid or soft bodies, so it can capture camera movement and position at the same time as it captures the motion of rigid bodies on set.
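
    In practice, each tracked body streams off the system as a position plus a rotation quaternion, which the previz software turns into a virtual-camera transform every frame. A rough sketch of that conversion (the sample layout here is illustrative, not OptiTrack's actual streaming API):

        import numpy as np

        def pose_to_camera_matrix(position, quaternion):
            """Convert one rigid-body sample -- position (x, y, z) and unit
            quaternion (w, x, y, z) -- into a 4x4 camera transform."""
            w, x, y, z = quaternion
            rotation = np.array([
                [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
            ])
            matrix = np.eye(4)
            matrix[:3, :3] = rotation
            matrix[:3, 3] = position
            return matrix

        # One streamed sample: camera a metre off the floor, no rotation.
        print(pose_to_camera_matrix((0.0, 1.0, 0.0), (1.0, 0.0, 0.0, 0.0)))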

    The system was also used to create exterior-environment previz. A castle, forest, beach and bridge set-up was created using Google Maps imagery for textures, giving the crew a good idea of what the desired location was going to be like. "John Letano from Flying Pictures, who specialises in aerial cinematography, was able to come in and use our system to familiarise himself with what he needed to be shooting in the future," explains Chamney. "We ended up having the OptiTrack system used in six sequences and 320 shots."



    By using motion capture to track the camera’s physical position, Nvizage and key crew members like DP Greig Fraser could visualize how live action and CG elements interacted with each other in real time. This allowed the team to flesh out shot and lens ideas before principal production set anything in stone. As the scope of the shot assignments changed from extreme wide shots of battlefields to close-ups of pivotal character transformations, the Insight VCS’ ability to validate production strategies became increasingly valuable. For instance, when trying to evoke the cavernous look established by the concept art for Ravenna’s Throne Room, Fraser used the VCS’ real-time display of the CG environment to assess how different lenses would paint the room. He found his match in a wide angle.
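
    That lens comparison comes down to simple geometry: horizontal field of view follows from focal length and sensor width. A quick illustration in Python (the sensor width here is a generic Super 35 figure, not the production's actual camera spec):

        import math

        def horizontal_fov(focal_length_mm, sensor_width_mm=24.89):
            """Horizontal field of view, in degrees, for a given lens."""
            return math.degrees(2 * math.atan(sensor_width_mm /
                                              (2 * focal_length_mm)))

        for focal in (18, 25, 50):
            print(f'{focal}mm lens -> {horizontal_fov(focal):.1f} degrees')
        # An 18mm wide angle covers roughly two and a half times the field
        # of a 50mm, which is what sells the cavernous scale of the room.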

     



    Since the system’s motion-capture data can be mapped to any type of CG camera in Autodesk MotionBuilder, the look of a handheld, car-mounted or heli-mounted unit was easily attainable. The VCS’ software plugin and mappable controllers also allowed the team to adjust camera speed easily while in the virtual environment, as well as automatically smooth out any accidental bumps and jolts that came with the rough terrain. Streamed shots were ported into MotionBuilder, facilitating instant takes that could be screened or carried on into Maya for further animation and rendering.
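
    That smoothing is conceptually just a low-pass filter over the captured camera channels. This is not MotionBuilder's actual filter, only a minimal sketch of the idea:

        import numpy as np

        def smooth_channel(samples, window=9):
            """Moving-average filter for one captured camera channel
            (e.g. translateX over time); the window size is illustrative."""
            kernel = np.ones(window) / window
            padded = np.pad(samples, window // 2, mode='edge')
            return np.convolve(padded, kernel, mode='valid')

        # A jittery take: a linear dolly move plus handheld noise.
        frames = np.linspace(0.0, 10.0, 240)
        noisy = frames + np.random.normal(scale=0.05, size=frames.size)
        steady = smooth_channel(noisy)  # bumps and jolts averaged away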

    During the film’s Troll Bridge sequence, Nvizage added OptiTrack’s body-mocap system to the mix to record performance footage of VFX Supervisor Cedric Nicolas-Troyan. Lurching about with arm extenders, he gave a performance that could then be scaled up into the 20-foot troll for the film; with this system, not only can any mocapped character be scaled up, but so can any environment. Incorporated into the sequence as a supplement to the production’s pre-existing animation, his troll-like gestures provided the realistic movement the scene had been lacking. Other collateral gained from the shoot, like step diagrams, hand-impact points, and spatial measurements revealing the distance between camera and actor, could be used by the VFX team as detailed reference pieces for their design work.
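
    Scaling a captured performance like that is, at its simplest, a matter of multiplying the skeleton's translation channels while the joint rotations transfer unchanged. A toy sketch, with the scale factor assumed from the article's 20-foot figure:

        import numpy as np

        # Roughly a 6-foot performer scaled to a 20-foot troll (illustrative).
        SCALE = 20.0 / 6.0

        def scale_performance(root_translations, factor=SCALE):
            """Scale per-frame root translations; joint rotations carry over
            as-is, since angles are independent of the character's size."""
            return np.asarray(root_translations) * factor

        # Per-frame root positions from the capture, in metres.
        capture = [(0.0, 1.0, 0.0), (0.1, 1.02, 0.0), (0.2, 1.01, 0.1)]
        print(scale_performance(capture))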

    New Tech

    nCam is a real-time camera-tracking technology which Nvizage showed at FMX in May. All the previz assets created in Maya and MotionBuilder can now be repurposed live at the time of shooting. nCam's advantage over encoded camera-tracking systems is that it can track any kind of camera setup, whether on a crane or dolly or hand-held, and it works on stage or outside on location. The position and orientation of the virtual camera is driven in real-time by the movement of the on-set camera.

    Virtual backgrounds or virtual foreground objects can be added into plates live at the time of shooting with nCam. Digitally animated assets can be streamed into nCam from software such as MotionBuilder. nCam has support for digital video I/O; green- or blue-screen images can be keyed out by nCam's internal keyer and replaced with virtual assets, providing real-time augmented reality. Right now, nCam is in the beta stage of testing.
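
    nCam's internal keyer is proprietary, but the principle is ordinary chroma keying. A toy Python/OpenCV version, with hypothetical file names, just to show the idea of pulling the green screen and slotting a virtual asset behind the live plate:

        import cv2
        import numpy as np

        # Hypothetical inputs: a green-screen plate and a rendered CG
        # background of the same resolution.
        plate = cv2.imread('greenscreen_plate.png')
        cg_background = cv2.imread('virtual_set_render.png')

        # Key the green range in HSV; the thresholds are illustrative.
        hsv = cv2.cvtColor(plate, cv2.COLOR_BGR2HSV)
        green_mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))
        green_mask = cv2.GaussianBlur(green_mask, (5, 5), 0)  # soften edge

        # Foreground alpha is simply 'everything that is not green'.
        alpha = (255 - green_mask).astype(np.float32) / 255.0
        alpha = alpha[..., None]  # broadcast over the colour channels

        composite = (plate * alpha + cg_background * (1.0 - alpha))
        cv2.imwrite('composite.png', composite.astype(np.uint8))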



     

     

