CGSociety :: Production Focus

    16 November 2011, by Renee Dunlop



    Real Steel is more than an exciting and heartwarming film amidst cheering crowds and clashing metal fists. The real stars and the gears that brought Real Steel to the big screen were the brilliant artists at Digital Domain. Breaking ground as well as breaking body armor, Digital Domain shares some of the details of their latest production.


     

    VIRTUAL PRODUCTION

    Step one was Performance Capture, with a focus on capturing the motion exactly as director Shawn Levy wanted it, free of distractions like lighting, camera angles, or frame composition. Every round of every fight was choreographed and performed in a motion capture volume at Giant Studios.


    At the same time, data from the real-world principal photography locations were gathered in order to build an exact duplicate in the form of a virtual set. Under the supervision of Digital Effects Supervisor Swen Gillberg, a small team was sent to Detroit, where Real Steel was to be filmed, to survey and photograph the locations. Using the collected data, Digital Domain (DD) created lightweight photographic versions of the locations in Maya. Giant Studios, where the mocap was performed, turned those files into MotionBuilder environments. Those digital environments allowed the camera capture step to work within a simulated version of the location where the principal photography would eventually be shot.


    Next, the mocap and the virtual environments gathered in Detroit were combined. Using a virtual camera, Levy shot a temporary version of the fight scenes to determine camera angles and framing as the camera capture process followed the mocap’d action. The approved shots were edited together, creating a previs version of the scenes that became locked edits of the fight sequences. The result was a fully prepared Real Time Visualization (RTV), ready for principal photography using the SimulCam system. “We would build realistic edits that eventually ended up, often shot for shot, in the movie,” said Gillberg.




    Using the SimulCam on location in Detroit, the DP and camera operator could “see” the virtual robots in combat in the (empty) live action boxing ring using what Gillberg described as “a fancy video game controller with a TV screen on it”. By placing mocap reflectors on the camera, Giant was able to record the camera’s position and run a realtime playback of the motion capture into the camera feeds, allowing the cameraman and director to see a proxy version of the CG characters, i.e. the boxing robots, in realtime through the eyepiece of the camera. This allowed for more organic and visceral camera moves and generally faster production turnaround. “Sometimes we were doing five or six setups in an hour, which is ridiculously fast,” said Gillberg. “By working this way there was no need for postvis and edits were turned over early.” Because the virtual robots performed what had been pre-approved during performance capture, with composition defined in the camera capture phase, the crew could shoot live action fans cheering a boxing match that could really only be seen through the SimulCam.
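    As a rough illustration of the principle, the sketch below drives a pinhole projection of pre-captured robot joints with a tracked camera pose and burns the result over a live frame as a proxy overlay. Every name and number here is invented for illustration; this is not Giant Studios’ actual system.

        import numpy as np

        def project(points_world, cam_pos, cam_rot, focal_px, center_px):
            """Pinhole projection of world-space points into pixel coordinates.
            cam_rot is a 3x3 world-to-camera rotation from the tracked pose."""
            cam = (cam_rot @ (points_world - cam_pos).T).T   # into camera space
            cam = cam[cam[:, 2] > 0]                         # keep points in front
            return cam[:, :2] / cam[:, 2:3] * focal_px + center_px

        def overlay(frame, pixels, radius=3):
            """Burn the proxy robot joints into the live frame as green markers."""
            h, w, _ = frame.shape
            for x, y in pixels.astype(int):
                if 0 <= x < w and 0 <= y < h:
                    frame[max(0, y - radius):y + radius,
                          max(0, x - radius):x + radius] = (0, 255, 0)
            return frame

        # One frame: three "robot joints" from the pre-approved mocap, seen
        # through a camera whose pose comes from the reflector tracking.
        joints = np.array([[0.0, 0.5, 5.0], [0.3, -0.2, 5.0], [-0.3, -0.2, 5.0]])
        frame = np.zeros((720, 1280, 3), dtype=np.uint8)     # stand-in video feed
        px = project(joints, cam_pos=np.zeros(3), cam_rot=np.eye(3),
                     focal_px=1100.0, center_px=np.array([640.0, 360.0]))
        frame = overlay(frame, px)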


    “They would line up, we would hit play on the motion capture, it would play back and we’d shoot the plate with Hugh Jackman in the background cheering or controlling his robots or whatever,” said Gillberg. “Not only did this provide a speedier shoot schedule, but we got a fantastic temp shot that includes CG representations of the robots, which can then drop right into the edit for reviews.”




    The many advantages of working this way are impressive. The plates were far more organic than those often captured with more traditional processes. Daily shoot schedules were streamlined, and with so much defined during the early stages, nearly every subsequent step could be applied directly to what had already been seen on screen. Takes were approved and plates were turned over to DD in record time, so animation, lighting, and compositing could begin earlier.


    “We could go straight from ingestion – without any humans touching it – past tracking, past layout, past animation, straight into lighting,” explained Gillberg. “It would publish the mocap onto our high res robots to send the live rigs over to lighting, but it would also kick off a V-Ray lighting pass without anyone touching it – just a very automated ingestion process. It would kick off a V-Ray lighting pass with the camera and the mocap on the high res rigs, spit out a really rough pass of lighting, comp it into the plate, then shoot it over to our viewing station for dailies. So before anyone touched it there was a V-Ray render of the robot in the plate. Not that this was finaled by any means, but talk about a great temp. That is pretty much virtual production in a nutshell.”
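    The hands-off flow Gillberg describes can be sketched as a chain of automated steps, from ingest to dailies. Every function name and file path below is hypothetical; it mirrors only the shape of the process, not Digital Domain’s actual pipeline.

        def ingest(take):
            """Register the turned-over elements for one take (paths are made up)."""
            take["plate"] = f"/shows/real_steel/{take['name']}/plate.exr"
            take["camera"] = f"/shows/real_steel/{take['name']}/camera.abc"
            take["mocap"] = f"/shows/real_steel/{take['name']}/mocap.fbx"
            return take

        def publish_mocap_to_rig(take):
            # Retarget the captured motion onto the high res robot rig.
            take["rig"] = take["mocap"] + ".retargeted"
            return take

        def rough_vray_pass(take):
            # Kick off an unattended render of the robot through the shot camera.
            take["render"] = take["rig"] + ".vray_rough.exr"
            return take

        def comp_into_plate(take):
            # Slap-comp the rough render over the plate for the dailies temp.
            take["temp"] = take["render"] + ".over_plate.mov"
            return take

        def send_to_dailies(take):
            print("dailies <-", take["temp"])
            return take

        STEPS = [ingest, publish_mocap_to_rig, rough_vray_pass,
                 comp_into_plate, send_to_dailies]

        take = {"name": "fight_rd2_0410"}
        for step in STEPS:          # no human touches the take until dailies
            take = step(take)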




    IMAGE-BASED CAPTURE

    Digital Domain used Image-Based Capture (IBC) when the robots were interacting with Max (Dakota Goyo) or Charlie (Hugh Jackman), for actions such as a robot picking Max up, or to match proper eyelines. A good example of this was an early sequence in which Ambush, the blue ’bot who fights a bull, comes out of a cluttered truck. A stunt actor on stilts wearing a mocap suit performed on behalf of Ambush. Giant Studios set up small mocap volumes, triangulated all five of the image-based cameras, then tracked them to a 3D solve on the stunt actor. “In hindsight the IBC was a little problematic because the eventual motion that we got out of it often looked awkward, because their balance was slightly off. So the fidelity of the capture wasn’t important, because we had to do so much keyframing on top of it. The cleanup of the guy in a mocap suit can be extremely time consuming.”
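    For a sense of what triangulating cameras to a 3D solve involves, here is a minimal sketch of linear (DLT) triangulation: a marker observed in several calibrated views is recovered as one 3D point by least squares. The camera matrices and the marker are invented for illustration.

        import numpy as np

        def triangulate(proj_mats, pixels):
            """Recover one 3D point from N calibrated views by linear least squares.
            proj_mats: list of 3x4 projection matrices P = K[R|t]
            pixels:    list of (u, v) observations of the same marker."""
            rows = []
            for P, (u, v) in zip(proj_mats, pixels):
                rows.append(u * P[2] - P[0])   # each view adds two linear
                rows.append(v * P[2] - P[1])   # constraints on the point
            _, _, vt = np.linalg.svd(np.stack(rows))
            X = vt[-1]                         # null-space vector of the system
            return X[:3] / X[3]                # dehomogenize

        # Two toy cameras a metre apart, both looking down +z.
        K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

        marker = np.array([0.2, 0.1, 4.0, 1.0])            # ground-truth position
        obs = [(P @ marker)[:2] / (P @ marker)[2] for P in (P1, P2)]
        print(triangulate([P1, P2], obs))                  # ~ [0.2, 0.1, 4.0]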


    SWEN’S KIDS

    There were never more than 500 extras on any day, but some arena shots needed as many as 20,000 spectators. The solution was a clever crowd system of Gillberg’s that digitally created the boxing crowd, a process that became known as Swen’s Kids. “I’m not sure how it eventually ended up being Swen’s Kids, but it did. It was a photographic setup rather than a traditional Massive setup. Only the people on the floor in the arena shots are real, everything above that is synthetic. The stadiums were set extensions, and all the people in the stands were part of those set extensions.” Beyond the floor, every spectator in the stands was generated with a card-based system that replicated boxing audiences in the thousands.



    To create the large audience, Gillberg shot roughly 80 extras with three EX3 cameras in HD portrait mode, one set at eye level, one from medium-high and one from very high: “individual people, so one person per card. We had them run through 15 seconds of sitting, then 15 seconds of clapping, then a stand into a cheer, and 15 seconds of idle.” A second round was shot at three-quarter front, “three-quarter profile, three-quarter back, back, and then put the person up on a 10-foot high rostrum platform that had a level camera, a low camera, and a very low camera. Just by running through that scenario, three cameras, one minute per orientation, five orientations, ended up being 30 minutes per person.” This efficiently covered every camera angle needed in the stadium.


    The shots were projected on tiles using “some very fancy gizmos in Nuke” that would determine the camera’s frustum. “We’d put a CV at the bottom of every seat. Nuke would slap a card on every CV the camera saw, face the card to camera, figure out which card it wanted to grab; if the camera was high in the stadium looking down, it would grab cards that looked at the back of people’s heads.” If the camera was looking across the stadium it would see people facing it, each behaving with the appropriate action for the scene. “That way we could render a 20,000 person stadium” and view them en masse, or, because the shots were in high definition, zoom all the way in to roughly 12 people across. As the camera drifts across the stadium, “the cards parallax against each other so you get away with quite a bit of camera movement even though they are two-dimensional cards.” Which of the four recorded actions played back was driven by a time-slider rather than left to the computer, adding a natural feel to the crowds.
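    The core trick, picking which recorded orientation a card should play based on where the camera sits relative to the seat, can be sketched in a few lines. The orientation names and the 45 degree binning below are illustrative stand-ins for what the Nuke gizmos actually did.

        import numpy as np

        ORIENTATIONS = ["front", "three_quarter_front", "three_quarter_profile",
                        "three_quarter_back", "back"]   # shot at ~45 degree steps

        def pick_clip(seat_pos, seat_facing, cam_pos):
            """Choose the recorded orientation whose angle best matches the view."""
            to_cam = cam_pos - seat_pos
            to_cam = to_cam / np.linalg.norm(to_cam)
            angle = np.degrees(np.arccos(np.clip(np.dot(seat_facing, to_cam),
                                                 -1.0, 1.0)))
            return ORIENTATIONS[min(int(angle // 45), len(ORIENTATIONS) - 1)]

        seat = np.array([0.0, 0.0, 0.0])
        facing = np.array([0.0, 0.0, 1.0])              # the extra faces the ring
        # A camera high behind the stands mostly sees the backs of heads...
        print(pick_clip(seat, facing, np.array([0.0, 8.0, -20.0])))  # back-ish
        # ...while a camera across the ring sees the extra head-on.
        print(pick_clip(seat, facing, np.array([0.0, 2.0, 20.0])))   # front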



    LIGHTING

    According to CG Supervisor Paul George Palop, the key to lighting Real Steel was covering “every set and every lighting setup incredibly well with a lot of photographs, HDR and survey data. We could virtually reconstruct any setup at any moment through what we called the light kit.” About 80% of the effects work was done in Maya, a decision made early on so the lighting and effects teams were on the same package, taking advantage of its particle and fluids systems for dust and debris. (Houdini was used for the more complex effects, such as the hydraulic fluid that spews out during the fight scenes.)

     


    The light kit consolidated all the information gathered on set into a virtual environment that could be brought into Maya. It contained an HDR probe in which the brightest components of the HDR had been extracted and placed in 3D space. “We knew exactly where those lights were in world space so we could map them back on to geometry and have a much more accurate environment for lighting,” said Palop. Using V-Ray to render the robots and accommodate the extensive raytracing, the lighting accuracy got them “that much closer to photorealism.” Thanks to the HDRs, lighters were able to get 80% of the way there in the first few iterations, then fine-tune the rest.
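    A toy version of that extraction step might look like the following: the hottest texels of a lat-long HDR probe are converted into directions, which survey data could then pin to world-space positions on the set geometry, plus RGB intensities. The threshold and the probe itself are invented for illustration.

        import numpy as np

        def extract_lights(hdr, threshold=50.0):
            """hdr: (H, W, 3) float lat-long environment map. Returns a
            (direction, rgb) pair for every texel brighter than the threshold."""
            h, w, _ = hdr.shape
            lum = hdr.mean(axis=2)
            ys, xs = np.nonzero(lum > threshold)
            theta = (ys + 0.5) / h * np.pi               # polar angle from straight up
            phi = (xs + 0.5) / w * 2.0 * np.pi           # azimuth around the probe
            dirs = np.stack([np.sin(theta) * np.cos(phi),
                             np.cos(theta),
                             np.sin(theta) * np.sin(phi)], axis=1)
            return list(zip(dirs, hdr[ys, xs]))

        # Toy probe: dim everywhere except one hot "arena lamp" texel.
        probe = np.full((8, 16, 3), 0.5)
        probe[1, 4] = (900.0, 850.0, 800.0)
        for direction, rgb in extract_lights(probe):
            print(direction, rgb)                        # one extracted light, high up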




    Several departments were involved in creating the light kit. “We started with integration, taking all the photography and measurements. That would go through compositing. Compositing would gather all the photography and location data from the surveys and create a scene file in Nuke.” The projections, extractions, and HDR were stitched in, and all the photography was color corrected to make sure everything was balanced to a neutral grade that would be used during lighting.
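    The neutral grade step can be pictured as a simple per-channel balance: scale the photography so a grey reference reads as a neutral 18% grey. A hedged sketch, with made-up chart readings:

        import numpy as np

        def neutral_grade(img, chart_rgb, target=0.18):
            """Scale each channel so a photographed grey-chart patch lands on a
            neutral 18% grey. img is a (H, W, 3) linear float image."""
            gain = target / np.asarray(chart_rgb)        # per-channel correction
            return img * gain

        photo = np.random.default_rng(0).uniform(0.0, 1.0, (4, 4, 3))
        patch_reading = np.array([0.21, 0.17, 0.15])     # warm-biased grey patch
        balanced = neutral_grade(photo, patch_reading)
        print(neutral_grade(patch_reading, patch_reading))  # -> [0.18 0.18 0.18]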


    Legacy built three practical robots: Noisy Boy, Ambush and Atom. The robots were between eight and nine feet tall, very heavy, and were used throughout the movie. DD had to build digital versions that matched them perfectly. Photographing the ’bots for reference was an arduous process that took several days, using polarized lights so DD could remove the highlights and extract the required texture information. DD color corrected the photos before passing them to the texture painters, who added detail and laid out the UVs, working with hundreds of separate texture maps. There were 17 robots in total: eight were hero robots and nine were background. “These robots were fairly complex so it was quite a task to get the textures done for just one robot,” explained Palop.

    One can only imagine.
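    The polarized-light trick works because a cross-polarizing filter blocks most specular reflection; subtracting the cross-polarized photo from the parallel-polarized one isolates the highlights that were removed. A minimal sketch, with synthetic images standing in for the real photographs:

        import numpy as np

        def separate(parallel, cross):
            """parallel, cross: (H, W, 3) linear float photos of the same surface
            taken with parallel- and cross-polarized filters."""
            diffuse = cross                                  # highlights filtered out
            specular = np.clip(parallel - cross, 0.0, None)  # what the filter removed
            return diffuse, specular

        # Synthetic stand-ins: a flat grey panel with one hot highlight.
        cross = np.full((4, 4, 3), 0.18)
        parallel = cross.copy()
        parallel[1, 1] += 0.6                                # specular hit
        diffuse, specular = separate(parallel, cross)
        print(specular[1, 1])                                # -> [0.6 0.6 0.6]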


