With the LightStage 2 system, Routh sat in a chair while a rotating semicircular arm fitted with 30 Xenon strobe lights swung around him every eight seconds. As the arm moved and the lights strobed, six synchronized Arriflex movie cameras positioned around Routh’s head - two more than for Molina - shot footage at 60 frames per second. Because 60 fps doesn’t quite stop motion, the crew bolted the cameras down and braced the actor’s head and neck. “It’s difficult for [people being captured] to hold still, but the more they are still, the sharper the capture,” says Hoover.
By blending all the images taken by the six cameras at one moment in time, the crew created one texture map to wrap around a 3D model of the actor’s head. “I hesitate to call it imagery, although it is an image,” says John Monos, CG supervisor. “We have algorithms that extract the reflective data stored in the captured images, so the map represents the reflectance of the skin.”
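The article doesn’t detail how the six views were combined, but the idea - merging several camera images, reprojected into the head model’s texture space, into a single map - can be sketched as a per-texel weighted average. Everything below (the function names and the weighting scheme) is an illustrative assumption, not Imageworks’ actual code:

```python
import numpy as np

def blend_camera_views(camera_images, camera_weights):
    """Merge per-camera images, already reprojected into the head
    model's UV space, into one texture map.

    camera_images:  list of (H, W, 3) float arrays, one per camera.
    camera_weights: list of (H, W) float arrays; a texel a camera saw
                    head-on gets a high weight, a grazing or occluded
                    texel gets a weight near zero (assumed scheme).
    """
    numerator = np.zeros_like(camera_images[0])
    denominator = np.zeros(camera_images[0].shape[:2])
    for image, weight in zip(camera_images, camera_weights):
        numerator += image * weight[..., None]
        denominator += weight
    denominator = np.maximum(denominator, 1e-6)  # guard uncovered texels
    return numerator / denominator[..., None]
```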
The process produced 480 reflectance map images - one for each 60 fps frame of the eight-second sweep, 70 gigabytes of textures in all - that wrapped the model according to the lighting in a shot. When lighting technical directors (TDs) positioned lights in a scene, a system developed by Imageworks automatically brought in the map with the matching lighting. Put simply, if a TD shined a light on the left side of Superman’s face, the system found a blended map of images shot by the six cameras when the strobe lights pointed at the left side of Routh’s face.
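That lookup amounts to matching a CG light’s direction against the captured strobe directions and pulling in the closest maps. A minimal sketch of such a nearest-neighbor scheme - the k-way blend and every name here are assumptions, not the studio’s system - might look like this:

```python
import numpy as np

def maps_for_light(light_dir, strobe_dirs, reflectance_maps, k=3):
    """Blend the k captured reflectance maps whose strobe directions
    best match a CG light. strobe_dirs is a (480, 3) array of unit
    vectors; reflectance_maps is the matching list of texture maps.
    """
    light_dir = light_dir / np.linalg.norm(light_dir)
    similarity = strobe_dirs @ light_dir       # cosine per strobe
    nearest = np.argsort(similarity)[-k:]      # k closest lighting conditions
    weights = np.maximum(similarity[nearest], 0.0)
    weights /= max(weights.sum(), 1e-6)
    return sum(w * reflectance_maps[i] for w, i in zip(weights, nearest))
```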
Because the man of steel didn’t have a frozen face, Imageworks developed algorithms to manage the reflectance data as the face moved - as animators changed digital Superman’s expression. Animators couldn’t work interactively with the photoreal faces; instead, they had simple shaded models in Maya. But an Image Based Rendering (IBR) tool provided feedback.
“Part of the difficulty in using IBR is having coverage,” says Monos. “If Routh’s eyes were looking one way, but the animators moved them the other way, it might reveal a portion of the eye that didn’t have good reflectance data. So, we used both IBR and traditional rendering.”
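Monos’s point suggests a per-sample fallback: render with IBR where the captured data covers the current view, and hand off to a traditional shader where it doesn’t. Here is a sketch of that hybrid, with the coverage metric and threshold invented purely for illustration:

```python
def shade(ibr_color, traditional_color, coverage, threshold=0.5):
    """Blend IBR and traditional shading by capture coverage.

    coverage: assumed 0..1 confidence that the reflectance capture
    actually saw this surface point from a compatible angle.
    """
    if coverage >= threshold:
        return ibr_color                 # trust the captured data
    t = coverage / threshold             # fade out rather than pop
    return tuple(t * a + (1.0 - t) * b
                 for a, b in zip(ibr_color, traditional_color))
```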
In addition to photographing Routh’s face, the crew motion-captured his body and facial expressions to help animators create the double’s performance. Although Imageworks captured basic body positions - running, walking, arms stretched forward, and flying - they concentrated more intensely on facial capture for close-ups of Superman’s head and shoulders. Digital Superman doesn’t talk in the film, but he could have. For facial capture, the team used techniques developed by Imageworks for 'The Polar Express' and 'Monster House'. For scenes of Superman flying, Routh “flew” on wires rigged on a 100-foot-long greenscreen stage. Often, though, Imageworks’ digital Superman replaced the greenscreen footage in final shots - sometimes completely, sometimes partially. Even so, the greenscreen footage provided reference. Jones also turned to Alex Ross drawings for inspiration. “Ross does comic book poses, but his drawings are from real life,” Jones says. “It was definitely a challenge working out what Superman’s flying pose would be, what happened to his body when he flew. He’s a man of steel, but if he looked too stiff, he’d look CG.”
Although Superman was sometimes real and sometimes digital when he flew, his cape was usually digital. “They couldn’t get the wind right in the greenscreen room,” says Jones. “If it blew too hard, Routh would squint and get bloodshot eyes.” Moreover, the digital cape could be art directed - even when Superman flew at 1,200 mph. For cloth simulation, the studio used Syflex software. To art direct the cape, animators blocked out poses that became targets for the simulation, and a team of technical directors led by Takashi Kuribayashi developed a workflow to sculpt the physics. For hair simulation - Superman’s and digital Lois Lane’s hair - the crew used in-house tools.
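The article doesn’t say how the blocked poses steered Syflex, but one common way to “sculpt the physics” is to nudge the solver’s output toward the target pose after each step, with an artist-painted strength. The sketch below is a guess at that idea under those assumptions, not the studio’s actual workflow:

```python
import numpy as np

def steer_cloth(sim_positions, target_positions, target_weight, dt):
    """Pull simulated cloth vertices toward an animator-blocked target.

    target_weight: assumed per-vertex 0..1 strength; 0 leaves the pure
    simulation, 1 effectively pins the vertex to the blocked pose.
    Applied after each solver step so dynamics still add detail.
    """
    blend = np.clip(target_weight * dt, 0.0, 1.0)[..., None]
    return sim_positions + blend * (target_positions - sim_positions)
```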
In addition to creating digital doubles, Imageworks gave Superman a digital city to fly through, the Daily Planet office building for Clark Kent and Lois Lane to work in, and, in the dramatic opening sequence, a shuttle, a burning airplane, and a baseball stadium. For aerial shots of Metropolis, the crew mapped photography onto geometry, but they also built a digital city within the city for close-up shots. “We created a city grid around the Daily Planet that’s one hundred percent digital,” says Hoover. “The ground, the cars, everything. Then we situated that into Manhattan as we know it and made all the streets work.” For the digital city, the modelers revamped some buildings from 'Spider-Man 2', but they modeled the Daily Planet from scratch, matching and greatly extending a two-story set and also a rooftop set.
“The Daily Planet is the largest building we have ever built here,” says Bruno Vilela, CG supervisor. “It’s 908 feet tall with a 30-foot globe on top, and it isn’t symmetric. We couldn’t build one façade and then replicate it.”
Textures painted in Photoshop and Body Paint added details to the complex geometry; the building has no displacement maps. As with 'Spider-Man 2', a proprietary rendering interface, BIRPS, provided artists with a way to handle the massive amount of data and move it with assigned shaders and lights through RenderMan.
Modelers also customized the surrounding areas to create a city that resembled New York but was not New York, removing such landmarks as the Statue of Liberty and the Chrysler Building and changing the bridges. And then, having built Metropolis and the Daily Planet, Imageworks destroyed them. “I think that the destruction of the Daily Planet pushed the edge of what we’re doing here in terms of simulation and compositing,” says Vilela.
In one shot, the globe on top of the building rolls off its base and crashes against the structure as water flows down its face. Imageworks created the entirely digital shot with an assist from Tweak Films, which simulated the water.
To destroy solid material, the Imageworks team used Maya and rendered the elements with RenderMan. For smoke and dust, they used a hybrid pipeline that moved data from Maya through Houdini into RenderMan or into Imageworks’ own Splat renderer. The latter pipeline also wrangled smoke and fire for the shuttle destruction scene.
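As a rough picture of that routing, here is a sketch with stub stages standing in for the real hand-offs; the article names only the endpoints (Maya, Houdini, RenderMan, and Imageworks’ Splat renderer), so the rule for choosing between them is an assumption:

```python
# Stub stages standing in for the real DCC hand-offs named in the article.
def export_from_maya(element):      return {**element, "stage": "maya_export"}
def simulate_in_houdini(payload):   return {**payload, "stage": "houdini_sim"}
def render_with_renderman(payload): return {**payload, "renderer": "renderman"}
def render_with_splat(payload):     return {**payload, "renderer": "splat"}

def route_element(element):
    """Route one destruction element down the hybrid pipeline: solid
    debris stays in Maya and renders in RenderMan; smoke, dust, and
    fire take the Maya -> Houdini leg, then go to RenderMan or Splat.
    The branch conditions are illustrative guesses.
    """
    payload = export_from_maya(element)
    if element["kind"] in ("smoke", "dust", "fire"):
        payload = simulate_in_houdini(payload)
        if element.get("dense_volume"):
            return render_with_splat(payload)
    return render_with_renderman(payload)

# Example: a dense smoke plume ends up in the Splat renderer.
print(route_element({"kind": "smoke", "dense_volume": True}))
```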