Monsters vs. Aliens


    Making S-3D Gel at DreamWorks for the big comedic invasion flick of the year.

    CGSociety :: Production Focus
    31 March 2009, by Renee Dunlop

    Start with a giant scoop of translucent gelatinous matter and animate it as a character. Train him to metamorphose as needed for script or gag. Light him in a way that believably defines his specular surface and pellucid interior without losing the volume of his ever-changing shapes. Make him a key character and add Stereoscopic 3D. Whip the ingredients together for just under two years.


    “I was the primary rigger in the character TD department that worked on B.O.B.; pretty much the only thing I did on MvA for almost two years. It was 94 weeks.”
    Terran Boylan, Character TD.


    Any of these feats would be a task for even the most seasoned crew, but DreamWorks decided to tackle them simultaneously on their first fully Stereoscopic 3D (S-3D) film, Monsters vs. Aliens. B.O.B., short for Benzoate-Ostylezene-Bicarbonate, was such a challenge that the DreamWorks team implemented a task force just to bring B.O.B. to the small, big, and even bigger screen.


    Slowly building B.O.B. © DreamWorks Animation
    B.O.B. ponders the possibilities. © DreamWorks Animation
    B.O.B. poses. © DreamWorks Animation
    © DreamWorks Animation
    DreamWorks doesn’t use camera rigs that toe in, or the resulting converging axes, so the team tends to avoid the term convergence. Another trick is the stereoscopic window, the black frame around an image. “Traditionally, the screen and the stereo window are on the same plane, but we use something called the floating stereo window, which is an optical mask that makes the frame appear to float at some distance in front of or behind the screen. We can tilt it, angle it, and animate it through a shot. What ends up happening is the frame of the image becomes part of the depth composition by appearing as if the screen is closer or further away.”
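
    A rough sketch of the floating-window idea, under the usual convention that crossed disparity reads as closer than the screen: masking opposite edges of the two eyes by a few pixels gives the frame itself some parallax, and animating those mask widths floats the window through a shot. This is an assumption-laden illustration, not the studio's implementation.

def floating_window_masks(parallax_px):
    """Per-eye black-mask widths (pixels) that float the window by parallax_px.

    Positive values float the frame toward the audience, negative values push
    it behind the screen. Returns widths for (left eye left edge, left eye
    right edge, right eye left edge, right eye right edge).
    """
    p = abs(parallax_px)
    if parallax_px >= 0:        # crossed disparity: frame floats forward
        return (p, 0, 0, p)
    return (0, p, p, 0)         # uncrossed: frame recedes behind the screen

print(floating_window_masks(8))   # (8, 0, 0, 8)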

    By using a tool they call a yardstick, McNally’s team was able to determine the stereo shift, how far something sits from the camera, and from that how an image will look in a theater format. “We also have something we’ve developed called the Stereo Volume Measurement, which will tell you how round or flat a character is at any particular place in space. We want to have characters look consistent from shot to shot, and don’t want them to feel flat like a cardboard cutout,” a consequence of changing lenses from shot to shot. “Switching from a 24 millimeter lens to a 35 millimeter can actually squash a character by 50 percent. It’s pretty aggressive in terms of how much distortion in depth you get from going from a wider lens to a more normal lens. A zoom lens gets incredible flattening. We don’t try to avoid long lenses, but have tools built in that allow us to put volume back into the character without setting the depth so deep that your eyes hurt.”
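
    As a back-of-the-envelope analogue of that flattening (not the actual Stereo Volume Measurement), the sketch below compares the spread of parallax across a character before and after a lens change with the framing held, under a parallel-camera model; perceived roundness is nonlinear in parallax, so the on-screen ratio understates what the eye sees. All constants are assumptions.

def disparity_range(focal, interaxial, depth_extent, distance):
    """Approximate spread of parallax across a character (small-depth limit)."""
    return focal * interaxial * depth_extent / distance ** 2

f_wide, f_long, i = 0.024, 0.035, 0.06
d_wide = disparity_range(f_wide, i, 0.5, 3.0)
# Keeping the framing means the camera distance scales with the focal length.
d_long = disparity_range(f_long, i, 0.5, 3.0 * f_long / f_wide)
print(d_long / d_wide)   # < 1: the character reads flatter on the longer lens
# Scaling the interaxial by f_long / f_wide restores the on-screen roundness.
print(disparity_range(f_long, i * f_long / f_wide, 0.5, 3.0 * f_long / f_wide) / d_wide)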

    S-3D
    Starting from zero with no S-3D tools in place, DreamWorks first needed to set up a pipeline and train a staff in the nuances of the third dimension. Stereoscopic Supervisor Phil “Captain 3D” McNally led the charge into the depths. “All stereo settings boil down to two things. One, how far you separate the two cameras from each other: the distance between the two axes of the lenses, called the interaxial or interocular distance. As that gets wider, we are adding more stereo volume into the space. The second parameter is, given how much volume and depth you put in, where in that depth you are going to place the screen position, what DreamWorks calls the ZPS (Zero Parallax Setting). That will determine how much something is playing towards us and how much it’s playing away from us.”
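
    The two controls McNally describes map onto simple geometry. The sketch below is a minimal illustration, not DreamWorks' proprietary toolset: it assumes a parallel-camera rig (no toe-in) where the ZPS is set by shifting the two images horizontally, and the parameter names and units are assumptions.

def screen_parallax(depth, interaxial, focal_length, zps_depth):
    """Signed parallax (image-plane units) of a point at `depth`.

    Raw disparity for parallel cameras is focal_length * interaxial / depth.
    Shifting both images so that points at zps_depth line up places the
    screen plane there. Negative values play in front of the screen,
    positive values behind it.
    """
    raw = focal_length * interaxial / depth
    shift = focal_length * interaxial / zps_depth
    return shift - raw

# Widening the interaxial adds stereo volume; moving the ZPS decides how much
# of the scene plays in front of, versus behind, the screen.
print(screen_parallax(depth=2.0, interaxial=0.06, focal_length=0.035, zps_depth=5.0))   # in front
print(screen_parallax(depth=20.0, interaxial=0.06, focal_length=0.035, zps_depth=5.0))  # behind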

    Stereoscopic Supervisor Phil 'Captain 3D' McNally.

    Concept art for the loveable B.O.B. © DreamWorks Animation
    Character Tech. Director, Terran Boylan

    A far- and near-based setup simplified the artist's job of determining and correcting the stereoscopic shift. Without those controls, every time a lens was changed, the distance from the lens to the nearest object would require the calculation of a new interaxial distance. “You might have a number which is 1.0582, and you want to give it 5% more stereo volume, but the numbers are so complicated that it’s hard to get a language of what you want and how to get it. When we use the tools we have now, we are talking a simple language of pixel shifts. I’ll give notes like ‘near plane should be set on Dr. Cockroach at negative 20 pixels and make the back of the room at plus 10’. If we decide to make the room look deeper, I can say ‘OK, let’s make the background plus 15’.” It’s a surprisingly sensitive process. “If we put in a sky element and it looks a little too close to some buildings, we’ve done as little as shift that background by half a pixel in one eye to drop it back in space just enough so it feels more natural.”
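
    As an illustration of how that pixel-shift language can be turned back into camera settings, the hypothetical helper below solves for an interaxial distance and a ZPS depth from two depth and pixel targets, again assuming a parallel-camera rig with a horizontal image shift; the function and parameter names are not the studio's tools.

def solve_stereo(near_depth, far_depth, near_px, far_px, focal_length, px_per_unit):
    """Return (interaxial, zps_depth) that hit the requested pixel parallax.

    With parallax(Z) = k * i * (1/zps - 1/Z) and k = focal_length * px_per_unit,
    two depth/parallax pairs give two equations in the two unknowns i and zps.
    """
    k = focal_length * px_per_unit
    interaxial = (near_px - far_px) / (k * (1.0 / far_depth - 1.0 / near_depth))
    inv_zps = near_px / (k * interaxial) + 1.0 / near_depth
    return interaxial, 1.0 / inv_zps

# e.g. a character three metres away held at -20 pixels, the back of the room
# at twelve metres at +10 pixels, for a 2048-pixel-wide frame on a 36 mm back.
i, zps = solve_stereo(3.0, 12.0, -20.0, 10.0, focal_length=0.035, px_per_unit=2048 / 0.036)
print(round(i, 4), round(zps, 2))   # roughly a 0.06 m interaxial, screen plane near 6 m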

    The B.O.B. Blob
    Five different departments worked on the animation and look of B.O.B. He was created and rendered entirely in proprietary software, using new technology. The first decision was whether B.O.B. should be created through FX or as a character. In earlier versions he was much more blob-like, more of a traditionally rendered isosurface, but within a few months it became clear that he couldn’t look like an effect in a world populated by the usual characters. Normally that would mean the design would be entirely an FX task instead of animation, but B.O.B. had to be treated as a main character and not as an effect, while character FX still handled much of his technology.

    © DreamWorks Animation
     

    Rigging and FX worked together to set up a system that actually used two different versions of B.O.B. One version was surface based, made of NURBS (Non-Uniform Rational B-Spline) patches, similar to any other character. That version had facial features but no arms, and could be deformed.

    The other version was particle based and came out of blob tessellation, a topology that can change every frame. Character TD Terran Boylan explains. “You can think of it as random triangles, similar to blob effects or isosurface effects. It’s a warped version of that mesh that gets rendered. The bottom line is that, while we used a surface-based model as part of creating B.O.B.’s model, the final model that was rendered was always a high-resolution polygonal model. They were parallel versions of the same character, but one was strictly based on blobs and the other was just a surface version of his character. Those two were combined and the result was a polygonal mesh. If B.O.B.’s arms are turned on, all the polygons that are associated with his arms are coming from the blob version of him, and the part of his anatomy that has his face and torso is informed by the surface version. In between, they are blended together.” The combined setup was still animator friendly and recorded the animation performance. The animators saw a version of B.O.B. with simpler controls, allowing more focus on the performance. It also let the character TDs add deformations for gags, like objects passing through his body that reacted to the passage. The character TD would work with whichever version worked best for each scene.
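
    Conceptually, the combination Boylan describes can be pictured as a per-vertex blend between two aligned versions of the same mesh, as in the sketch below; the array names and the use of NumPy are illustrative stand-ins for the proprietary rig.

import numpy as np

def combine_bob(surface_pts, blob_pts, blend_weight):
    """Blend two aligned vertex arrays into the single render mesh.

    blend_weight is per vertex in [0, 1]: 0 keeps the surface version
    (face and torso), 1 keeps the blob version (arms), and values in
    between cross-fade through the blend zone.
    """
    w = blend_weight[:, None]              # broadcast the weight over x, y, z
    return (1.0 - w) * surface_pts + w * blob_pts

# Stand-in data: in production both versions are evaluated from the same rig.
surface_pts = np.zeros((1000, 3))
blob_pts = surface_pts + 0.01
blend_weight = np.linspace(0.0, 1.0, 1000)
render_pts = combine_bob(surface_pts, blob_pts, blend_weight)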

    Boylan worked on B.O.B. exclusively for 94 weeks to bring the gelatinoid character to life. “We had a weekly, then bi-weekly, task force that included a couple of people from character TD, FX, character FX, lighting, R&D and production engineering, and me. There was a lot of R&D development as a result of B.O.B. We had a system called Pa_Blobs (Particle Blobs). Generally, the way blobs work is, a set of ellipsoidal particles define a shape and create an isosurface, a blending together of those ellipsoid particles. Our Pa_Blob system was set up to work with spherical or ellipsoidal blobs, but Jules Bloomenthal and Deepak Tolani in R&D added an additional curve-based blob, which was perfect for B.O.B.’s arms and fingers. That was a big technology change that allowed us to do a lot of shapes.”
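
    The sketch below illustrates the general blob idea: each particle contributes a smooth field, the fields are summed, and the surface sits wherever the sum crosses a threshold; a curve-based blob simply measures distance to a segment instead of a point, which is what makes tube-like arms and fingers practical. The falloff function and constants are generic, not Pa_Blobs itself.

import numpy as np

def point_blob(p, center, radius):
    """Smooth falloff field for a spherical blob (1 at the center, 0 beyond radius)."""
    d = np.linalg.norm(p - center) / radius
    return max(0.0, 1.0 - d * d) ** 2

def curve_blob(p, a, b, radius):
    """Same falloff, but measured to the closest point on segment ab."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return point_blob(p, a + t * ab, radius)

def inside(p, spheres, curves, threshold=0.5):
    """A point is inside the isosurface when the summed field exceeds the threshold."""
    field = sum(point_blob(p, c, r) for c, r in spheres)
    field += sum(curve_blob(p, a, b, r) for a, b, r in curves)
    return field > threshold

p = np.array([0.4, 0.1, 0.0])
spheres = [(np.array([0.0, 0.0, 0.0]), 1.0)]
curves = [(np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]), 0.4)]
print(inside(p, spheres, curves))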

    © DreamWorks Animation

    The result was that B.O.B.’s animation controls looked more like a Slinky toy, a cylindrical shape that could be posed. “One thing that was tricky,” said Boylan, “was making sure all the built-in gags played well with each other. There were definitely some constraints, but we didn’t want to restrict the animators too much.” A simplified rig was given to animation with controls that were reasonably familiar, basically a tube with arms that could turn on and off. One plate was included that could be placed at the proper tearing point when separation was needed, for scenes where B.O.B. tries to pass through something and breaks in two, connecting again in a later scene. Any more pieces had to be done in Character FX.

    © DreamWorks Animation

    Character FX
    Various gags included B.O.B.’s interaction with the environment. A special deformer called the ground deformer, written by Scott Cegielski in FX, showed the contact between B.O.B. and the ground when he’s sitting on it. Different parts of his face, which at times occupied two-thirds of his body, could blend on and off, disappearing into the blob surface then reemerging. His head could detach, and his eye could float and move separately from other parts of his face, even coming out of his body. “He was transparent,” Boylan added, “which means that all of his internal anatomy had to be visible. He had a mouth interior that needed to be visible without drawing unnecessary attention, and it had to deform properly. This affected the character rigging because traditionally, characters are built with layers, from the skeleton to the muscle structure to the skin surface. All of B.O.B.’s deformations had to be completely volumetric. That was something we didn’t realize until we started to have problems with the interior of his mouth poking through the back of his head, etc. We had to fall back and rethink our approach to deforming him.”

    One of the big gags throughout the film was B.O.B.’s arms turning on and off, appearing from his body. B.O.B.’s arms appeared when he needed to gesture, then disappeared back into the surface of his body. The rigging allowed the arms to be detached from the body if needed. “Another thing that was very helpful was per-vertex attributes, which allowed us to embed a lot of information into B.O.B.’s model. If you were to look at B.O.B.’s polygonal model, there is a vertex attribute called ‘arm weight’ that determines how much of that vertex is his arm. Parts on his torso would have a ‘0’ value, parts on his arm would have a ‘1’, and vertices midway would have a value in between. I was able to do that as part of the rig, and then that information, along with the surface texture and surface ID information, was picked up later on by Lighting for doing motion blur or other things. That was crucial. There were two things that were tricky with B.O.B. that made him different. One was generating proper surface normals and the other was generating motion blur.”
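
    The sketch below illustrates the per-vertex attribute idea in the abstract: the rig stamps an 'arm weight' value onto each vertex of the render mesh, and a downstream pass reads it back, here to blend motion vectors for motion blur. The data layout and names are assumptions, not DreamWorks' formats.

import numpy as np

def stamp_arm_weight(mesh_attrs, arm_vertex_ids, blend_vertex_ids, num_vertices):
    """Store a per-vertex 'arm_weight': 1 on the arms, 0 on the torso, 0.5 between."""
    w = np.zeros(num_vertices)
    w[list(arm_vertex_ids)] = 1.0
    w[list(blend_vertex_ids)] = 0.5
    mesh_attrs["arm_weight"] = w
    return mesh_attrs

def motion_vectors(mesh_attrs, surface_velocity, blob_velocity):
    """Pick per-vertex velocities for motion blur using the stored weight."""
    w = mesh_attrs["arm_weight"][:, None]
    return (1.0 - w) * surface_velocity + w * blob_velocity

attrs = stamp_arm_weight({}, arm_vertex_ids=[3, 4, 5], blend_vertex_ids=[2], num_vertices=6)
blurred = motion_vectors(attrs, np.zeros((6, 3)), np.ones((6, 3)))
print(attrs["arm_weight"], blurred[:, 0])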

    © DreamWorks Animation

    Interior bubbles gave B.O.B. depth and volume inside. “He’s made up of a particle system of 12,000 volumetric bubbles that were part of the character rig, the most expedient place to put it, and it helped to deform the bubbles properly along with everything else.” A simulation added a little spring motion to the bubbles, based on dynamics in the character’s setup as well. “If the motion was extreme enough to have the bubbles drift out of his body, they wouldn’t be counted and rendered. There was nothing for them to collide with inside of him. It made sense for him. He’s not supposed to be made out of water with plastic bubbles inside; he’s supposed to be a viscous fluid where the bubbles are part of the volume.”
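
    A minimal sketch of that behavior, with assumed constants and an assumed inside-the-body test: each bubble is pulled back toward a rest position by a damped spring, and bubbles that drift outside the body are simply skipped at render time rather than collided.

import numpy as np

def step_bubbles(pos, vel, rest_pos, dt=1.0 / 24.0, stiffness=40.0, damping=4.0):
    """One damped-spring step pulling bubbles back toward their rest positions."""
    accel = stiffness * (rest_pos - pos) - damping * vel
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel

def renderable(pos, inside_body):
    """Keep only bubbles still inside the body; the rest are skipped, not collided."""
    keep = np.array([inside_body(p) for p in pos])
    return pos[keep]

rng = np.random.default_rng(0)
rest = rng.uniform(-0.5, 0.5, size=(12000, 3))         # 12,000 bubbles, as in the film
pos = rest + rng.normal(scale=0.05, size=rest.shape)   # jostled by the animation
vel = np.zeros_like(rest)
pos, vel = step_bubbles(pos, vel, rest)
visible = renderable(pos, inside_body=lambda p: np.linalg.norm(p) < 0.9)
print(len(visible))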

    The meshes ended up being quite dense, hundreds of thousands of polygons, presenting yet another challenge, namely performance. Boylan ended up rewriting quite a bit of code in C++ to optimize performance. “Normally we work with a very C-like scripting language that is interpreted, but B.O.B. was a battle on two fronts; one was artistically achieving the desired effect, and the other was making sure his performance was sufficiently fast for our production.”

    Two Sides to B.O.B.

    It’s important that all the stereo is locked down before a scene goes to Lighting and Rendering, which was particularly challenging because B.O.B. was mostly reflective and refractive. Normally, surfaces in a movie are predominantly diffuse, but in B.O.B.’s case it was tougher to define his shape. Rim lights, normally used to separate something from the background, were ineffective since the light wouldn’t land the way you expected it to. Digital Supervisor Mahesh Ramasubramanian described the solution: “In B.O.B.’s case, we needed so much control and fine tuning, we ended up putting him together more in compositing. He was broken down into multiple layers of reflections and refractions and rim lighting and normal passes.”


    This meant getting the surface normals to work so the specular highlights would behave in lighting. When something highly reflective is rendered in S-3D, the highlight location in the left and right eyes can be very different, causing two highlights instead of one. Though this occurs in nature, in real life our perception tends to be more forgiving. “We don’t tend to focus on those reflections and identify what is seen in them,” explained Ramasubramanian, “but in our case, B.O.B. is a big main character and you are supposed to look at him. You are trying to make sense of what’s being reflected on him. Even though it’s realistic to have the left and right eye different, we couldn’t let that happen; we had to have more control over the reflections. Also, if the surface normals changed quickly around high-frequency areas, such as near his eyes or the folds on the bottoms of his feet, light would reflect off those surfaces.”

    The first step in solving this problem was to render the highlights for both eyes from nearly the same camera position. “The reflection would look like it was on the surface, like a texture map, but they needed to have some depth to them. What we did is use the interocular distance to bring the two eyes closer together only for the reflection and refraction passes, with almost no difference between the left and right.”
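
    Schematically, the trick amounts to scaling the camera separation per render pass, as in the hedged sketch below; the pass names and the scale factor are illustrative, not the production's actual values or pipeline.

def eye_camera_offsets(interocular, passes, reflection_scale=0.1):
    """Per-pass horizontal camera offsets for the left and right eyes.

    Geometry passes keep the full separation; reflection and refraction passes
    use a much smaller one, so the highlights land in nearly the same place in
    both eyes while the rest of the frame keeps its full depth.
    """
    offsets = {}
    for name in passes:
        scale = reflection_scale if name in ("reflection", "refraction") else 1.0
        half = 0.5 * interocular * scale
        offsets[name] = {"left": -half, "right": +half}
    return offsets

print(eye_camera_offsets(0.06, ["beauty", "reflection", "refraction", "rim"]))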

    When it Gels
    DreamWorks will be releasing Monsters vs. Aliens on every S-3D system that is available, with multiple deliveries for the different formats, taking into account the eyes’ physical ability to take in certain amounts of depth. “What we are designing for,” says McNally, “is screen size rather than the individual system. As the screen gets bigger, the scale of the depth increases. You can look at something on a monitor and feel like it’s very flat or shallow, then you see it on the IMAX and it’s incredibly deep because the scale has gone up with the screen size.

    “We are projecting on screens from 35, 40 feet to the biggest IMAX, over 100 feet. It’s important, when you are making Stereo movies, that you test on those public screens and not set your depth on a small monitor.”
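
    A quick, assumption-laden sanity check shows why: using the common rule of thumb that background parallax should not exceed roughly the human eye separation, the same pixel shift that is comfortable on a 40-foot screen can push the eyes past parallel on a 100-foot IMAX screen. The resolution and the 2.5-inch limit are illustrative assumptions, not delivery specs.

def max_background_parallax_px(screen_width_ft, image_width_px, eye_sep_in=2.5):
    """Largest positive (background) parallax, in pixels, before the eyes diverge."""
    inches_per_px = screen_width_ft * 12.0 / image_width_px
    return eye_sep_in / inches_per_px

for width in (35, 40, 100):
    print(width, "ft screen:", round(max_background_parallax_px(width, 2048), 1), "px")
# Roughly 10 pixels of background parallax is fine at 40 ft but not at 100 ft,
# which is why a check on a small monitor is not enough.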

    Fitting for a character that went from a small project to one that took a task force and a dedicated Character TD 94 weeks to complete!


    Digital Supervisor, Mahesh Ramasubramanian
     
    Related Links
    Monsters vs. Aliens
    Phil “Captain 3D” McNally
    Mahesh Ramasubramanian
    Terran Boylan



