About Chas Jarrett
Image: From flying cars to whomping willows, The Moving Picture Company produced over 250 shots for Harry Potter and the Chamber of Secrets.
Leonard Teo: You were the CG Supervisor for the film, what exactly does that mean?
Chas Jarrett: It means that I’m creatively and technically in charge of how we approach any shot that contains CG in the film. I’ve been on board from the very beginning of the project and was also part of the planning team for scheduling and allocating resources for the project. I’m very much part of the creative team that interacts with the Director, our clients and the visual effects crew for the production.
We go through a lot of meetings at the early stages to develop the look of things and discuss how things are going to move – the more creative side of it. Based on those meetings, I come back and start assembling a team of people to start thinking about those issues and dealing with them. Traditionally, I’ll pick a lighting lead (TD), a modeling lead and an animation lead, then go through meetings figuring out how to do the shots.
It’s really overseeing everything that gets done by the CG department. Anything that involves CG – rendering, motion blur, etc. – I’m the first person the client talks to within MPC. It also crosses over into 2D: I’m still responsible for the CG once it’s composited. Generally there will be a 2D compositing supervisor, and we’ll work together to find the best methods for the CG to sit in the scene. It’s a pretty broad job description, but basically it means that I’m in charge!
Leonard Teo: How many shots did MPC have and what sort of timeframe was involved?
Chas Jarrett: The crew topped out at 70 people in total. We had 251 shots, though only 244 made it into the film, as some were omitted for various reasons by the Director. I was working on Chamber of Secrets right from the start, as soon as we had finished Harry Potter and the Sorcerer’s Stone. The timeframe ran from September 2001 to October 2002 – over a year.
The beginning stages were tests to demonstrate that we could do the work and also demonstrate ideas on how we wanted things to look. We also spent about six months just developing custom tools for the effects.
Leonard Teo: Tell us about the Whomping Willow shots.
Chas Jarrett: One of the big scenes in terms of complexity was the Whomping Willow scene. It was complex because it was an entirely digital environment – the kids, the actors, the car, the tree – literally everything was CG in a handful of shots. The first thing that we identified as a problem was how to make the tree move and react naturally. We wanted to give animators control so that they could animate the basic shape of the tree, the limbs, the main branches and the trunk. But then we knew there was going to be tens of thousands of what we call “secondary branches” coming off the main limbs, and those secondary branches have more branches coming out of them, with leaves, etc. We needed to make all that move and sway and lag without having to be hand animated.
Julian Mann (Lead Technical Director, MPC), wrote a system called “Cantilever” inside Maya. This is essentially a dynamics solver made specifically for the tree. Effectively it handled all of the secondary branches. This was an extremely useful tool and helped us to automate a lot of the processes when animating the tree.
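The principle behind a lag solver like this can be sketched in a few lines. The snippet below is purely illustrative – a hand-rolled damped spring in Python whose names and constants are invented, not MPC's actual Cantilever code – but it shows how a branch tip chasing a hand-animated parent target produces overshoot, sway and lag without any hand animation.

```python
# Hypothetical sketch of a per-branch lag solver in the spirit of a tool
# like Cantilever: each secondary branch tip is a damped spring chasing
# the position its parent limb dictates. Constants are illustrative.

def lag_follow(targets, stiffness=0.3, damping=0.7, x0=0.0):
    """Return positions of a branch tip chasing a sequence of
    hand-animated parent targets, one solver step per frame."""
    x, v = x0, 0.0
    out = []
    for t in targets:
        v = damping * v + stiffness * (t - x)  # spring pull plus damping
        x += v
        out.append(x)
    return out

# The parent limb snaps to 1.0 and holds; the tip lags behind,
# overshoots, then settles – the sway-and-lag look for free.
path = lag_follow([1.0] * 30)
```

The same idea scales to thousands of branches because each one only needs its parent's motion and two scalars of state.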
We also wrote tools for the leaves to separate from the tree. This was a completely new rigid-body dynamics system for Maya, which we used to drop tens of thousands of leaves and twigs off the tree, bounce them off the car and then onto the grass. We found that with Maya’s built-in system it was simply impossible – it was too slow to drop that much stuff and have it react automatically – so we wrote our own.
Leonard Teo: Why didn’t you use something that was off the shelf – say, Reactor or Napalm?
Chas Jarrett: Our general philosophy in tools and development is to keep it in Maya so that it’s easy for our Technical Directors to use. A job such as the Whomping Willow was very specific, so we knew that we could write it. The benefit of developing tools in-house is that we don’t need an API. It’s our code and we can change it anytime. We’re not trying to sell it and we don’t need to get support from other companies. We also don’t need to pay for more licenses or upgrades every time we need to expand or improve. In the long run, it makes much more sense for us.
We also wrote a lot of particle emitters to simulate leaves falling from the tree, so that they would realistically sway and spin to the ground – like the seed pods that drop off trees and spin like a helicopter as they descend. You can actually do this in Maya using Maya Cloth, but again that would be painfully complicated, and we were dropping tens of thousands of leaves every time the tree moved. A leaf would float down, land on the car and bounce off. So again, we wrote our own particle fields so that we could model geometry and have it float down correctly.
We also wrote our own particle emitters and collision detection system so that we could emit particles when two objects collided based on the velocity and angle at the exact point where they collide. For example if you drop something on a branch, it will break a certain way. So when the tree is pounding on the car, there were particles being thrown in the air at the exact point of contact and this was all handled by a fairly automated system.
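As a rough illustration of that idea, the sketch below emits debris in proportion to how hard two objects meet, using the impact velocity and the surface normal at the contact point. Every name and rate here is invented for the example; this is not MPC's emitter, just the geometry it describes.

```python
# Illustrative velocity- and angle-driven debris emission at a contact
# point. Glancing hits spawn little, head-on hits spawn a lot, and the
# debris flies along the bounce (reflection) vector.

def emit_on_impact(velocity, normal, particles_per_unit_speed=10.0):
    """velocity -- (x, y, z) of the impactor at the contact point
    normal   -- unit surface normal at the contact point
    Returns (particle_count, debris_direction)."""
    # Speed into the surface: component of velocity against the normal.
    v_dot_n = sum(v * n for v, n in zip(velocity, normal))
    impact_speed = max(0.0, -v_dot_n)          # zero if already separating
    count = int(impact_speed * particles_per_unit_speed)
    # Reflect the velocity about the normal for the debris direction.
    reflected = tuple(v - 2.0 * v_dot_n * n for v, n in zip(velocity, normal))
    return count, reflected

# A branch slamming straight down onto the car roof (roof normal is up):
count, direction = emit_on_impact((0.0, -6.0, 0.0), (0.0, 1.0, 0.0))
```

Because both the count and the direction come from the same dot product, the effect stays automatic: harder, squarer hits simply throw more debris, straight back along the bounce.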
Leonard Teo: You mentioned that many of the Whomping Willow shots were completely CG. Can you tell us more about what was CG and what wasn’t?
Chas Jarrett: Basically, there was no real-life location. They built a 1/3 scale set on stage at Shepperton. This featured the top third of the tree at full size, with forced perspective so the rest of the tree looked to be over 100 feet high. They placed a real car in the branches of the tree, as if the car was stuck there. Around the perimeter of the stage they built a forced-perspective third-scale set of the courtyard and laid real turf on the ground. Whenever you see close-ups of Harry and Ron talking, that was all live action shot on stage. For many of those shots, we added 3D branches and falling leaves.
We built the entire courtyard and the tree in 3D. One of the shots has the camera looking at the trunk of the tree, then it tilts up to see the whole tree crash down towards the camera. The theory was that we would replace just the top of the tree. What ended up happening was that we replaced absolutely everything – the car, ground, sky and tree. This actually worked out quite well. Immediately after this shot, you see the car driving off with the tree crashing down after it. When the Director saw the shot, he thought that he had shot it on location and that we had put the tree in, but it was a completely digital shot! This was a nice compliment.
Some of the shots were set extensions, for example, the shots where you’re in the back of the car looking over the shoulders of the kids through the windscreen. When the car falls out of the tree, everything outside the car, including the windscreen, is CG.
The actual animation of the tree was headed up by Jason McDonald (Lead CG Animator, MPC). We developed a fairly simple rig for controlling the tree – a polygon mesh converted to a sub-division surface at render time. It had a basic skeleton running through it, and the rig was hand animated. To be honest, the tree looks like a guy in a suit: it’s got a trunk and limbs, which makes it very well suited to hand animation. The procedural animation system handled all the secondary animation, such as the branches and leaves.
Leonard Teo: What was the hardest part about the Whomping Willow shots?
Chas Jarrett: The biggest challenge with the Whomping Willow was rendering it, as there was just so much going on in the scene. The kids and the car alone were huge meshes with gigabytes of textures. Then there was the tree, the walls and the grass. It was simply massive.
We used PRMan 10 to render everything. At one point our renders were taking 4-5 hours per frame! Despite having a huge render farm, the scenes were simply too big and we were concerned that we wouldn’t be able to render on time. Then John Haddon, our Lead R&D at MPC, came up with something that was absolute genius: using PRMan 10’s secondary outputs to give us surface-normal data per pixel. While this data doesn’t look like anything to the human eye, you can feed it into a plug-in in Shake (our compositing package), and it tells Shake which direction every single visible point of the object is facing. John then wrote a set of Shake plug-ins called “Norman” which allow you to completely re-light objects in Shake. You can create lights and move them around exactly as you would in 3D, but the results come back in near real-time.
So we were rendering scenes with no lights and performing all our lighting in Shake. What it meant was that we could do drastic re-lighting completely in real-time. It takes all the displacement mapping into account because that’s all written into the surface normal information using secondary outputs. By the end of the job, we were rendering shots once and tweaking it all in post!
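The core trick behind normal-pass re-lighting can be shown with a toy Lambert shader: given the per-pixel normals, a compositor can compute max(N·L, 0) for any light direction without touching the renderer. The Python below is only an illustration of the principle – the real Norman was a Shake plug-in driven by PRMan's secondary outputs, and real versions add specular terms and colour.

```python
# Toy re-lighting from a per-pixel normal pass: simple diffuse shading,
# albedo * max(N.L, 0), computed entirely in "comp" with no re-render.

def relight(normals, albedo, light_dir):
    """normals   -- 2D grid of unit (x, y, z) normals, one per pixel
    albedo    -- 2D grid of grey-scale surface values
    light_dir -- unit vector pointing towards the light
    Returns the re-lit image as a 2D grid."""
    out = []
    for nrow, arow in zip(normals, albedo):
        row = []
        for n, a in zip(nrow, arow):
            ndotl = max(0.0, sum(c * l for c, l in zip(n, light_dir)))
            row.append(a * ndotl)   # lit pixel: albedo scaled by N.L
        out.append(row)
    return out

# One pixel facing the camera, one facing away; light from the camera:
img = relight([[(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]],
              [[0.8, 0.8]],
              (0.0, 0.0, 1.0))
```

Since displacement is baked into the stored normals, this per-pixel shading automatically picks up all the surface detail, which is exactly why the technique let lighting move into post.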
Leonard Teo: Tell us about the car…
Chas Jarrett: The bane of my life! It really was a pain because most of the shots had kids in them, and they were rather heavy in terms of complexity. We had all the usual issues of cloth simulation and hair simulation, which Maya Cloth could handle. We used the Cantilever system that we had coded for hair dynamics, to get the hair to fly around. Finally, the hair was rendered in Jig.
We shot thousands of images of the car and captured a lot of video and film footage. We took it out on a test day where we loaded the car on the back of a truck, drove it around and filmed it, just to look at how the reflections moved over the glass and how the chrome responded. We had a scanning company digitize the entire car, which produced a completely unusable mesh, but it was a good basis for Ben Thompson (Lead Modeler) to construct the car using sub-division surfaces. The entire car, from the interior (even the keys in the ignition) to the exterior, was modeled. Andy Middleton (Lead Texturer) spent two months painting the car inside and out. The car alone has 4 gigs of textures – the wing mirror alone could fill a 2K-resolution cinema screen and hold up fine.
At the beginning of the movie, we wrote a procedural spring system that allowed you to move the car around and it would lag. For example, when you turn the car, it would swing outwards as if it was slightly out of control. This system was all real-time rather than dynamics and we called it “Lagger”. While the system worked fantastically, we dropped it in favour of hand animation as we preferred to let animators make up their own minds about what they wanted to do.
When we started rendering the car, we wrote our own shaders and that included image-based lighting. We were taking photographs on set of the location or photographing chrome balls to get 360 maps of the environment. We wrote shaders for lighting our scenes based on those images.
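Image-based lighting of this kind boils down to sampling an environment image by direction. The sketch below assumes a lat-long map layout – a common convention, though the article doesn't say which layout MPC's shaders used – and looks up the pixel that a reflection vector points at.

```python
# Minimal lat-long environment-map lookup: convert a unit direction to
# (longitude, latitude), map to [0, 1) image coordinates, fetch the pixel.
import math

def sample_latlong(env, direction):
    """Sample a lat-long environment map with a unit direction vector."""
    x, y, z = direction
    u = math.atan2(x, z) / (2.0 * math.pi) + 0.5     # longitude -> [0, 1)
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # latitude  -> [0, 1]
    h = len(env)
    w = len(env[0])
    col = min(w - 1, int(u * w))
    row = min(h - 1, int(v * h))
    return env[row][col]

# A tiny 2x4 map: bright "sky" on the top row, dark "ground" below.
env = [[1.0, 1.0, 1.0, 1.0],
       [0.1, 0.1, 0.1, 0.1]]
up = sample_latlong(env, (0.0, 1.0, 0.0))     # reflection straight up
down = sample_latlong(env, (0.0, -1.0, 0.0))  # reflection straight down
```

In a production shader the direction would be the reflection of the view vector about the surface normal, and the map would come from the on-set chrome-ball photography, so the chrome and glass pick up the real location.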
Leonard Teo: One of the digital car shots was “Post Hollow” where the kids have just escaped from the spiders, tell us about this sequence.
Chas Jarrett: The kids have just escaped from the spiders. The car flies over some trees at night, the camera cranes down, the car plummets into the ground and skids all the way up to the camera so that the door of the car completely fills the frame. While the car is still sliding, the door opens and the kids get out. In this sequence, the whole car is 3D until the door opens. When the door flings open, it transitions from 3D to live action on a single frame, seamlessly.
Tom Wood was the MPC Visual Effects Supervisor for the film. We shot two plates on location – one clean pass with no car and another with the car. The camera is looking up at the sky and it cranes down into a position where the car is directly in front of the lens. We timed it all out with no car and when the camera was in final position, we locked it off and rolled the real car into place so that the door completely filled the frame. We put the kids inside the car, started rolling the camera again, pulled the camera back, had the kids open the car door and climb out.
With these plates, we replaced all the ground (grass) so that when the 3D car hits the ground, it would fling mud around and trash the grass. Now, because the real car and the digital car door textures are different, we took the transition frame of the car door and we projected that onto our digital car. This texture was actually running footage in real-time, and we rotoscoped the CG door so that when the live action car door opens, the CG door opens and matches exactly.
The result was a completely seamless transition between the 3D car and the real car where the kids jump out. We were extremely happy with the results.
Leonard Teo: Can you tell us about the opening shot for the film?
Chas Jarrett: Apart from the Whomping Willow, the opening shot is the only fully CG environment, although we did a lot of CG extensions to other environments. In the opening shot, the camera pulls out of the clouds and you find yourself in the air, looking down on a suburban sprawl at twilight. The camera continues moving in until it is in Harry’s room. In this sequence, the entire suburban sprawl was a huge CG build. The far-distant sky and mountains were a matte painting, but from about five miles from camera, everything is CG – tens of thousands of houses and buildings.
Our approach was to model the scene, lay it out and place three cameras at various points of the move, so the shot had three sections. For each camera we rendered a high-resolution still with very basic textures – brick textures on the houses, for example. These images were handed over to our matte painter, who painted in all the details. When the paintings came back, we camera-mapped each image back onto our scene so it carried all the detail. It may sound simple, but it was five months of work for two people!
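Camera mapping itself reduces to re-using the projection that made the still: project a 3D point through the same camera, then fetch the painted colour at the resulting image coordinate. The pinhole model, focal length and toy grid below are invented for illustration; production projections also handle lens distortion and filtering.

```python
# Pinhole camera projection plus painted-texture lookup: the essence of
# projecting a matte painting back onto geometry from the render camera.

def project(point, focal=1.0):
    """Project a camera-space point (z > 0 is in front) to image (u, v)."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def camera_map(point, painting, width=4, height=4, focal=1.0):
    """Fetch the painted colour the camera sees at a 3D point, assuming
    the painting covers [-1, 1] x [-1, 1] of the image plane."""
    u, v = project(point, focal)
    col = min(width - 1, int((u + 1.0) * 0.5 * width))
    row = min(height - 1, int((v + 1.0) * 0.5 * height))
    return painting[row][col]

# A 4x4 "matte painting" of numbered pixels; a point straight ahead of
# the camera reads back the centre of the painting.
painting = [[r * 4 + c for c in range(4)] for r in range(4)]
colour = camera_map((0.0, 0.0, 5.0), painting)
```

Because the lookup depends only on direction from the camera, any geometry along that ray receives the same painted detail, which is why each camera position needed its own painting for the parts of the move it could see.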
The matte paintings were actually quite flat in terms of lighting, as we lit the scene in 3D. Three matte paintings were needed because of the way the camera moves – for example, the first part of the shot would contain scenery that you don’t see in the second or the third.
To fill in the scene, we took our employees out the back of the building, had them walk around and shot them on DV camera. These are the people walking around in the streets! We also produced the CG cars driving around. Notably, we didn’t add any flocks of birds in our shots (laughs).
In the final portion of the shot, the camera comes down to a row of houses before closing in on Harry’s house. This is a fifth-scale miniature. The entire move was developed from scratch in CG (pre-visualization). For the live action shoot of the miniature on green screen, we had a motion-controlled camera shoot the scene so that the move would match with our CG move. Once we received the plates, we had to match the lighting that was used in the live action shoot. Harry’s room is a full scale set, and we had to subsequently track this into the shot of the fifth-scale miniature.
Leonard Teo: Tell us about the snake animations at the Duelling Club.
Chas Jarrett: We produced twelve shots of the snake at the Duelling Club. In these shots, the snake was completely digital. We did have a rubber snake on set, but that was only ever intended to be a stand-in for lighting reference.
The room was packed with kids and they were rather unruly, especially as they were excited about being on a Harry Potter movie. We used laser pointers to direct their eyes, so that they would be looking at the right place while shooting. To be honest, though, when the snake was reared up so that we’re looking over the snake’s shoulder at the kids, none of them were looking at the same thing. So we had to replace the eyes in post and, in some instances, entire heads! Because of this, we also had to retime a lot of shots – for example, when the snake is flung into the air and all the kids are meant to follow its movement, they’re all looking in different directions!
Image: While the snake animation was fairly straightforward, an issue arose where the unruly kids seemed to perpetually be looking in the wrong direction!
We attached three people to this sequence for a period of six months. Tony Thorne was the lead animator, Matt Hicks (TD) rigged, lit and match-moved the scene. Jessica Norman was the compositor for the sequence and she comped all the snake shots. Tony storyboarded the sequence – how it moved, the composition, etc. This shows the level of creative input that we had in the movie as we were able to be part of the creative process in planning the shots. [CGN|3DF]
From the Editor, Leonard Teo
Words: Leonard Teo