“We had a Homeland Security officer in the helicopter with us ... he was worried that the wind would push us into the no-fly zone, and that would be a problem … because he would have had to shoot us. So we agreed.”

Chris Columbus’ Pixels is a comedy that pits a handful of hapless gamers against a barrage of classic arcade game characters. Creatures like a giant Pac-Man, Donkey Kong and Space Invaders burst from the flickering CRT screens of the 1980s into the streets of modern-day reality.

Bringing these iconic characters to life in an action movie obviously presented some major challenges. 

It was up to the wizards from studios like Sony Pictures Imageworks, Digital Domain, and nine other facilities to meet this challenge head-on.

CGSociety talked to Imageworks' VFX Supervisor Dan Kramer about how they approached this unique VFX conundrum. 

A well-spent youth
When Kramer was a boy he went to the local bowling alley to play video games, armed with a pocketful of quarters ‘borrowed’ from his mom. Golden Age games like Joust, Centipede, and Pole Position pulsated from 8-bit screens, leaving an indelible mark on the young Kramer’s imagination.

Little did he know at the time that he was actually in intense research mode for a movie he would help make decades later.

Fast forward to the present day: while preparing for Pixels, Kramer and his team had the opportunity to return to these iconic titles. Thanks to the internet, he could revisit the old gems in emulators and videos. They also discovered the original sprite sheets, which showed the little pixelly characters in all their possible movements, from jumping to dying.

But the sprite sheets are 2D, so how did they bring these characters into the 3D realm? 

“In the games, there’s very little detail, just a few pixels,” explained Kramer. “The characters are, of course, really low-res.” 

In designing the characters, they turned to the game cabinet art, which “filled in a lot of the missing areas. You get details on, for instance, the Q-Bert character, which you really can’t get from the game itself. We would use [the cabinet art] for inspiration when we needed to add more detail to fill out some of the areas of the model that weren’t visible.”

Bring in the Voxels
So how did they get the 8-bit pixel look?

They used voxels, or ‘volumetric pixels’: the three-dimensional equivalent of pixels, used like building blocks to form a larger 3D object.
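
In code terms, the simplest mental model of a voxel grid is a 3D boolean array: each cell is a small cube in space that is either on or off. A minimal Python sketch (the sizes and names here are invented for illustration; this is not Imageworks’ tooling):

    import numpy as np

    # A 32x32x32 voxel grid: True = voxel present, False = empty space.
    grid = np.zeros((32, 32, 32), dtype=bool)

    # 'Building blocks': turn on a 4x4x4 cube of voxels near the centre.
    grid[14:18, 14:18, 14:18] = True

    # Each occupied cell maps to a small cube in world space.
    voxel_size = 0.5                         # world units per voxel edge
    occupied = np.argwhere(grid)             # (N, 3) integer indices
    centers = (occupied + 0.5) * voxel_size  # world-space cube centres
    print(f"{len(centers)} voxels to instance as cubes")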

Kramer and his team would first build the characters as very simple models, with smooth shading and basic rigs. They would then start the voxelization process in Houdini.

“In visual effects,” explained Kramer, “you usually use voxels for things like fluids and gases to simulate natural phenomena. So there are lots of voxelization tools already in Houdini, and we developed some more.”

This is where it gets really clever.

Imageworks developed a world space technique, where they would have a static field of voxels. This field would remain invisible until the volume of a smooth model intersected with it, which would turn those voxels on and reveal them. 
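
A minimal sketch of that world-space idea, using a sphere as a stand-in for the smooth character model (Imageworks built this inside Houdini; everything below is an illustrative reconstruction, not their code):

    import numpy as np

    VOXEL_SIZE = 0.25

    def voxel_centers(bounds_min, bounds_max):
        """A static, world-space lattice of voxel centres covering a region."""
        axes = [np.arange(lo + VOXEL_SIZE / 2, hi, VOXEL_SIZE)
                for lo, hi in zip(bounds_min, bounds_max)]
        gx, gy, gz = np.meshgrid(*axes, indexing="ij")
        return np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=-1)

    def inside_sphere(points, center, radius):
        """Stand-in for 'is this point inside the smooth model's volume?'"""
        return np.linalg.norm(points - center, axis=1) <= radius

    lattice = voxel_centers((-2, -2, -2), (2, 2, 2))  # the field never moves
    char_pos = np.array([0.3, 0.0, 0.0])              # animated each frame
    lit = inside_sphere(lattice, char_pos, 1.0)       # these voxels turn on
    print(f"{lit.sum()} of {len(lattice)} voxels revealed this frame")

Because the lattice is fixed in world space, the character appears to move through a stationary grid of voxels rather than carrying its own blocks along with it.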

In the end, the effects department decided the voxel size for different areas.

“They could attach and break out and use different sizes,” explained Kramer. “We found that it was preferable to keep the models as low-res as possible,” thereby preserving their old-school charm.

However, by making them low-res “you lose a lot of fidelity in the animation, so it was always a balancing act. Sometimes we would make a model that was very low-res and appealing, but when we got it into animation it just didn’t work. We had to go back and up-res it.”

Other times, they found they had too much resolution, and from a distance the character almost looked like a smooth profile.

“So we had to go back and reduce the resolution. It was a learning process, but over the course of building 27 different characters we got pretty good at making those decisions early on.”

The voxelization process differed with each character. For instance, with the Space Invaders they put a voxel where every pixel was and kept them relatively flat. 

“They were simplistic, and where every voxel went was very art-directed. The sprite sheets back in the day were also very art-directed: every pose, every pixel was placed perfectly.”
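
That one-voxel-per-pixel mapping is easy to picture in code. A hypothetical sketch (the sprite below is a rough approximation, not the actual Space Invaders artwork):

    import numpy as np

    # An 8x11 invader-style sprite: 1 = lit pixel (approximate shape).
    sprite = np.array([
        [0,0,1,0,0,0,0,0,1,0,0],
        [0,0,0,1,0,0,0,1,0,0,0],
        [0,0,1,1,1,1,1,1,1,0,0],
        [0,1,1,0,1,1,1,0,1,1,0],
        [1,1,1,1,1,1,1,1,1,1,1],
        [1,0,1,1,1,1,1,1,1,0,1],
        [1,0,1,0,0,0,0,0,1,0,1],
        [0,0,0,1,1,0,1,1,0,0,0],
    ], dtype=bool)

    DEPTH = 2  # keep the character "relatively flat"
    rows, cols = np.nonzero(sprite)
    # One short voxel column per lit pixel: (x, y, z) integer indices.
    voxels = [(c, sprite.shape[0] - 1 - r, z)
              for r, c in zip(rows, cols) for z in range(DEPTH)]
    print(f"{len(voxels)} voxels from {sprite.sum()} sprite pixels")
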
But for many of the characters, Kramer and his team added the voxels procedurally. 

“This doesn’t give you quite as much control over exactly where every voxel goes, but we were able to massage it enough to make it work. We needed to voxelate characters for a wide variety of poses. We weren’t just restricted to poses found in the sprite sheets; our characters could be seen from any angle and could move in ways those sprites couldn’t.”

“It looks like it hurts to be Q-Bert”
Q-Bert stole the show. 

“He was a much bigger challenge,” said Kramer. “Chris [Columbus] really wanted him to be a lovable character. He’s the only character in the movie that needs to emote across a much greater range. Chris wanted him to be almost like an E.T.-type character; a buddy for the boy.”

The first versions of Q-Bert that they showed Columbus did not make the grade. 

“We did the naïve thing of picking a voxel size,” explained Kramer, “voxelating him and putting some light energy on him. We showed it to Chris, who said ‘it looks like it hurts to be Q-Bert’. This initially threw us, because we thought that hard edge was the way to go when using voxels. So we now had the challenge of making something made out of voxels look softer. This started a long journey of look development.”

Fortunately, they had an exceptional concept artist on staff named Bret St. Clair, who did a lot of the jaw-dropping conceptual work for Edge of Tomorrow.

“He grabbed our Q-Bert passes and developed many more, with different voxel sizes, different light emissions, normal passes, reflections. Bret would then take those into NUKE and combine them in different ways. We came up with literally hundreds of versions and found that using really big voxels on Q-Bert looked cute, but we lost a lot of his emotion. If we went too high-res, he just looked like a round sphere and lost that 8-bit charm.”

After much trial and error, they ended up building a high-res, voxelated version of Q-Bert, and then encased it in a lower-res outer voxel shell whose voxels were double the size.
 
“We also varied the size, with more voxels around the eyes and snout, where we needed more detail. But if you look closely at the close-up shots, you’ll notice that there is an outer shell that is refractive, a little bit transparent, and if you look beyond that you can see into him: there is a volume in there with smaller voxels. And those are firing with their own light energy and cadence. The outer surface has a slightly different cadence and light energy. It’s subtle, but it ended up taking a little bit of the hardness off the edges by adding in that refractive quality. Also, rounded edges would pull in a little bit from the exterior environment. Those sorts of tricks were the things that softened him up.”
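
One way to picture that two-layer build is to voxelize the same shape twice, once finely for the interior and once at double the size for the translucent casing, then give each layer its own flicker. A rough sketch with a sphere standing in for Q-Bert (all names and values invented for illustration):

    import numpy as np

    def voxelize_sphere(radius, size):
        """Centres of edge-`size` voxels whose centres fall inside a sphere."""
        r = np.arange(-radius + size / 2, radius, size)
        gx, gy, gz = np.meshgrid(r, r, r, indexing="ij")
        pts = np.stack([gx.ravel(), gy.ravel(), gz.ravel()], axis=-1)
        return pts[np.linalg.norm(pts, axis=1) <= radius]

    FINE, COARSE = 0.1, 0.2               # outer shell voxels: double the size
    inner = voxelize_sphere(0.95, FINE)   # small voxels seen through the shell
    outer = voxelize_sphere(1.0, COARSE)  # refractive, semi-transparent casing

    def brightness(n_voxels, t, speed, seed):
        """Per-voxel pulse; each layer gets its own cadence and phase."""
        phases = np.random.default_rng(seed).uniform(0, 2 * np.pi, n_voxels)
        return 0.5 + 0.5 * np.sin(t * speed + phases)

    t = 1.25  # frame time in seconds
    inner_glow = brightness(len(inner), t, speed=6.0, seed=1)
    outer_glow = brightness(len(outer), t, speed=2.5, seed=2)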

Lighting them up
Columbus wanted Q-Bert and the other characters to look different from LEGO and to have their own ‘light energy’. “Something that added another layer of detail,” added Kramer. “Made it feel like they were emitting light, not just plastic LEGOs.”

Of course, that works great at night, as in the really cool Pac-Man and Centipede sequences.

“Digital Domain got most of the night sequences,” said Kramer, “and developed a system whereby they could emit light and cast light on the environment – and it looks great!”

Kramer and his team also had some night shots, but most of theirs were in broad daylight.

“We had this extra challenge of figuring out how to convey to the audience that the characters were emitting light while in broad daylight. We found that if we glowed the entire character and put it into daylight, it basically lost its entire fill, because anywhere there would naturally be a shadow was now illuminated. That really flattened out the character quite a bit, making it difficult to show that they were integrated into the environment.”

Everything in the environment had a very strong key-to-fill contrast, with dark shadows and bright sun hits. “Imagine a light bulb out in broad daylight – it’s not that interesting.”

To solve this, Kramer and his team “found that if we illuminated only certain voxels and left other voxels unlit and dormant, they could receive proper key-to-fill from the environment and reflections, and that helped them sit in the environment. We wanted to make sure our characters felt like they were in the real world even though they looked completely unreal. We worked on different amounts of noise, how many voxels would be lit versus unlit, the speed at which they would move, and how bright they would get. It was always a balancing act and was tweaked per shot. Surrounding a light-emitting voxel with lots of dormant ones really sells that it’s lit up and gives it a reference point.”
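
A sketch of that lit-versus-dormant balance: each voxel gets a fixed random phase, and a slow per-voxel wave decides which subset emits on any given frame while the rest stay dormant and catch the environment light. The names, fractions and speeds below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(42)
    n_voxels = 5000
    phase = rng.uniform(0, 2 * np.pi, n_voxels)  # fixed per voxel

    def lit_mask(t, cutoff=0.25, speed=4.0):
        """Which voxels emit light at time t; the rest stay dormant.

        The per-voxel sine wave drifts each voxel above and below the
        cutoff, churning the glowing subset over time while keeping it
        a small, shifting part of the whole.
        """
        wave = 0.5 + 0.5 * np.sin(t * speed + phase)
        return wave < cutoff

    mask = lit_mask(t=0.8)
    print(f"{mask.mean():.0%} emitting; the rest receive key and fill")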

The other issue is that the characters are made entirely out of voxels, which are perfect cubes. To combat this harshness, Kramer and his team “put bevels on the corners to catch a little bit of light, but it was very difficult to catch rims and shape on the characters.”

“If a character rotated,” Kramer continued, “and that flat surface rotated into a light and rotated past it, generally you would just get a quick flash of light. It wouldn’t give you that nice rounded rim to make them feel like they are there. We ended up tweaking the normals a little bit on our characters to combat this problem; it’s something we developed when we made Q-Bert.”

They came up with a system that effectively stole some of the normals off the smooth surface and integrated them into the voxelated surface.

“We called that ‘hybrid normals’,” said Kramer, “because we were using a hybrid of both the geometric normals of the voxels and the smooth normals of the smooth character underneath, which was driving the voxels. We would compare a voxel’s geometric normal direction to the normal of the smooth underlying character that the voxel was associated with. If both of those normals were facing generally in the same direction, within say 10 degrees of each other, we would take that smooth normal and copy it across to the voxel face. If they diverged too much, we would just use the geometric normal as it was. And that gave us a lot of shape. On the big broad surface on the side of Q-Bert, or on the top of his head, for example, you actually get a little bit of rounded shape. You are able to catch some rim and shaping, which allowed us to hero-light him a lot better.”
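
The rule Kramer describes is compact enough to sketch directly. This is a reconstruction from his description, not Imageworks’ code; it assumes both normal sets are unit length:

    import numpy as np

    MAX_ANGLE = np.radians(10.0)  # the divergence tolerance Kramer mentions

    def hybrid_normals(voxel_normals, smooth_normals):
        """Blend hard voxel face normals with the smooth surface beneath.

        voxel_normals:  (N, 3) unit geometric normals of voxel faces
        smooth_normals: (N, 3) unit normals of the smooth character at the
                        point each voxel is bound to
        Where the two agree to within MAX_ANGLE, the smooth normal wins,
        so broad flat areas pick up rounded shaping and catch rim light;
        where they diverge, the hard geometric normal is kept.
        """
        cos_angle = np.einsum("ij,ij->i", voxel_normals, smooth_normals)
        use_smooth = cos_angle >= np.cos(MAX_ANGLE)
        return np.where(use_smooth[:, None], smooth_normals, voxel_normals)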

“It’s analogous to something we do when we light buildings,” continued Kramer. “If you build a CG building and you make all the windows perfectly flat and facing the identical direction, it looks really fake. So we add a little bit of variation, because windows are never perfectly aligned to each other in a real building. We turn the glass a little bit or we might warble it, just to break it up and add some reality. That was the technique we used on the characters.”

Destroying the voxels
Creating voxels is one thing, but how did they destroy them?

There is one important sequence involving the destruction of Washington DC. It was particularly challenging because the area contains the Washington Monument, the National Mall, the White House, the Capitol, and the Lincoln Memorial. So, you can imagine it has some seriously restricted airspace.

In fact, there is a no-fly perimeter around the National Mall, and Imageworks managed to get permission to fly around it.

“We got a map of where that perimeter was,” explained Kramer. “Then, using that in pre-vis, we picked four locations where we felt we could hover the helicopter right on that line and collect tilesets to be able to recreate the background.”

“I actually went in the helicopter,” Kramer said excitedly. “We had a Homeland Security officer in the helicopter with us. He was a pilot too, and would not let us get as close to the perimeter as we wanted to, because he was worried that the wind would push us into the no-fly zone, and that would be a problem … because he would have to shoot us. So we agreed.”

Unfortunately, they could not get the tilesets they needed from the helicopter. Luckily, John Haley, Imageworks’ DFX Supervisor, had been walking up and down the National Mall taking tilesets and lots of pictures of the monument and surrounding buildings. He was also able to go inside and take a tour to the top, where there are small windows. This was the perfect vantage point for where their camera needed to be.

Jeremy Hoey, a matte painter at Imageworks, sorted through all the available data to recreate the environment.

“It was a long, arduous process for Jeremy to go through and find all the right bits and pieces to stitch together to get a full panorama of the environment,” admitted Kramer. “He did an excellent job, and eventually we were able to create a 360-degree view of Washington from the monument.”

Building the monument model was pretty easy because it’s a simple object. However, they did have to build it out of separate tiles, because it was going to be destroyed and they needed to ensure all the detail would come through.

“We developed a system,” explained Kramer, “where, when an area of a model is impacted, we voxelated it and varied the voxel size based on the distance to the impact location. When hit, it would create a very small, higher-res section of voxels, which would radiate outwards from the impact point. The voxels would get a lot bigger on the perimeter of the impact, and just like with our characters, it was a lot more appealing to have bigger voxels. It was more obvious that it was digital. If we kept the voxels too small, it created a lot more ragged detail, but it also started to look a lot like the traditional destruction that you see in lots of films. It really didn’t look iconic enough for the Pixels movie.”
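
That distance-driven sizing reduces to a simple ramp. A sketch with invented numbers (the actual setup lived inside Houdini):

    import numpy as np

    def voxel_size_at(points, impact, fine=0.05, coarse=0.8, radius=3.0):
        """Voxel edge length per point: tiny at the impact, chunky far away.

        Sizes ramp from `fine` at the impact point up to `coarse` at
        `radius` and beyond, so a patch of high-res voxels radiates
        outward into much bigger ones at the perimeter.
        """
        d = np.linalg.norm(points - impact, axis=1)
        t = np.clip(d / radius, 0.0, 1.0)  # 0 at the hit, 1 at the edge
        return fine + t * (coarse - fine)

    # e.g. sample points on the monument's surface, one cannon hit:
    surface = np.random.default_rng(0).uniform(-5.0, 5.0, (1000, 3))
    sizes = voxel_size_at(surface, impact=np.zeros(3))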

The character destruction was done in a similar way.

“When they got hit by the light cannons, we would subdivide that character: each voxel on the character, when hit, would get subdivided into four. And light energy would emit from that point and radiate outwards. As that wave of light energy passed over the character, we would start subdividing other voxels on the character. So a single voxel would turn into four, and each of those would turn into four new voxels. As that happened, we would convert the voxels off the character and make them RBD [Rigid Body Dynamics] objects.”

“We would glue all the voxels together in that RBD system,” continued Kramer, “and given enough force, the glue would break. So, rather than a character falling apart as a bunch of individual voxels, you would get chunks of voxels breaking off, which made it look more interesting. For instance, a chest might break into five big chunks with some ragged edges, rather than the whole thing coming apart like dominoes.”
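
Putting those two quotes together as a sketch: voxels the expanding wave front has reached split into four children (as Kramer describes; a full 3D octree split would give eight), and the children are handed to the rigid-body solver as glued pieces whose bonds snap under enough force. Everything here is an illustrative reconstruction:

    import numpy as np

    def wave_front(centers, impact, t, speed=2.0):
        """Voxels the expanding light-energy wave has reached by time t."""
        return np.linalg.norm(centers - impact, axis=1) <= speed * t

    def subdivide(center, size):
        """One voxel -> four children (a 2x2 split across the face)."""
        q = size / 4.0
        offsets = np.array([[-q, -q, 0], [q, -q, 0], [-q, q, 0], [q, q, 0]])
        return center + offsets, size / 2.0

    # Per-frame (schematic): hit voxels subdivide, detach from the
    # character, and become glued RBD pieces; the glue bonds later break
    # under force so the body sheds chunks instead of single cubes.
    centers = np.random.default_rng(1).uniform(-1.0, 1.0, (200, 3))
    size = 0.1
    hit = wave_front(centers, impact=np.zeros(3), t=0.3)

    rbd_pieces = []
    for c in centers[hit]:
        children, child_size = subdivide(c, size)
        rbd_pieces.extend(children)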

The destruction system can be seen clearly in the Guam sign sequence at the very beginning of the film. The corner gets voxelized as waves of energy pass through, and you can see the voxels subdivide and the RBD simulations take effect as the sign starts to crumble apart.

“We used that system whether we were destroying a car, a character, or the side of a building, for instance.”

Kramer and his team at Sony Pictures Imageworks made all of this integrate seamlessly into the environment. Together they turned a big slab of many ’80s childhoods into a VFX extravaganza. I just wish they had gotten to work on Adam Sandler.

Links
Dan Kramer's IMDb
Sony Pictures Imageworks
Pixels Official Movie Site