Tue 8th Jul 2014, by Rory Fellowes | Peoplestudios
Telling stories. Surely one of the most important of all the things that make life worth living is the art of creating fiction, be it books, films, TV or the latest iteration, video games. What would life be like without the entertaining escapes that fiction provides? Creating stories in any medium that the audience (we’re all part of the audience) can engage with, be moved by, even learn from, as if they were true and personal: that is what we who work in the media industries do for a living.
We tell stories.
As in all forms of fiction, in the creation of moving images we try to make it seem as if that world there on the screen really exists and matters. Sometimes it is the real world we’re looking at (traditional cinema and TV), sometimes a virtual world (fantastical movies or reality games), and sometimes the style is cartoony (arcade games and cartoons, graphic novels and so on). Whatever the medium, in all cases we hope the audience is willing to suspend its disbelief so as to allow those characters there on the screen to have identities and lives to live.
Being part of the process of making this happen is what practical and virtual FX is all about. It is what acting, in which I include animation, is all about. As Matt Damon once said, “My job is to make it real.” That’s our job too.
Games used to ignore the subtler aspects of storytelling (especially the acting!), and they struggled to make the look as good as movies, but those walls are falling now. We are at the beginning of a revolution, a time of Disruptive Technology, a revolution in movie media making to match the last great disruption, when Computer Generated Imagery first came into being.
Visually and in narrative creation, the worlds of film and games are converging. In this article I will address the visual aspect of this convergence; in the next article I will focus on new developments in narrative, where I have come across some unprecedented ideas and techniques emerging in this new world of media delivery and reception.
Apart from one year when I was the Animation Lead (i.e. the only animator) in a (small and now defunct) Games company up in Derry in Northern Ireland, my background is all in film, for cinema and television. I watch movies and TV. I don’t play video games. I gave the team up in Derry a lot of good laughs when, for instance, I struggled to get past the first two guards in Crysis. Eventually I got up a few stages, but it took a long, long time...
I bought my first computer when I was in my early thirties, so they are still a kind of miracle to me. But I always understood the impact they would have on our culture, our society, our world.
When I was trying to get into the CG industry back in the early 1990s, it was clear to me that games would be the main driver of the development of computer graphics technology. This was partly because the games industry’s audience was young, and (producers take note) it has long been my opinion that if you want to market-research the next decade or so, you should talk to kids aged between five and fourteen, because it will be that generation’s consumer demands, their expectations when they join the great consuming masses, that drive the direction of any aspect of society’s development. And as stated in the title of this article, if it can be imagined, it can be made. Those kids are imagining their little heads off right now. In the modern, technologically advanced, secular world, everything is up for grabs.
I also thought games would take the lead in developing and exploring the technology of CGI because it was a new industry, unconstrained by the old men and the old ways, the old traditions, that were likely to hinder the film industry’s efforts to grasp the possibilities of CGI and make use of them. I have discovered, in the conversations I have been having on behalf of CGSociety, that it was an interest in the technology that first excited and motivated the people who work in the CGI industries. They just love to mess with computers!
So what happens next? What might we see in the not too distant future?
I turned first to Cevat Yerli, the CEO of Crytek, the long-standing games and game-engine developer and, most recently, maker of the excellent Ryse: Son of Rome. I had attended a talk that Cevat presented at FMX 2014, “Creating Emotional Cinematic Experiences In A Real-Time Environment”, and was keen to follow up on what he had talked about then.
Cevat founded Crytek about 15 years ago. “I started the company as a hobby project, and then, when it looked like it could be something of a business or I could actually earn money from my hobby, then I asked [my brothers] to join. Faruk joined first and then Avni joined about a year later. And then, in essence, we started Crytek in the form of how it is today.”
Faruk Yerli, Cevat Yerli, Avni Yerli
I have to say this arrangement impressed me on two counts. First, the Yerlis as brothers in arms. In my own family I doubt we could run a bath together without arguing over it, but clearly this Turkish-German family works well together.
Second, the fact that Cevat, as he told me, started out just having fun, working for nothing more than his own amusement. It reminded me of Peter Mitev and Vlado Koylazov starting the Chaos Group while they were still in college. This is a common trait in games companies, and back in the 1980s and 90s it was the same for CG VFX companies. Friends got together and programmed stuff on home computers and it all grew from there. It was much the same for the film industry back in the first couple of decades of the last century, before the money got big and businessmen and their corporate methods took over. The difference now is that in most if not all the games companies I have come across over the last twenty years it is the creators who still run the business. The result of changing times and more business savvy heads on CGI artists, I guess. Those brains being trained in the wonderful, mystical (to me, at least) world of mathematics probably helps...
“I was always fascinated by the technology,” Cevat said. “So if I say my primary goal was to make games, I always wanted to make [games that were] technologically more advanced than what the market was giving. So back then I played all the games, I said “Eh, there must be a better way to do this.” And then I wondered what kind of different experiences can we achieve with a different approach to the technology. That was always the sense of how to look at it. The heart of it.”
I asked what he meant by a different approach.
“Well, for example, when we started making Far Cry there was a certain goal of what we wanted to achieve. We wanted more open, wider environments, and I wanted to have daylight and sunlight, that I had just seen at this point [during] a vacation. I was in the Maldives and I said, “Hey, it would be great to have this kind of environment for a game”, in an ironic way because it was very peaceful and I said this could be interesting as a combat area.”
When we spoke recently, Louise Ridgeway of Rare Ltd reminded me of that company’s breakthrough success, Goldeneye 007, released in 1997 (I’ll be reporting on that conversation in the next article).
Now look at Far Cry, released in 2004.
“At that time all the games were close quarters, in corridors and dark environments, because in dark environments you could hide detail. You don’t have to make everything look nice because it’s dark. In brightness you can’t hide, you have to be beautiful in all aspects. So technologically, in 2000, 2001, making a game that was bright and open was a very demanding approach and we had to [find] an efficient way of doing it. Now to be a little bit technical about it, all other engines at that time were using techniques such as BSP Rendering or Portal Rendering. We introduced the idea of a quadtree renderer which allowed us to have vast landscapes, and our engine was based on a height map renderer, so height maps for terrain generation and a quadtree for structuring the height map so that we can get automatic LODs (Levels of Detail). So that approach alone allowed us to make terrains that are kilometres, miles wide in each dimension, as opposed to BSP Rendering or Portals, which were designed only for rooms with doors and couldn’t give you anything outdoors.”
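The quadtree-over-height-map idea Cevat describes can be sketched in a few lines. This is my own toy illustration, not CryEngine code: a tile subdivides into four children only while the camera is closer than the tile’s own size, so detail (and triangle count) concentrates around the viewer while distant terrain stays as a few coarse tiles, giving level of detail automatically.

```python
# Toy sketch of quadtree-based terrain LOD selection (not CryEngine code).
# Each node is a square tile of the height map; near tiles subdivide,
# far tiles are drawn as single coarse chunks.

class QuadNode:
    def __init__(self, x, y, size):
        self.x, self.y, self.size = x, y, size  # tile origin and edge length

    def centre(self):
        half = self.size / 2
        return (self.x + half, self.y + half)

def select_lod_tiles(node, camera, min_size=64):
    """Return the list of tiles to draw this frame.

    A node splits into four children whenever the camera is closer than
    the node's own size, so detail concentrates around the viewer."""
    cx, cy = node.centre()
    dist = ((cx - camera[0]) ** 2 + (cy - camera[1]) ** 2) ** 0.5
    if node.size <= min_size or dist > node.size:
        return [node]          # fine enough, or far away: draw as one tile
    half = node.size // 2
    tiles = []
    for ox, oy in ((0, 0), (half, 0), (0, half), (half, half)):
        child = QuadNode(node.x + ox, node.y + oy, half)
        tiles.extend(select_lod_tiles(child, camera, min_size))
    return tiles

# A 4096-unit-wide terrain with the camera near one corner: tiles next to
# the camera come out small (64 units), the far corner stays one big tile.
root = QuadNode(0, 0, 4096)
tiles = select_lod_tiles(root, camera=(100, 100))
```

The same recursive test is what makes the scheme scale: the number of tiles drawn grows with the logarithm of the terrain size, not its area, which is why open worlds kilometres across become feasible.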
I recalled that when I was in that games company playing Crysis the veteran gamers surrounding me all pointed out the light and the detail in the foreground and the extent of the environments. The shoot ‘em up aspect was almost a secondary issue. There were plenty of FPS games out there, but as Cevat said, all set in dark arenas, Goldeneye 007, Doom and suchlike. The excitement was the field of fire you could command (and so could the AIs, which made staying alive far more difficult. Well, that’s my excuse).
Cevat explained, “Largely this comes from the way we approached the technology from the get go, as in like, we just said, “OK, BSP and Portals are going to die out”. BSP Rendering and Portals were introduced through Wolfenstein. Portals were introduced mainly through the Unreal Engine. And still to this day they are using this in simplified form because back then BSP and Portal rendering were to optimise software rendering, as in pure software rendering without any GPU assistance.
“So what we have done is, we have used software rendering but with some kind of CG accelerators, and utilised a bit more mobile approach. Today the foundation of CryEngine is pretty much still the same foundation, a Height Map renderer, with quadtree structures, but they are heavily optimised for more detailed data structures. Now we also have some kind of voxel structures added to it to allow 3D terrains, not just 2D Height Maps, so like, terrain that has a cave and other things and improve the data structure so that [you can have] millions of [plants, leaves, grass and trees and so on]. To support those we have refined data structures from Far Cry to Crysis, right up to today. Crysis was the first game where we introduced a data structure that allowed us to have dynamic plant life, truly dynamic plant life, as in destructible trees and physical vegetation. All this was pretty much not possible back then with BSP and Portal, and that’s why at that time we were one of the leaders in that area.”
Which brings us to Ryse: Son of Rome, Crytek’s latest release and a giant leap forward in the look and feel of the gaming environment. If film and games are converging, in terms of visual quality, graphical quality, this is nowhere better demonstrated than in this gorgeously flamboyant story of a man of destiny, a Roman General returning to his beloved Rome with justice and revenge in mind. The interesting thing (for a movie maker like me) is the amount of time and attention that has been given to making the characters’ performances authentic and engaging.
Needless to say, it is gory and the gameplay is fast and furious, though there is also dramatic timing and passion in the performances. “In Ryse we did go over the top; actually not as over the top as it could have been! When I compare now the latest 300 movie [Rise Of An Empire], then we are quite harmless.” He laughs. “But anyway, the context of the setting, the context of [ancient] Rome, and the reality that you are fighting with swords and other sharp killing devices, in a sense, it wouldn’t have worked in the style and the proximity of the experience, it would have not worked to be more tame about it. Actually, it was counterproductive if we did. We tried that. And likewise, it felt too much over the top if we did more than that. So it felt just like it is the right balance to be realistic, and be about Rome, and the old Games of Rome and the true life of Roman soldiers or gladiators. That was the primary focus, to really reflect as much as possible, authentically, or believably, plausibly, what it is like to be in the shoes of a guy like our hero Marius Titus, a General in the Roman Army.
“My take on violence is that it should never be the primary element. Violence should be aesthetic, that supports the narrative or the mission or the objective and if it is not needed, it’s not needed. It’s a tool, it’s nothing else. It’s not a goal. It’s a part of the solution, a means to an end, let’s say. If you look at Crysis for example, we have what I would call an hygienic approach, a very clean approach and most of our games are like that.”
“The primary goal here [at Crytek] is to have the gaming world learn from the movie world. When I gave a talk for the first time to moviemakers, a couple of years ago, the essence of it was that moviemakers have peeked at the games industry and have looked at how to create worlds and actually, how to create a design IP in a sense, in a different way than they used to do. They used always to create a screenplay and then you create a film out of it, but when you look at productions such as John Carter or Avatar or Alice In Wonderland, there is much more focus, actually key focus on world creation. And for us, that’s always the case, in the games industry. But what movies have always done better is to tell a drama, narrative, characterisation, and that’s what we focused on, tried to understand, how we can do that with CG assets or CG graphics in a better way.
“And then I was exposed for the first time, in 2010, to [James Cameron and Larry Kasanoff’s production company] Lightstorm’s visual production of Avatar. I was in a, I will say, lucky position to have been [able to learn] how they did the entire production, from Virtual Production to what they called their “templates”. And then how they went from templates to final rendering, and how the entire project was [achieved with] motion [and] performance capture. We were actually doing something very similar already but not to the same degree.
“So then we expanded our pipelines, which eventually became the pipelines for Ryse, to introduce virtual production for the first time, and for the first time to introduce performance capture into the gameplay, not just the cinematics. The entire game was performance captured and developed through virtual production.”
Virtual Production is the cornerstone of the way Disruptive Technology is transforming the film industry. The key elements were first developed by Lightstorm and WETA in the making of Avatar. In other versions of Virtual Production that I have seen so far, the previsualisation is viewed on iPads or similar devices, feeding off inputs from the various sources (as in Kevin Margo’s making-of short about his film Construct), but when I spoke to Jon Landau at FMX 2014 in Stuttgart, Germany, he told me he didn’t refer to previs any more. “We call it visvis”, he said, because their version of the concept is an in-camera view, where the director can see his virtual characters in the context of the virtual world to which they belong, directly in the eyepiece of the camera.
Cevat credits the production team on Avatar with inventing Virtual Production. “They put it in a shape and form that people started using across the world. From an outside perspective, when you watch Avatar for the first time you don’t realise what happened there exactly. Once you look behind the scenes you realise that no-one had done anything like that before. For us, for cut scenes in games, [in the past] we assembled them in a very rudimentary form. We captured one actor at a time, all the action of the body. Then you had the voice actor, you animated the face, and you were lucky to get a believable scene out of it.
“With Ryse we actually made it like a theatre play. We put the actors into a volume [motion capture studio] and then we just said, “OK, Go,” in one flow. We had a virtual camera guy and we had the actors running in, and when the actors had performed their roles we either reshot the v-cam if needed or we kept the v-cam that we had shot, because the data was in 3D space and the performance was captured, everything was captured in one take, which vastly improved the outcome of our storytelling. It was pretty much the same process that Avatar had used, three years earlier, in their production.”
Meanwhile, games are moving into new platforms, beyond the PC and console where they have grown up over the last twenty years or more. I spoke to some games industry professionals for these two articles, two of whom I met at FMX 2014 in Stuttgart last May, and one whom I have known for a few years as a friend of friends of mine, meaning we have only met online, but heck, this is the 21st Century, we can make friends online, can’t we?
The first I talked to was Iman Mostofavi. Along with Volker Schoenfeldt and Arash Kashmirian, Iman is one of the founders of Limbic Software Inc., a fairly young company based in Palo Alto in California, but with a worldwide network of artists working for them.
Volker Schoenfeldt, Iman Mostofavi, Arash Kashmirian
Limbic focuses primarily on mobile games.
“Our games are in a variety of genres,” Iman told me, “from strategy gaming to action gaming, and even a few children’s games. However, on the mobile platform, the specification of the hardware is much more limited than console and PC, so we definitely have fewer resources to work with, and as a result the games are in general more simple and limited in scope.
“And also the demographic of the types of players who play games on mobile. [They] are more interested in a shorter gameplay experience than when they sit down and turn on their Playstation and wait for it to do all the updates and so on. Definitely the session lengths are much shorter on a mobile device. As a result we have to design our games for these smaller bite-size sessions and gameplay experiences.”
It occurred to me the Cloud could overcome some if not all of those limitations. Iman agreed, with reservations.
“Sure. There are companies such as OnLive [a Cloud streaming games service]. They send a video stream to the iPad [or whatever platform you have] and the iPad becomes merely a controller and monitor that allows the user to play the game with really amazing immersive graphics on a simple mobile device. Yes, it is possible. I’m just not sure if the market is there yet, so it would be hard for a company to get the return on their investment making such a game, only targeting a mobile audience. There isn’t any precedent for that, as far as I know.”
I have friends who are relying on their iPads more and more, using their PC almost only as a server. The move to mobile devices as the platform of choice seems to me the way the audience is going. I suspect they will start to want to play fully immersive games on their iPads just because of their habit of having their iPads always open in front of them.
Iman agreed that was a possibility. “Absolutely. Habit is important. I think if you measure the amount of time people spend with any given device it’s clear time spent with tablets and phones is dominating, it’s taking away time spent with traditional sources such as television, and even console gaming. I’ve heard of people who actually play on their iPads while they’re waiting for their consoles to load, or download something, or update something. So you’ll see both the mixed use, in that scenario as well as people who play long, multi-hour sessions on an iPad. It really depends on the game and the gamer. There’s the whole “casual” audience which are interested in lots of simple games. There is an audience for story-driven content. Telltale Games’ The Walking Dead was one of the first really successful games I’ve seen do that on mobile, where there is an interactive component that had not quite been pulled off on mobile platforms before. So, you’ll probably see more of that since, as far as I can tell, it was a success. I hope they will follow up on that.”
I asked Iman to speculate about where the technology might go next.
“As far as the mobile development world is concerned, I would say games that take advantage of Augmented Reality and Virtual Reality. For example, Google has this project called “Tango”.
“[Project Tango] is all about new types of sensors, new types of cameras that are being placed in upcoming mobile phones, and eventually they will be in iPads as well. Those new types of cameras allow for new types of gameplay as they are cameras which process depth information, kind of like Microsoft Xbox Kinect, how the Kinect works, only brought to a mobile device. If you have that type of device on your phone or on your iPad it opens a whole new realm of possibilities for the types of games you could play outdoors.”
“Right now it’s anyone’s guess what type of creative games could be created with the ability to have a precise depth map of the world around you, which has not been possible before with just the regular colour cameras that are on cellphones today. That’s something that we’re excited to play with. It’s probably still going to be a few years out before those types of phones with those type of sensors are mainstream, but that is where definitely new kinds of gameplay experiences will be developed.
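To make Iman’s point concrete: what a depth camera adds over the ordinary colour camera on a phone is that every pixel comes with a distance, and the standard pinhole camera model turns each such pixel into a 3D point in space. The sketch below is my own illustration, with made-up calibration values (focal lengths, principal point), not Tango’s real parameters:

```python
# Back-projecting a depth pixel into a 3D camera-space point using the
# standard pinhole model. fx/fy, cx/cy are illustrative values only.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Convert pixel (u, v) plus its depth reading into a 3D point.

    fx, fy: focal lengths in pixels; (cx, cy): the principal point,
    roughly the image centre."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# A pixel at the image centre maps straight down the optical axis:
point = backproject(320, 240, depth=2.0, fx=500.0, fy=500.0, cx=320, cy=240)
# i.e. a surface two metres directly ahead of the camera.
```

Run over a whole depth image, this produces the “precise depth map of the world around you” Iman mentions: a point cloud a game could collide virtual objects against.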
“On consoles and PCs you will see animations improving, immersive graphics improving, it will become hard to even tell it is a video game anymore, it will seem more and more as if it is live action, real life acting. You will see gaming and movies merging more and more into one. Already it is harder to tell them apart, if you play some of the more recent games made by Rockstar, like Grand Theft Auto. Those games are practically movies. They have the budgets of blockbuster movies, they have as many people working on them as those kinds of movies, and often they have the same quality of actors and voice acting, and stories being written, and they’re essentially merging into one type of thing. Just different flavours of the same kind of [media experience]. One is more interactive, one is meant to be consumed while sitting in your chair eating popcorn, the other you’re holding a controller, you can direct the action a little bit.”
My brother David was involved in computer development back in the 1980s, and I remember him then talking about ‘outline recognition’, getting a computer to see a moving figure against a static background, and what a problem that was for a computer. Since then OrganicMotion, and Kinect and other markerless systems have come along and progress in the last couple of years has been rapid. But all of the current systems rely on the static world remaining static in the camera view.
I asked Iman how a mobile device could distinguish the movements of living creatures from the general movement of all the elements in the picture when the camera itself is moving, as it would be with a handheld phone or iPad.
“It is a very challenging problem. We take it for granted that our human eye is able to work in varied environments and track different objects and just do very simple things whereas it’s actually a very challenging task for a computer. Kinect has a much more simplified problem in that it is working [from the console’s static point of view] in a static environment, in someone’s living room, in which the only changing variable is the people moving around in front of the Kinect.
“On a mobile device if you’re able to completely change the environments that the device is in you will definitely find cases where the technology will break, where it will be unreliable. That’s why I said it is a pretty early stage thing, but if you constrain the types of things you’re trying to do enough then it is something that can be used today.
“Also, the way they will make it reliable is they will have multiple types of sensors, more than just would be on a Kinect. I believe they also have gyro sensors, and compasses and all kinds of other sensors to help orient the device, along with Google maps, which it uses so it knows where you are, and maybe it will know some landmarks. They try to integrate as much of that information [into the programme].”
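Iman’s point about combining many sensors can be illustrated with the simplest such scheme, a complementary filter. This is my own example, not Tango’s actual pipeline: the gyro is smooth but drifts over time, while the compass is absolute but noisy, so each update trusts the integrated gyro in the short term and lets the compass slowly pull the estimate back.

```python
# A minimal complementary filter fusing a gyro and a compass into one
# heading estimate (illustrative only; real devices fuse more sensors).

def fuse_heading(prev_estimate, gyro_rate, compass_heading, dt, alpha=0.98):
    """One filter step for a heading in degrees.

    alpha near 1 means: follow the integrated gyro frame to frame, but
    blend in a little compass so gyro drift cannot accumulate forever."""
    gyro_estimate = prev_estimate + gyro_rate * dt   # integrate angular rate
    return alpha * gyro_estimate + (1 - alpha) * compass_heading

# A drifted estimate (100 degrees) gets pulled back toward the compass
# reading (90 degrees) over a couple of seconds of 100 Hz updates:
estimate = 100.0
for _ in range(200):
    estimate = fuse_heading(estimate, gyro_rate=0.0,
                            compass_heading=90.0, dt=0.01)
```

The same trade-off, fast-but-drifting sensors corrected by slow-but-absolute ones, is what lets a handheld device keep a stable idea of which way it is pointing.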
We talked about the kinds of games that could be created using such a platform, running down the street pursuing a virtual enemy you can see but no-one else can (they’ll think you’re mad!); or in your home or office.
“Basically, you can create an interactive game taking place within the world you’re in. You could bounce a virtual ball against a real wall, or if it could detect a window or a door you could have zombies or aliens invading your space, and you could have a virtual defensive weapon and you could aim and shoot, and that’s another game idea right there. We’ll see a lot of these obvious ideas, [like] the ones we just came up with right off the top of our heads in the last couple of minutes!”
Up until now I have been thinking about the convergence of film and games in terms of photorealistic visuals, but that is no more the whole story than it would be to say that film is always based on live action. Feature-length animation, in both 2D and 3D, is an obvious example of film going in a different direction while still setting out to tell stories with emotional meaning. The same goes for games.
Telltale Games has a reputation for narrative games based on TV and film IPs. I spoke to Dennis Lenart about their productions. Most of our interview will appear in the next article, because we talked largely about narrative, but I am reminded here that Telltale does not try to create real-world simulations. They deliberately set out to use graphic styles, relying on animators and their own proprietary toolset to create the performances in their games, whereas, it seems to me, the general inclination in the movie and games industries is towards motion and performance capture. The Walking Dead is probably Telltale’s most popular game right now, but I suspect The Wolf Among Us is going to be right up there with it, if it isn’t already.
The point here is that although this is a highly graphical style of image, the movements and, more importantly, the performances, the story, the characters and their emotional lives are all intended to engage the audience as if they are real and their stories meaningful.
Louise Ridgeway told me that she cried when she reached the end of the first series of The Walking Dead. I’ve cried at a lot of films; we expect to do that, don’t we? But for this to be a possibility for games, even games with cartoon characters, is to my mind a breakthrough with ramifications for the future of storytelling in visual media.
But that is something I will discuss in my next article!