CGSociety :: Technology Focus
11 April 2013, by Paul Hellard
CGSociety spoke with Thiago Costa to find out what he is offering in the New Lagoa. Costa is renowned for Lagoa Multiphysics, the particle-based physics framework he implemented for Autodesk Softimage in 2010. (A link to the original CGSociety article is at the foot of the page.)
The New Lagoa is a render engine in the cloud. You can run a global company of three, 300 or 3,000 entirely in this cloud: large-scale rendering without buying hardware or software, and collaboration on a global scale. You can share your assets, scenes and materials without compromising your IP. Lagoa gives you the ability to build your studio virtually; it no longer matters where your team members or your clients are sitting, because you can invite them into the platform seamlessly to work together on designs and approvals. Lagoa does not depend on graphics cards, operating systems, compilers, shader languages or toolkits, so there is no issue opening the same scene at the same time on a PC and a Mac.
The Lagoa Cloud Rendering service is ready to use – and is being used. Free accounts include 1GB of storage and five hours of rendering per month. Paid users get a more scalable service, plus private projects, more render time (up to unlimited per month), a background rendering mode, and more.
Lagoa will license to enterprise, educational and individual users, on a subscription basis priced by render time, storage and number of users. Accounts include privacy settings that let people share exactly what they want to share.
After launching Lagoa MultiPhysics for Softimage a few years ago, Thiago Costa learned a lot about ways to deliver software to users. Recently, he released the New Lagoa to the public. “I realized that to be able to cope with the speed of development, we needed a more modern architecture to develop software and iterate on it. We needed to introduce new features and make things happen in real-time for the user, while doing our best to improve the quality of production tools,” he said.
“The new Lagoa came from us re-thinking from the ground up what a production tool should be for the generation we are living in. We needed more efficient ways to work, without being bound to a single desktop or limited by the compute power under our desks. I wanted something that would enable multi-user real-time interaction while still delivering physical accuracy. Clearly, this would not have been possible with existing systems. We needed the ability to work with clients and collaborate with colleagues from all over the world in real-time, without being tied to a certain place, timezone, hardware or operating system. So I hoped that, at some point, there would be a distributed system to make this happen. Such a thing didn’t exist, so I had to build it myself. Now, after several man-years of execution with a great team of engineers, the new Lagoa was born.”
Co-founder Arno Zinke is the main person working on the engine. He also leads the Bonn (Germany) office, with a team of researchers in the fields of rendering and geometry. Arno is probably best known for his hair rendering work, along with his research in appearance modeling and rendering. He has designed integrated systems for measuring and rendering complex materials, and has consulted with companies like Disney and Weta. Dov Amihod, Lagoa co-founder and the head behind the cloud architecture, says, "Thanks to the steadily improving technology of the cloud, these improvements will be available immediately to everyone using Lagoa."
While accurate rendering is important, materials are really the key. Rendering research is quite mature and, in principle, given enough time and compute power, even relatively complex light transport problems can be solved. However, when it comes to structures like cloth and hair, or when simulated appearance needs to match reality, even the most advanced rendering algorithms alone will not help.
Many engines attempt to represent materials with a single ‘uber shader’. While this approach has its merits, the use of proper appearance models and measured data is essential for simulating reality and predicting appearance. “In our case we focused on representing materials for what they are,” explains Thiago. “We use measured and accurately simulated materials whenever possible (e.g. based on Bidirectional Texture Functions - BTF - for surfaces, or Bidirectional Curve Scattering Distribution Functions - BCSDF - for fibers) and try to make sure things look correct from an optical standpoint. As a side effect, this approach also helps people spend less time fiddling with parameters to achieve physically plausible results. The system supports ‘brute-force’ volumetric subsurface scattering materials, and it has a specialized skin shader, extremely accurate hair and fiber materials based on real-world measurements, measured car paint and cloth samples, as well as several materials for coated ceramics and metals. The LAGOA system plans to provide a complete material editing system and will offer a service for measuring custom materials.”
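To give a feel for the "measured data instead of tuned parameters" idea, here is a minimal, hypothetical sketch of a tabulated reflectance lookup. It is not Lagoa's implementation: real BTF and BCSDF data is far higher-dimensional (varying over surface position, fiber azimuth and wavelength), and the class, table layout and bin count below are all invented for illustration.

```python
import math

N_BINS = 16  # angular resolution of the hypothetical measurement

def bin_index(theta):
    """Map an angle in [0, pi/2] to a table bin."""
    i = int(theta / (math.pi / 2) * N_BINS)
    return min(i, N_BINS - 1)

class MeasuredBRDF:
    """Toy measured material: reflectance tabulated over angle pairs."""

    def __init__(self, table):
        # table[i][j] = measured reflectance for (theta_in bin i, theta_out bin j)
        self.table = table

    def evaluate(self, theta_in, theta_out):
        # Nearest-bin lookup into the measured data; a production system
        # would interpolate and compress the (much larger) dataset.
        return self.table[bin_index(theta_in)][bin_index(theta_out)]

# A synthetic "measurement" of a diffuse surface: albedo / pi in every bin.
albedo = 0.8
flat_table = [[albedo / math.pi] * N_BINS for _ in range(N_BINS)]
brdf = MeasuredBRDF(flat_table)
print(brdf.evaluate(0.3, 1.0))
```

The point of the sketch is that the artist never touches shading parameters: appearance comes straight from data, which is what keeps results physically plausible by construction.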
The renderer is progressive, and supports unbiased and spectral rendering. There are patent-pending techniques to make sure it can deliver fast, high-quality results.
The product is now live. During beta, several types of customers were given access, to understand which features were most relevant to them and what kinds of uses the system could handle. The use case that resonated the most is the visualization of models and the collaborative workflow, where multiple users can look at exactly the same thing and make changes to it while talking over chat or Skype.
The renderer can take in quite large scenes; render instances have up to 80GB of memory, so it will not run out of memory easily. It is also possible to render across multiple instances, and a render can run on as many cores as are needed. Free accounts might run on 16 cores, but there is no fixed number of cores you can count on at a given time. "What we do guarantee is a maximum amount of time between updates of your render, and it's measured in milliseconds," Dov Amihod adds. "This isn't traditional render farming; it's progressive unbiased rendering. Background rendering mode will be available by the end of the month, and it allows you to spin up a render and pick it up when it's ready. You can look at it at any time."
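One nice property of unbiased progressive estimates is that partial results from independent render instances can be merged by a sample-count-weighted average. The helper and the per-instance numbers below are made up for illustration; the article does not describe how Lagoa actually combines instances.

```python
def merge(partials):
    """Merge per-instance results.

    partials: list of (mean_estimate, sample_count) tuples, one per
    render instance. Because each mean is unbiased, the weighted
    average is also unbiased, with all samples contributing equally.
    """
    total_samples = sum(n for _, n in partials)
    return sum(mean * n for mean, n in partials) / total_samples

# Two hypothetical instances that have rendered 4,000 and 6,000 samples.
partials = [(0.52, 4000), (0.49, 6000)]
print(merge(partials))  # 0.502
```

This is why adding instances simply makes the same image converge faster, rather than requiring frames or tiles to be assigned to specific machines as in traditional render farming.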