Wed 16th May 2012 | News
NVIDIA has unveiled technologies that accelerate cloud computing using the GPU.
Five years in the making, NVIDIA’s cloud GPU technologies are based on the company’s new Kepler GPU architecture, designed for use in large-scale data centres. Its virtualisation capabilities allow GPUs to be shared simultaneously by multiple users. The announcements were among a range of revelations made at the GPU Technology Conference, held this past week in San Jose.
The Kepler GPU’s architecture and ultra-fast streaming display capability eliminate lag, making a remote data centre feel like it’s just next door. And its extreme energy efficiency and processing density lower data centre costs.
“Kepler cloud GPU technologies shift cloud computing into a new gear,” said Jen-Hsun Huang, NVIDIA president and chief executive officer. “The GPU has become indispensable. It is central to the experience of gamers. It is vital to digital artists realising their imagination. It is essential for touch devices to deliver silky smooth and beautiful graphics. And now, the cloud GPU will deliver amazing experiences to those who work remotely and gamers looking to play untethered from a PC or console.”
The Tesla K10 and K20 GPUs were introduced at the GPU Technology Conference as part of a series of announcements from NVIDIA.
NVIDIA developed a set of innovative architectural technologies that make the Kepler GPUs high-performing and highly energy-efficient, as well as applicable to a wider set of developers and applications.
Among the major innovations are:
SMX Streaming Multiprocessor – The basic building block of every GPU, the SMX streaming multiprocessor was redesigned from the ground up for high performance and energy efficiency. It delivers up to three times more performance per watt than the Fermi streaming multiprocessor, making it possible to build a supercomputer that delivers one petaflop of computing performance in just 10 server racks. SMX’s energy efficiency was achieved by quadrupling the number of CUDA® architecture cores while reducing the clock speed of each core, power-gating parts of the GPU when idle, and maximising the GPU area devoted to parallel-processing cores rather than control logic.
Dynamic Parallelism – This capability enables GPU threads to spawn new threads on their own, allowing the GPU to adapt to the data as it runs. It greatly simplifies parallel programming, enabling GPU acceleration of a broader set of popular algorithms, such as adaptive mesh refinement, fast multipole methods and multigrid methods.
Hyper-Q – This enables multiple CPU cores to simultaneously use the CUDA architecture cores on a single Kepler GPU. This dramatically increases GPU utilisation, slashing CPU idle times and advancing programmability. Hyper-Q is ideal for cluster applications that use MPI.
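To illustrate the Dynamic Parallelism capability described above, the following is a minimal CUDA sketch, assuming a compute capability 3.5 (Kepler GK110) GPU and CUDA 5.0 or later; the kernel names are illustrative, not NVIDIA’s:

```cuda
// Minimal dynamic-parallelism sketch: a kernel launching another kernel
// directly from the GPU, with no CPU round trip in between.
#include <cstdio>

__global__ void childKernel(int depth) {
    printf("child grid at depth %d, thread %d\n", depth, threadIdx.x);
}

__global__ void parentKernel() {
    // Each parent thread may launch further device-side work, sized to
    // the data it encounters at run time.
    if (threadIdx.x == 0) {
        childKernel<<<1, 4>>>(1);
        cudaDeviceSynchronize();  // wait for the child grid on the device
    }
}

int main() {
    parentKernel<<<1, 8>>>();
    cudaDeviceSynchronize();
    return 0;
}
```

Such code would be compiled with relocatable device code enabled, e.g. `nvcc -arch=sm_35 -rdc=true -lcudadevrt`, since device-side launches require linking against the device runtime.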
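The benefit Hyper-Q provides can be sketched with ordinary CUDA streams. In this illustrative example (kernel and workload are hypothetical), each stream maps to its own hardware work queue on Kepler, so independent launches need not serialise behind one another as they could on Fermi’s single queue:

```cuda
// Sketch: several independent streams feeding one GPU. With Hyper-Q,
// each stream gets its own hardware work queue, so these small kernels
// can be scheduled concurrently rather than in a single serial queue.
#include <cstdio>

__global__ void busyKernel(int id) {
    // Placeholder workload; a real application (e.g. one MPI rank's
    // share of a simulation) would do useful work here.
    for (volatile int i = 0; i < 1 << 16; ++i) {}
}

int main() {
    const int nStreams = 8;
    cudaStream_t streams[nStreams];
    for (int i = 0; i < nStreams; ++i)
        cudaStreamCreate(&streams[i]);

    // Each launch is placed in its own stream; the GPU accepts and
    // schedules them independently.
    for (int i = 0; i < nStreams; ++i)
        busyKernel<<<1, 32, 0, streams[i]>>>(i);

    cudaDeviceSynchronize();
    for (int i = 0; i < nStreams; ++i)
        cudaStreamDestroy(streams[i]);
    return 0;
}
```

In the MPI cluster scenario the article mentions, the streams would come from separate CPU processes sharing one GPU; the same idea applies, with Hyper-Q keeping those processes’ work from stalling behind each other.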
“We designed Kepler with an eye towards three things: performance, efficiency and accessibility,” said Jonah Alben, senior vice president of GPU Engineering and principal architect of Kepler at NVIDIA. “It represents an important milestone in GPU-accelerated computing and should foster the next wave of breakthroughs in computational research.”