Monday, November 15, 2010

CUDA in the cloud

Amazon have announced that EC2 now supports GPU clusters using CUDA programming. That might just be a bunch of gobbledygook, so let's expand a little bit:


  • EC2 is Amazon's Elastic Compute Cloud, one of the leading cloud computing services.

  • GPUs are Graphics Processing Units, the specialized processors that sit on the 3D video card in your computer. Your computer arranges to have video processing done by the GPU, while regular computing is handled by your machine's CPU, the Central Processing Unit. Here's an example of a GPU, the NVidia Tesla M2050.

  • CUDA is a specialized programming platform and C-like language designed for offloading certain compute tasks from your CPU to your GPU (there's a small code sketch just after this list to give the flavor). It originated with NVidia but has been used by some other GPU libraries as well. Here's the starting point for learning more about CUDA.


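To give that flavor, here is a minimal sketch of my own (a toy example, not taken from NVidia's or Amazon's documentation): a kernel that adds two vectors element by element, plus the ordinary C code that copies data to the GPU, launches the kernel across many threads, and copies the result back.

    // A toy CUDA program: add two vectors on the GPU.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // The kernel runs on the GPU; each thread handles one element.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;              // a million elements
        const size_t bytes = n * sizeof(float);

        // Ordinary CPU-side ("host") arrays.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // GPU-side ("device") arrays; copy the inputs across.
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough 256-thread blocks to cover all n elements.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vectorAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back to the CPU and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f (expected 3.0)\n", hc[0]);

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

You compile this with NVidia's nvcc compiler and run it on a machine with a CUDA-capable GPU, which is exactly the kind of machine these new EC2 instances rent out by the hour.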

So Amazon are announcing that their cloud infrastructure now includes a substantial number of machines with high-end GPU hardware, that their cloud software has been enhanced to make that hardware available to virtual machine instances on demand, with programming access via the CUDA APIs, and that customers can now start renting such equipment for appropriate computing tasks.

And now you know enough to understand the first sentence of this post, and to appreciate Werner Vogels's observation that "An 8 TeraFLOPS HPC cluster of GPU-enabled nodes will now only cost you about $17 per hour." Wow! Let's see: an hour has 3600 seconds, so that's about 29 PetaFLOPs of computation per hour, which puts us somewhere around 60 cents per PetaFLOP. Is that right?
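Spelling that back-of-the-envelope arithmetic out (plain C this time, nothing GPU-specific; the only inputs are the two numbers from the quote above):

    #include <stdio.h>

    int main(void) {
        const double flops_per_sec  = 8e12;    /* 8 TeraFLOPS, the quoted rate    */
        const double seconds_per_hr = 3600.0;
        const double dollars_per_hr = 17.0;    /* the quoted hourly price         */

        const double flop_per_hr  = flops_per_sec * seconds_per_hr;   /* total ops in an hour */
        const double pflop_per_hr = flop_per_hr / 1e15;               /* about 28.8 PetaFLOPs */
        const double dollars_per_pflop = dollars_per_hr / pflop_per_hr;  /* about $0.59       */

        printf("PetaFLOPs of work per hour: %.1f\n", pflop_per_hr);
        printf("Dollars per PetaFLOP:       %.2f\n", dollars_per_pflop);
        return 0;
    }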
