Posted on 06/19/2013 9:10:53 AM PDT by Ernest_at_the_Beach
Nvidia is working on its own implementation of a 64-bit ARM processor, code-named Project Denver, and is keen on porting its CUDA parallel programming environment to it. But if Nvidia wants to sell a lot of Tesla coprocessor cards to do the number crunching for modestly powered ARM chips, it needs to make CUDA available not just on its own future ARM processors, but on any and all ARM-based server chips.
And that is precisely what Nvidia is doing with its CUDA 5.5 toolkit, which was announced at the International Supercomputing Conference (ISC) in Leipzig, Germany.
"We can't just support an Nvidia ARM platform," explains Sumit Gupta, general manager of the Tesla Accelerated Computing business unit at the company. "There are ARM CPUs coming into the enterprise market, and by next year there will be a whole bunch of them, and we want to connect GPUs to all of them. This will open up CUDA to more customers."
And help Nvidia sell more Tesla coprocessors. Perhaps a lot more.
There are many, many more ARM chips sold every year than x86 processors, if you count all of the chips sold into smartphones, tablets, and other devices, and Nvidia wants to hook its programming environment into as many of them as it can. The reason is simple: generally speaking, ARM chips suck at math relative to x86 floating-point units, and certainly compared to GPUs.
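To make the division of labor concrete, here is a minimal, illustrative sketch of what offloading floating-point work from a host CPU to an Nvidia card through CUDA looks like. This is not code from the article; it is the canonical vector-add pattern, written against the explicit-copy runtime API that CUDA 5.5 supported (managed memory arrived later). The host side is ordinary C that could compile for an ARM CPU just as well as an x86 one, which is the point of the port.

```cuda
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of the sum,
// so the heavy floating-point work runs on the Tesla card.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;               // one million floats
    size_t bytes = n * sizeof(float);

    // Host-side buffers (this code is CPU-architecture-agnostic).
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device-side buffers on the GPU coprocessor.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);

    // Copy inputs from the host to the card, launch, copy the result back.
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Nothing in the host code above cares whether `malloc` and `cudaMemcpy` are being called from an ARM or an x86 processor; porting the CUDA toolchain to ARM is what lets the same program drive a Tesla card from either.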
(Excerpt) Read more at theregister.co.uk ...