Nvidia working on first GPGPUs for Apple Macs

Graphics chipmaker Nvidia Corp. is in the early stages of developing its first Mac-bound GPGPUs, AppleInsider has learned.

Short for general-purpose computing on graphics processing units, GPGPUs are a new wave of graphics processors that can be instructed to perform computations previously reserved only for a system's primary CPU, allowing them to speed up non-graphics-related applications.

The technology — in Nvidia's case — leverages a proprietary architecture called CUDA, which is short for Compute Unified Device Architecture. It's currently compatible with the company's new GeForce 8 Series of graphics cards, allowing developers to use the C programming language to write algorithms for execution on the GPU.
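For illustration, the following is a minimal sketch of what a CUDA C program looks like: a small kernel that adds two arrays element by element, launched across thousands of GPU threads. The function and variable names are illustrative only; they are not drawn from Nvidia's sample code or from anything Mac-specific.

// Minimal CUDA C sketch: element-wise vector addition offloaded to the GPU.
// Names and sizes here are illustrative, not taken from any Nvidia or Apple code.
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Kernel: each GPU thread handles one array element.
__global__ void vector_add(const float *a, const float *b, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1 << 20;                 // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host-side arrays.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate GPU memory and copy the inputs over.
    float *d_a, *d_b, *d_out;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch the kernel across many parallel threads.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_out, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", h_out[0]);     // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(h_a); free(h_b); free(h_out);
    return 0;
}

The key idea is that the work is split into many independent threads, each operating on its own slice of the data, which is the pattern the number-crunching applications described below exploit.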

GPGPUs have proven most beneficial in applications requiring intense number crunching, examples of which include high-performance computer clusters, raytracing, scientific computing applications, database operations, cryptography, physics-based simulation engines, and video, audio and digital image processing.

It's likely that the first Mac-compatible GPGPUs would turn up as build-to-order options for Apple's Mac Pro workstations, given their ability to aid digital video and audio professionals in sound effects processing, video decoding and post-processing.

Precisely when those cards will crop up is unclear, though Nvidia this week put out an urgent call through its Santa Clara, Calif.-based offices for a full-time staffer to help design and implement kernel-level Mac OS X drivers for the cards.

Nvidia's $1,500 Tesla graphics and computing hybrid card, released in June, is the chipmaker's first product explicitly built for both graphics and high-intensity, general-purpose computing.

Programs based on the CUDA architecture can not only tap the card's 3D performance but also repurpose its shader processors for advanced math. That massively parallel design leads to tremendous gains in performance over regular CPUs, Nvidia claims.

In science applications, calculations have seen speed boosts from 45 times to as much as 415 times in processing MRI scans for hospitals. Increases such as this can mean the difference between using a single system and a whole computer cluster to do the same work, the company says.