The University of Illinois is home to yet another Mac supercomputing cluster, AppleInsider learned.
The cluster uses wired Ethernet networking through full-duplex 100Mbps switches from Cisco Systems, with a 1Gbps link between the front-end array and the primary Cisco switch, and runs a version of Apple's Mac OS X Server 10.3 "Panther" operating system.
Until recently, the University's Turing Cluster ran version 7.2 of Red Hat Linux on just 208 dual-processor machines supplied by Dell and Hewlett-Packard. But last summer, Michael Heath, Director of Computational Science and Engineering, concluded that the Intel-based cluster had reached the end of its useful life: it was no longer reliable, and its aging computational ability was no longer drawing interest, he said.
Heath set out to revamp the Turing Cluster into a facility open to local users, scientists, and the University's student population of over 35,000. He pitched the project to several corporations; Apple, he said, responded quite eagerly.
After generating the necessary funding through donations from over a half-dozen University departments — including the Beckman Institute, College of Engineering, and Department of Computer Science — Heath and a team of supporters began nailing down the details of the cluster with Apple. By October 2004, they had constructed a 64-node mini-cluster in a temporary server room as a small-scale model of the entire system.
Speaking with AppleInsider this week, Heath revealed that the full-scale 640-node Xserve G5 cluster has been operational for about a month now, but is not yet sufficiently stable. He said it would take approximately one more month to stabilize the cluster, acquire a few missing parts, and exchange a couple of Xserve G5 units that were dead on arrival.
The cluster is housed in a newly renovated server room that can handle a cooling load of up to 550,000 BTU per hour and supports over 45 tons of cooling capacity using four distinct cooling systems — three of which can adequately support the cluster at any given time.
In the near future, the University hopes to be able to perform tests to rank the cluster's computational ability, which, according to its specifications, could peak at nearly 10 teraflops.
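That near-10-teraflop figure is consistent with a simple back-of-the-envelope calculation, assuming the cluster's G5 (PowerPC 970) processors run at 2.0GHz and, as that chip's two floating-point units can each complete a fused multiply-add per cycle, sustain 4 FLOPs per cycle per processor at theoretical peak:

```python
# Theoretical peak for a 640-node dual-processor Xserve G5 cluster.
# Assumption: 4 FLOPs/cycle per processor (two FPUs, each doing one
# fused multiply-add, i.e. 2 FLOPs, per cycle on the PowerPC 970).
nodes = 640              # Xserve G5 nodes
cpus_per_node = 2        # dual-processor machines
clock_hz = 2.0e9         # 2.0 GHz
flops_per_cycle = 4      # 2 FPUs x 2 FLOPs (fused multiply-add)

peak_flops = nodes * cpus_per_node * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.2f} TFLOPS")  # -> 10.24 TFLOPS
```

Measured LINPACK numbers would of course come in below this theoretical ceiling, which is presumably why the University plans benchmark runs to rank the machine.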
Heath declined to provide specifics of the University's arrangement with Apple, but a web page dedicated to the new Turing Cluster says Apple provided the hardware under a combination purchase/donation agreement.
And while Heath conceded that there is no formal upgrade plan for the cluster, he said his team has been seriously considering expanding it to 1,280 nodes, double its current size.
21 Comments
Cool. FYI, I've been keeping a list of Mac supercomputer bookmarks. Xserve clusters seem to be big lately. Price/performance, reduced heat output, and UNIX with easy management are often cited as reasons.
There are now more than a dozen Mac OS X / Xserve clusters of 32 processors or more--eight with 200 processors or more, four with 1,000 processors or more--topping out at 3,132 for MACH 5:
* 1280 processors - U. of Illinois' "Turing Cluster"
(640 dual 2.0 nodes, possibly to be doubled, used for a range of academic research and replacing a Dell/HP Linux cluster)
* 250 processors - U. Pitt's Human Genetics cluster
(125 dual G5 nodes used for genetics research)
* 200 processors - GeoCenter cluster
(100 dual 2.0 nodes used for seismic data processing)
* 1344 processors - French CGG cluster
(672 dual nodes, integrated into an existing 40 TFLOP cluster for oil prospecting)
* 48 processors - Louisiana State's "Nemaux"
(24 dual G5 nodes with Xgrid, used for 3D animation, audio, and scientific computing)
* 2200 processors - VA Tech's "System X" aka "Big Mac"
(1100 dual 2.3 nodes and counting, used for a range of academic research)
* 256 processors - UCLA's "Dawson"
(128 dual 2.0 nodes, used for plasma physics research)
* 32 processors - Australian Defence Force's "Checkmate"
(16 dual nodes, used for command and control simulations)
* 86 processors - UNC's cluster
(43 dual nodes, used for proteomics research)
* 76 processors - UC Davis's cluster
(38 dual nodes, used for Genome Center research)
* 72 processors - UC Santa Cruz's cluster
(36 dual nodes and counting, used for a range of academic research)
* 3132 processors - US Army's "MACH 5"
(1566 dual nodes, used by the Army and NASA for hypersonic flight research)
* 512 processors - U. Maine's "Baby MACH 5"
(256 dual nodes, used for software development and optimization for MACH 5)
Also, the US Navy is using Xserves on board submarines, to run their Linux-based sonar imaging system.
And VA Tech is considering much larger Mac clusters in future: "System L" and "System C."
Whoa, go Apple!
Doing the math... 9488+ G5s!
Screed
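(The 9,488 figure is just the processor counts in the list above added up; a quick check:)

```python
# Processor counts from the Xserve cluster list above, in order:
# Turing, U. Pitt, GeoCenter, CGG, Nemaux, System X, Dawson,
# Checkmate, UNC, UC Davis, UC Santa Cruz, MACH 5, Baby MACH 5.
processors = [1280, 250, 200, 1344, 48, 2200, 256,
              32, 86, 76, 72, 3132, 512]
print(sum(processors))  # -> 9488
```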
Hey, that's my college!!!
9,488 G5 chips... and IBM has only made 10,000!
It all makes sense now...