Steele consisted of 893 64-bit, 8-core Dell PowerEdge 1950 systems and nine 64-bit, 8-core Dell PowerEdge 2950 systems with various combinations of 16 to 32 gigabytes of RAM, 160 gigabytes to 2 terabytes of disk, and Gigabit Ethernet and SDR InfiniBand connections to each node. The cluster had a theoretical peak performance of more than 60 teraflops. Steele and its 7,216 cores replaced Purdue's Lear cluster supercomputer, which had 1,024 cores and was substantially slower. Steele was networked primarily through a Foundry Networks BigIron RX-16 switch and a Tyco MRJ-21 wiring system delivering more than 900 Gigabit Ethernet connections and eight 10 Gigabit Ethernet uplinks.
The first 812 nodes of Steele were installed in four hours on May 5, 2008, by a team of 200 Purdue computer technicians and volunteers, including some from in-state athletic rival Indiana University. The staff had made a video titled "Installation Day" as a parody of the film Independence Day. The cluster was running 1,400 science and engineering jobs by lunchtime. In 2010, Steele was moved to an HP Performance Optimized Datacenter, a self-contained, modular, shipping-container-style unit installed on campus, to make room for new clusters in Purdue's main research computing data center.
Funding
The Steele supercomputer and Purdue's other clusters were part of the Purdue Community Cluster Program, a partnership between ITaP and Purdue faculty. In Purdue's program, a "community" cluster is funded by hardware money from grants, faculty startup packages, institutional funds and other sources. ITaP's Rosen Center for Advanced Computing administers the community clusters and provides user support. Each faculty partner always has ready access to the capacity he or she purchases, and potentially to more computing power when other partners' nodes are idle. In addition, a portion of Steele was dedicated directly to the National Science Foundation's TeraGrid system.
Users
Steele users came from fields such as aeronautics and astronautics, agriculture, biology, chemistry, computer and information technology, earth and atmospheric sciences, mathematics, pharmacology, statistics, and electrical, materials and mechanical engineering. The cluster was used to design new drugs and materials, to model weather patterns and the effects of global warming, and to engineer future aircraft and nanoelectronics. Steele also served the Tier-2 Center at Purdue for the Compact Muon Solenoid, one of the particle physics experiments conducted with the Large Hadron Collider.
Unused, or opportunistic, cycles from Steele were made available to the TeraGrid and the Open Science Grid using Condor software. Steele was part of Purdue's distributed computing Condor flock, and the center of DiaGrid, a nearly 43,000-processor Condor-powered distributed computing network for research involving Purdue and partners at nine other campuses.
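For readers unfamiliar with how opportunistic cycle harvesting works in practice, the sketch below shows roughly how a researcher might hand a batch of independent jobs to a Condor (now HTCondor) pool so that they run on whatever nodes happen to be idle. It is a minimal illustration under assumed conditions, not Purdue's actual configuration: the script name simulate.sh, the resource requests, and the file layout are all hypothetical.

    #!/usr/bin/env python3
    """Illustrative only: queue a batch of jobs on a Condor/HTCondor pool.
    File names and resource requests are hypothetical."""
    import subprocess
    from pathlib import Path

    # A minimal submit description file. "vanilla" universe jobs can be
    # matched to whatever idle machines the pool's matchmaker finds,
    # which is how opportunistic (backfill) cycles are typically used.
    SUBMIT_FILE = """\
    universe       = vanilla
    # simulate.sh is a hypothetical analysis script
    executable     = simulate.sh
    arguments      = $(Process)
    output         = out/run_$(Process).out
    error          = out/run_$(Process).err
    log            = out/run.log
    request_cpus   = 1
    request_memory = 2048
    queue 100
    """

    def main() -> None:
        Path("out").mkdir(exist_ok=True)
        Path("steele_demo.sub").write_text(SUBMIT_FILE)
        # condor_submit is the standard HTCondor command-line client.
        subprocess.run(["condor_submit", "steele_demo.sub"], check=True)

    if __name__ == "__main__":
        main()

The "queue 100" line submits 100 instances of the same job, each distinguished by the $(Process) macro; in a pool like the one described above, those instances would be scheduled onto idle nodes as they become available and preempted or requeued when the nodes' owners reclaim them.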
Naming
The Steele cluster is named for John M. Steele, Purdue associate professor emeritus of computer science, who was involved with research computing at Purdue almost from its inception. He joined the Purdue staff in 1963 at the Computer Sciences Center associated with the then-new Computer Science Department. He served as the director of the Purdue University Computing Center, the high-performance computing unit at Purdue prior to the Rosen Center for Advanced Computing, from 1988 to 2001 before retiring in 2003. His research interests have been in the areas of computer data communications and computer circuits and systems, including research on an early mobile wireless Internet system.