Shodor

a national resource for computational science education


Supercomputing Sites
Grid Computing

OSG is a network of scientific computing resources devoted to furthering scientific discovery. A large portion of this mission consists of processing data from CERN's LHC.


TeraGrid is a network of eleven sites combining their HPC resources to form a high performance grid for open scientific research.


National Labs

Ames National Laboratory has a number of research clusters of varying designs created in a partnership with Iowa State University. The different clusters have varying software packages and access policies.
Resources


Argonne National Laboratory has a pair of IBM Blue Gene/P supercomputers (Intrepid and Surveyor) as well as a GPU-based visualization machine (Eureka). Intrepid has 40,960 nodes and ranks #8 on the Top500 list with 458.61 teraflops performance on the Linpack benchmark and a theoretical peak of 557.06 teraflops; a worked example of how such a peak figure is derived follows the links below. Surveyor has 1,024 nodes with a theoretical peak performance of 13.9 teraflops.
Intrepid's Top 500 page
Compute Resource Information
Software
How to Use
Allocations
Internships
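
The Linpack and theoretical peak figures quoted throughout this page relate in a standard way: the theoretical peak (Rpeak) is core count times clock rate times floating-point operations per cycle, and the Linpack result (Rmax) is the fraction of that peak the system sustains on the benchmark. The sketch below reproduces Intrepid's peak using commonly cited Blue Gene/P per-node specs (quad-core 850 MHz PowerPC 450, 4 flops per cycle); those per-node values are assumptions on our part, not figures taken from this page.

# Hypothetical sketch: deriving Intrepid's theoretical peak (Rpeak) and its
# Linpack efficiency from per-node specs. The per-node figures are assumed
# Blue Gene/P values, not taken from this page.
nodes = 40960            # Intrepid node count (quoted above)
cores_per_node = 4       # quad-core PowerPC 450 (assumed)
clock_ghz = 0.85         # 850 MHz clock (assumed)
flops_per_cycle = 4      # dual FPU with fused multiply-add (assumed)

rpeak_tf = nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0
rmax_tf = 458.61         # Linpack result quoted above

print(f"Rpeak: {rpeak_tf:.2f} teraflops")               # ~557.06, matching the text
print(f"Linpack efficiency: {rmax_tf / rpeak_tf:.0%}")  # roughly 82%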


Brookhaven National Laboratory has an IBM Blue Gene/L system named New York Blue/L which ranks #58 on the Top500 list. In addition to the 18-rack Blue Gene/L system, there is a 2-rack Blue Gene/P (New York Blue/P) system.
New York Blue/L's Top500 Page
Getting Access
New York Blue/L User Guide
New York Blue/P User Guide
New York Blue/L Software


Fermilab has a number of clusters that are used internally for research, including one Koi Computers system, J/Psi, ranked #141 on the Top500 list.
Internships
J/Psi's Top500 Page


Idaho National Laboratory possesses Ice Storm, an SGI cluster which was ranked 470th on the June 2009 Top500 list but is no longer ranked as of the November 2009 list.
Ice Storm's Top500 Page
Internships


LBNL has a Cray XT4 supercomputer named Franklin which is ranked #15 on the Top500 list. LBNL also has multiple other clusters, as well as a Cray XT5 system named Hopper which is currently under construction.
Franklin's Top500 page
Supercomputing Resources at LBNL
Internships


Lawrence Livermore National Laboratory possesses 3 IBM supercomputers that rank in the Top 500: ASC Purple at #66, Dawn at #11, and Blue Gene/L at #7. LLNL also has 5 Appro supercomputers in the Top 500: Juno at #27, Hera at #44, Graph at #57, Atlas at #150, and Minos at #272. Additionally, LLNL also possesses several other clusters that do not rank on the Top 500.
Blue Gene/L's Top 500 Page
Dawn's Top 500 Page
ASC Purple's Top 500 Page
Juno's Top 500 Page
Hera's Top 500 Page
Graph's Top 500 Page
Atlas' Top 500 Page
Minos' Top 500 Page
LLNL Supercomputing Resources Page
LLNL Supercomputing Allocations Page
How to Run Jobs on LLNL Resources
Internships


Los Alamos National Laboratory possesses two computers in the Top 500 list: Roadrunner (#2) and Cerrillos (#29). LANL also has some additional clusters.
Roadrunner's Top 500 Page
Cerrillos' Top 500 Page
HPC at LANL
Internships


NETL has three clusters, none of which rank in the Top 500. One of the clusters uses 256 Intel Xeon processors with a gigabit Ethernet interconnect. LINPACK performance is rated at 961 gigaflops, with a theoretical peak of 1,567 gigaflops; a brief efficiency calculation follows the links below. The other two clusters use AMD Opteron processors and have lower performance.
NETL's Clusters
Internships
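
For comparison with the machines above, the same Rmax/Rpeak ratio can be computed for this cluster; commodity gigabit Ethernet interconnects typically sustain a smaller share of peak on Linpack than the purpose-built networks used by the Top 500 systems listed elsewhere on this page. A minimal sketch using only the two figures quoted above:

# Linpack efficiency (Rmax / Rpeak) for the NETL Xeon cluster, using only
# the figures quoted in the description above.
rmax_gflops = 961
rpeak_gflops = 1567
print(f"Linpack efficiency: {rmax_gflops / rpeak_gflops:.0%}")   # roughly 61%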


NREL has a pair of supercomputers. One is an Altix system that consists of 24 Intel IA64 processors, while the other is a Linux cluster which has 140 nodes with dual AMD Opteron processor nodes.
NREL Computing Resources
Internships


New Brunswick Laboratory does not publicize information on their high performance computing resources.


ORISE does not publish information on their high performance computing resources.
Internships


ORNL houses multiple powerful supercomputers, five of which are on the Top 500 list: Jaguar XT5 (#1), Kraken (#3), Jaguar XT4 (#16), Athena (#30), and an IBM Blue Gene/P (#379). Jaguar XT5 and Kraken are both Cray XT5 systems, while Jaguar XT4 and Athena are Cray XT4 systems. Kraken and Athena are operated in conjunction with NICS and the University of Tennessee. In addition to the five Top 500 computers, ORNL has many smaller high performance computing systems.
Jaguar XT5's Top 500 Page
Kraken's Top 500 Page
Jaguar XT4's Top 500 Page
Athena's Top 500 Page
Blue Gene/P's Top 500 Page
NCCS Computers at ORNL
NICS Computers at ORNL
Internships


PNNL has multiple clusters, including two SGI Altix systems, a Cray XD1, and an IBM Power5 system. None of PNNL's computers rank in the Top 500.
PNNL's HPC Site
PNNL Internships


PPPL does not publish any information about their supercomputing resources.
PPPL Internships


RESL does not publish any information about their supercomputing resources.


Sandia has two computers on the Top500 list - Red Sky at #10 and Red Storm at #17.
Red Sky's Top 500 Page
Red Storm's Top 500 Page
Internships at Sandia


SREL does not publish any information about their supercomputing resources.
Internships at SREL


SRNL does not publish any information about their supercomputing resources.
Internships at SRNL


SLAC does not publish any information about its supercomputing resources.
Internships at SLAC


TJNAF does not publish any information about its supercomputing resources.
Internships via DOE


Supercomputing Centres

TACC has three HPC installations, two of which are in the Top 500. Ranger is TACC's highest ranking computer at #9, followed by Lonestar at #105. The third system, Stampede, is a 1,736-node Linux cluster; each node contains two Intel Clovertown quad-core processors, which deliver a peak performance of 16 teraflops. TACC also has several visualization machines.
Ranger's Top 500 Page
Lonestar's Top 500 Page
TACC HPC Resources
Software Available on TACC Computers
TACC Training
TACC Allocations


OSC has two HPC clusters: Glenn and OSC BALE. Glenn is ranked #107 on the Top 500 list, while OSC BALE does not rank. OSC BALE consists of two subclusters - an eighteen-node visualization cluster and a workstation cluster. The visualization nodes each have two dual-core AMD Opteron CPUs at 2.6 GHz and two NVIDIA Quadro FX 5600 graphics cards. The workstation nodes each have a single AMD Athlon X2 4200+ dual-core processor. Both the visualization and workstation machines use InfiniBand as their interconnect.
Glenn's Top 500 Page
OSC Hardware
OSC Software
OSC Training
OSC Accounts


OSCER has one dedicated cluster, Sooner, and a Condor pool. Sooner is ranked #252 in the Top 500 list.
Sooner's Top 500 Page
OSCER Hardware
Sooner Hardware Details and Software
Accessing OSCER Resources


IU has two clusters - Big Red and Quarry. Big Red ranks at #452 in the Top 500, while Quarry does not rank. Quarry consists of 140 IBM HS21 blade servers with two quad-core Intel Xeon 5335 processors per node. The system uses gigabit Ethernet for its interconnect and delivers 8.96 teraflops.
Big Red's Top 500 Page
Big Red's Hardware
Big Red's Software
How to use Big Red
Quarry's Hardware
Quarry's Software
How to use Quarry
Allocations via TeraGrid


Purdue's Rosen Center for Advanced Computing (RCAC) has numerous HPC installations, including Steele, which ranks at #277 in the Top 500 list.
Steele's Top 500 Page
RCAC Resource
RCAC Training
RCAC Software


LONI brings together several Louisiana universities' HPC resources into one network. The network includes multiple small clusters as well as Queen Bee, which is ranked #163 on the Top 500.
Queen Bee's Top 500 Page
LONI Resources
Accounts with LONI
LONI/LSU Training
Software


LSU has four HPC systems in addition to the machines it houses that are a part of LONI. None of the systems are fast enough to rank in the Top 500.
LSU Systems
LSU Software
LSU Training
LSU Accounts/Allocations


CAC has numerous computing resources, some of which are made available through TeraGrid.
CAC Resources
V4 Linux Software
V4 Windows Software
CAC Training
How to Start a CAC Project


NCSA is the future home of the Blue Waters supercomputer. Currently, however, it is home to four HPC clusters, one of which ranks in the Top 500. The ranking cluster is named Abe and delivers 62.68 sustained teraflops, which puts it at #73. The other clusters are Lincoln, a heterogeneous cluster with 192 Dell PowerEdge 1950 compute nodes (dual quad-core Intel Harpertowns per node) and 96 NVIDIA Tesla S1070 accelerators, delivering 47 teraflops; Cobalt, an SGI Altix system with 1,024 Intel Itanium 2 processors delivering 6.1 teraflops sustained; and Mercury, an IBM system with 1,774 Intel Itanium 2 processors delivering 7.22 teraflops sustained.
Abe's Top 500 Page
NCSA Hardware
NCSA Software
NCSA Allocations
NCSA Training


PSC has four HPC systems, including two SGI Altix machines and an HP C3000 machine. The SGI systems are named Pople and Salk, which contain 768 and 144 cores, respectively. The HP machine has 64 cores and is named Warhol. The final system is a twenty-node cluster named Codon with two 1.4 GHz AMD Opteron processors per node. None of the systems rank in the Top 500.
PSC Hardware
PSC Software
PSC Training


SDSC has several clusters that are integrated with TeraGrid. However, none of the clusters are fast enough to rank in the Top 500. The clusters include Triton, a three component resource comprising compute, data analysis, and storage systems; Dash, a 5.2 teraflop, 68-node system using two Intel Nehalem quad-core processors per node; and Bebop, a Sun X64 system using 8 quad-core processors dedicated to data analysis and mining.
SDSC Resources
SDSC Software
SDSC Allocations
SDSC Training

