ALCF's Intrepid Ranked #1 on Graph 500 List


Intrepid, the IBM Blue Gene/P supercomputer housed at the Argonne Leadership Computing Facility (ALCF), was ranked #1 on the first Graph 500 list, unveiled November 17 at SC10. The list ranks supercomputers by their performance on data-intensive applications (http://www.graph500.org/Specifications.html) and thus complements the Top 500 list, which is based on the LINPACK benchmark (http://www.top500.org/project/linpack).

Data-intensive applications are an increasingly important class of HPC workload, yet current benchmarks and performance metrics provide little useful information about how well supercomputing systems will handle them.

Backed by a steering committee of more than 30 international HPC experts from academia, industry, and national laboratories, Graph 500 establishes a new set of large-scale benchmarks for these applications. The benchmarks will guide the design of hardware architectures and software systems intended to support such applications and will inform procurement decisions. Graph algorithms are a core part of many analytics workloads.

The Graph 500 steering committee is developing comprehensive benchmarks to address three application kernels: concurrent search (breadth-first search), optimization (single-source shortest path), and edge-oriented (maximal independent set). The committee is also addressing five graph-related business areas: cybersecurity, medical informatics, data enrichment, social networks, and symbolic networks. Committee members are also working with the SPEC committee to include the Graph 500 benchmark in its CPU benchmark suite.
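To make the search kernel concrete, here is a minimal Python sketch, not the official Graph 500 reference code, of a level-synchronous breadth-first search timed to produce a TEPS (traversed edges per second) figure. The real benchmark builds a Kronecker (R-MAT-style) graph and defines TEPS over the input edges of the traversed component; this sketch substitutes a uniform random graph and counts edge examinations.

```python
# Minimal sketch of the Graph 500 "concurrent search" (BFS) kernel.
# NOT the official reference code: the benchmark uses a Kronecker
# generator and a stricter TEPS definition than what is shown here.
import random
import time
from collections import defaultdict

def make_graph(scale, edgefactor=16):
    """Undirected graph with 2**scale vertices and edgefactor * 2**scale
    edges, drawn uniformly at random (self-loops and duplicate edges are
    tolerated, as in a quick sketch)."""
    n = 2 ** scale
    adj = defaultdict(list)
    for _ in range(edgefactor * n):
        u, v = random.randrange(n), random.randrange(n)
        adj[u].append(v)
        adj[v].append(u)
    return n, adj

def bfs_teps(adj, root):
    """Level-synchronous BFS from root; returns (parent map, TEPS)."""
    parent = {root: root}
    frontier = [root]
    edges_traversed = 0
    start = time.perf_counter()
    while frontier:
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                edges_traversed += 1       # count each edge examination
                if v not in parent:
                    parent[v] = u
                    next_frontier.append(v)
        frontier = next_frontier
    elapsed = time.perf_counter() - start
    return parent, edges_traversed / elapsed

if __name__ == "__main__":
    n, adj = make_graph(scale=16)          # far below even "Toy" scale
    _, teps = bfs_teps(adj, root=random.randrange(n))
    print(f"{teps / 1e6:.1f} ME/s")        # mega-edges traversed per second
```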

The Graph 500 list was introduced at ISC 2010, held May 30-June 3. In future years, the list is expected to alternate between the annual ISC and SC conferences.

| Rank | Machine | Owner | Problem Size | TEPS | Implementation |
|------|---------|-------|--------------|------|----------------|
| 1 | DOE/SC/ANL Intrepid, IBM Blue Gene/P (8,192 of 40,960 nodes / 32,768 of 163,840 cores) | Argonne National Laboratory | Scale 36 (Medium) | 6.6 GE/s | Optimized |
| 2 | Franklin (Cray XT4, 500 of 9,544 nodes) | NERSC | Scale 32 (Small) | 5.22 GE/s | Optimized |
| 3 | cougarxmt (128-node Cray XMT) | Pacific Northwest National Laboratory | Scale 29 (Mini) | 1.22 GE/s | Optimized |
| 4 | graphstorm (128-node Cray XMT) | Sandia National Laboratories | Scale 29 (Mini) | 1.17 GE/s | Optimized |
| 5 | Endeavor (256-node, 512-core Westmere X5670 @ 2.93 GHz, IB network) | Intel Corporation | Scale 29 (Mini) | 533 ME/s | Reference |
| 6 | Erdos (64-node Cray XMT) | Oak Ridge National Laboratory | Scale 29 (Mini) | 50.5 ME/s | Reference |
| 7 | Red Sky (Nehalem X5570 @ 2.93 GHz, IB torus, 512 processors) | Sandia National Laboratories | Scale 28 (Toy++) | 477.5 ME/s | Reference |
| 8 | Jaguar (Cray XT5-HE, 512-node subset) | Oak Ridge National Laboratory | Scale 27 (Toy+) | 800 ME/s | Reference |
| 9 | Endeavor (128-node, 256-core Westmere X5670 @ 2.93 GHz, IB network) | Intel Corporation | Scale 26 (Toy) | 615.8 ME/s | Reference |
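A note on the columns, under the assumptions of the published Graph 500 specification: a problem of scale N uses a graph with 2^N vertices and, at the benchmark's default edge factor of 16, about 16 x 2^N edges; TEPS counts traversed edges per second (GE/s and ME/s above are billions and millions of edges per second, respectively). The short Python snippet below makes the sizes behind the "Problem Size" column explicit:

```python
# Approximate graph sizes implied by the "Problem Size" column, assuming
# the Graph 500 defaults: 2**scale vertices and edge factor 16 (so the
# generator emits 16 * 2**scale edges).
for scale, label in [(36, "Medium"), (32, "Small"), (29, "Mini"),
                     (28, "Toy++"), (27, "Toy+"), (26, "Toy")]:
    vertices = 2 ** scale
    edges = 16 * vertices
    print(f"Scale {scale} ({label}): {vertices:.2e} vertices, {edges:.2e} edges")
```

For example, the Scale 36 run on Intrepid corresponds to roughly 6.9e10 vertices and 1.1e12 edges under these defaults.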