In the News

Mar 9, 2016

Paving the Way for Theta and Aurora


Last year, when Intel and the U.S. Department of Energy announced the $200 million supercomputing investment at Argonne National Laboratory, work was already underway at the Argonne Leadership Computing Facility to prepare key applications for the next-generation supercomputers.

Feb 24, 2016

CAVE2 Virtual Environment

National Science Foundation

The CAVE2 system is a next-generation, large-scale virtual environment: a room in which images are seamlessly displayed so as to immerse an observer in a cyber world of 3-D data. Here, Khairi Reda, a research assistant at the University of Illinois at Chicago's Electronic Visualization...

Feb 12, 2016

Budget Request Reveals New Elements of US Exascale Program


A drill down into the FY2017 budget released by the Obama administration on Tuesday brings to light important information about the United States’ exascale program. As we reported in earlier coverage of the...

Feb 10, 2016

Video: Theta & Aurora – Big Systems for Big Science


In this video, Susan Coghlan from the Argonne Leadership Computing Facility provides a sneak peek at its upcoming Intel Xeon Phi-based supercomputers.

Feb 3, 2016

Building Better Public/Private HPC Partnerships Is Focus of Pending NCSA Study

EnterpriseTech

The most powerful computer systems in the world reside in the public sector – principally, at federally funded supercomputer centers. Yet some of the most demanding workload requirements reside in the private sector. Bridging the gap between the two – finding better ways to get advanced scale...

Jan 29, 2016

Apply now to spend the summer supercomputing

Scientific Computing World

The summer of 2016 will see a raft of summer schools and other initiatives to train more people in high-performance computing, including efforts to increase the diversity of HPC specialists with a specific program aimed at ethnic minorities. But interested students need to get their applications...

Jan 26, 2016

Scalability of turbulent flow simulations on large-scale computers

Argonne Mathematics and Computer Science Division

A fundamental principle of parallel computing is that by subdividing a computation across P processors, one can realize a P-fold reduction in time to solution. For example, a simulation that uses a billion particles or gridpoints can be distributed across two compute nodes and run in half the...
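The scaling principle the teaser describes can be sketched in a few lines. This is a minimal illustration, not code from the article: it subdivides a toy "particle" sum across P worker processes with Python's `multiprocessing.Pool`, the same domain-decomposition idea that, in the ideal case, yields a P-fold reduction in time to solution. The function names and the squared-integer workload are invented for the example.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    # Each worker handles one contiguous slice of the "particles".
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum(n, p):
    # Subdivide n gridpoints across p workers; in the ideal case this
    # cuts the time to solution by a factor of p (perfect strong scaling).
    step = n // p
    chunks = [(i * step, (i + 1) * step if i < p - 1 else n)
              for i in range(p)]
    with Pool(p) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # The decomposed result matches the serial answer exactly.
    assert parallel_sum(1000, 4) == sum(i * i for i in range(1000))
```

In practice, as the article goes on to discuss, communication and synchronization costs keep real simulations from achieving this ideal P-fold speedup at very large scale.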

Jan 21, 2016

Mira Supercomputer Simulations Give New “Edge” to Fusion Research

Scientific Computing

Using Mira, physicists from Princeton Plasma Physics Laboratory uncovered a new understanding of electron behavior in edge plasma. Based on this discovery, improvements were made to a well-known analytical formula that could enhance predictions and, ultimately, increase fusion power...

Jan 18, 2016

Exploring the dark universe with supercomputers


Scientists use more than telescopes to search for clues about the nature of dark energy. Increasingly, dark energy research is taking place not only at mountaintop observatories with panoramic views but also in the chilly, humming rooms that house state-of-the-art supercomputers. Argonne...

Jan 14, 2016

Argonne Paves the Way for Future Systems


Last April, the third and final piece of the CORAL acquisition program clicked into place when the U.S. Department of Energy signed a $200 million supercomputing contract with Intel to supply Argonne National Laboratory with two next-generation Cray supercomputers: an 8.5-petaflop “Theta” system...