Simulating and Learning in the ATLAS Detector at the Exascale

PI Walter Hopkins, Argonne National Laboratory
Aurora ESP: Simulating and Learning in the ATLAS Detector at the Exascale

A candidate event in which a Higgs boson is produced in association with a top quark and an anti-top quark, which decay to jets of particles. The challenge is to identify and reconstruct this type of event in the presence of background processes that have similar signatures but are thousands of times more likely. (Image: CERN)

Project Summary

The ATLAS experiment at the Large Hadron Collider measures particles produced in proton-proton collisions much like an extraordinarily rapid camera. These measurements led to the discovery of the Higgs boson, but hundreds of petabytes of data remain unexamined, and the experiment's computational needs will grow by an order of magnitude or more over the next decade. This project deploys the necessary workflows and updates algorithms for exascale machines, preparing Aurora for effective use in the search for new physics.

Project Description

The Large Hadron Collider (LHC) at CERN is the highest energy particle collider in the world.

The ATLAS experiment, one of two multi-purpose detectors at the LHC, measures the particles produced in proton-proton collisions much like a camera taking pictures, at an astonishing rate of 40 million images per second and with over 150 million readout channels.

ATLAS currently uses over 2.5 billion core-hours per year to simulate, reconstruct, and analyze the images collected in these collisions. Thousands of ATLAS scientists continue to analyze several hundred petabytes of data, looking for signs of new physics in order to answer the questions that the Standard Model does not address. The needs of the experiment will increase by an order of magnitude in the next ten years; the effective utilization of Aurora will be key to ensuring ATLAS continues delivering discoveries on a reasonable timescale, and may enable new analyses not yet envisioned.

This project aims to enable ATLAS workflows for efficient end-to-end production on Aurora, optimize ATLAS software for parallel environments, and produce the large simulation datasets needed to investigate new detector-reconstruction algorithms, such as deep neural networks trained to identify particle signatures in the detector or to reconstruct full event topologies. Successful algorithms would then be integrated into the ATLAS reconstruction framework.
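
As a rough illustration of the kind of deep-neural-network reconstruction algorithm mentioned above, the sketch below (a generic Python/PyTorch example, not ATLAS software) trains a small feed-forward classifier to separate a signal process from background using a vector of detector-derived features. The class name, feature count, network size, and synthetic data are all hypothetical choices made for illustration.

    # Minimal, hypothetical sketch: a small feed-forward network that
    # classifies "signal" vs. "background" events from a generic feature
    # vector. Everything here (names, shapes, data) is illustrative only.
    import torch
    import torch.nn as nn

    class EventClassifier(nn.Module):
        def __init__(self, n_features: int = 20):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_features, 64),
                nn.ReLU(),
                nn.Linear(64, 64),
                nn.ReLU(),
                nn.Linear(64, 1),  # single logit: signal vs. background
            )

        def forward(self, x):
            return self.net(x)

    # Synthetic stand-in data; in practice the inputs would be reconstructed
    # quantities (e.g. jet kinematics) taken from large simulated datasets.
    features = torch.randn(1024, 20)
    labels = torch.randint(0, 2, (1024, 1)).float()

    model = EventClassifier()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(5):
        optimizer.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss = {loss.item():.4f}")

In this project, training data of the necessary scale would come from the simulation workflows run on Aurora, and a successful model of this kind would ultimately be integrated into the ATLAS reconstruction framework.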

Note: The original project PI, Jimmy Proudfoot, has retired.

