The ALCF’s data storage system is used to retain the data generated by simulations and visualizations. Disk storage provides intermediate-term storage for active projects, offering a means to access, analyze, and share simulation results. Tape storage is used to archive data from completed projects.
The ALCF has Lustre file systems and GPFS file systems for data storage:
- Grand: An HPE ClusterStor E1000 with 100 PB of usable capacity across 8,480 disk drives. The interconnect is HDR InfiniBand. It is a Lustre file system with 160 Object Storage Targets and 40 Metadata Targets, and a sustained data transfer rate of 650 GB/s. Its primary use is compute campaign storage. Also see Theta Disk Quota and Data Policy.
- Eagle: An HPE ClusterStor E1000 with 100 PB of usable capacity across 8,480 disk drives. The interconnect is HDR InfiniBand. It is a Lustre file system with 160 Object Storage Targets and 40 Metadata Targets, and a sustained data transfer rate of 650 GB/s. Its primary use is sharing data with the research community via Globus. Also see Theta Disk Quota and Data Policy.
- theta-fs0: A Lustre file system residing on a Sonexion 3000 storage array, with a usable capacity of 9.2 PB and an aggregate transfer rate of 210-240 GB/s. Also see Theta Disk Quota and Data Policy.
- theta-fs1: A GPFS file system residing on an Elastic Storage System (ESS) cluster, with a usable capacity of 7.9 PB and an aggregate transfer rate of 400 GB/s.
- mira-fs0: Resides on 16 DDN SFA12Ke storage arrays, each containing 560 3 TB SATA hard drives, for a total of 8,960 drives, 26.8 PB of raw storage (approximately 19.2 PB usable), and a maximum aggregate transfer rate of 240 GB/s as measured by IOR.
- mira-fs1: Resides on 6 DDN SFA12Ke storage arrays, each containing 560 3 TB SATA hard drives, for a total of 3,360 drives, 10.1 PB of raw storage (approximately 7.2 PB usable), and a maximum aggregate transfer rate of 90 GB/s as measured by IOR.
ALCF computing resources, including Cooley, all mount these file systems, avoiding the need to copy files from one file system to another for different purposes.
Projects are mapped to the file system that best matches their storage and performance needs.
Mira, Cetus, and Vesta have been decommissioned. For additional details and instructions on how to transfer data, please visit: https://www.alcf.anl.gov/support-center/miracetusvesta/decommissioning-mira
ALCF computing resources share three 10,000-slot tape libraries using LTO6 and LTO8 tape technology. The LTO tape drives have built-in hardware compression with compression ratios typically between 1.25:1 and 2:1, depending on the data, giving an effective capacity of ~65 PB.
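As a rough illustration of how such an effective-capacity figure combines cartridge count, native capacity, and hardware compression, the sketch below multiplies the three factors. The cartridge count and the use of LTO6's 2.5 TB native capacity are illustrative assumptions, not the actual mix of media in the ALCF libraries:

```python
def effective_capacity_pb(cartridges: int, native_tb: float, compression_ratio: float) -> float:
    """Effective tape capacity in PB: cartridge count x native capacity x compression ratio."""
    return cartridges * native_tb * compression_ratio / 1000.0  # 1000 TB = 1 PB

# Illustrative only: 10,000 LTO6 cartridges (2.5 TB native) at 1.25:1 compression.
print(effective_capacity_pb(10_000, 2.5, 1.25))  # 31.25
```

At the upper 2:1 compression ratio, the same hypothetical library would hold 50 PB, which is why the quoted effective capacity depends so strongly on how compressible the archived data is.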
Networking is the fabric that ties all of the ALCF’s computing systems together. The Blue Gene/Q systems have an internal proprietary network for communicating between nodes. InfiniBand enables communication between the I/O nodes and the storage system. Ethernet is used for external user access, and for maintenance and management of the systems.
The ALCF’s Blue Gene/Q systems connect to other research institutions using a total of 100 Gb/s of public network connectivity. Scientists can transfer datasets to and from other institutions over fast research networks such as the Energy Science Network (ESnet) and Internet2.
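To get a feel for what 100 Gb/s of connectivity means for dataset movement, the back-of-the-envelope sketch below converts a dataset size and link rate into transfer time. The dataset size and the assumption of running at full line rate are illustrative, not measured figures:

```python
def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 1.0) -> float:
    """Hours to move dataset_tb terabytes over a link_gbps link at the given utilization."""
    bits = dataset_tb * 1e12 * 8                      # dataset size in bits
    seconds = bits / (link_gbps * 1e9 * efficiency)   # time at the effective rate
    return seconds / 3600.0

# Illustrative: 100 TB over the full 100 Gb/s of connectivity.
print(round(transfer_hours(100, 100), 2))  # 2.22
```

In practice, achieved throughput over wide-area research networks depends on the tool, tuning, and competing traffic, so the `efficiency` factor would typically be well below 1.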