The ALCF’s data storage system is used to retain the data generated by simulations and visualizations. Disk storage provides intermediate-term storage for active projects, offering a means to access, analyze, and share simulation results. Tape storage is used to archive data from completed projects.
Mira Disk Storage
The ALCF has two GPFS file systems for bulk data storage:
- mira-fs0: resides on 16 DDN SFA12Ke storage arrays, each containing 560 3 TB SATA hard drives, for a total of 8,960 disk drives with a raw capacity of 26.8 PB (approximately 19.2 PB usable) and a maximum aggregate transfer rate of 240 GB/s as measured by IOR
- mira-fs1: resides on 6 DDN SFA12Ke storage arrays, each containing 560 3 TB SATA hard drives, for a total of 3,360 disk drives with a raw capacity of 10.1 PB (approximately 7.2 PB usable) and a maximum aggregate transfer rate of 90 GB/s as measured by IOR
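The raw-capacity figures above follow directly from the array and drive counts. A quick back-of-the-envelope check, assuming decimal units (1 PB = 1,000 TB), which storage vendors typically use:

```python
# Sanity check of the raw-capacity figures quoted above.
# Assumes decimal terabytes/petabytes (1 PB = 1000 TB).

def raw_capacity_pb(arrays, drives_per_array, drive_tb):
    """Total raw capacity in PB for a set of identical storage arrays."""
    return arrays * drives_per_array * drive_tb / 1000

fs0 = raw_capacity_pb(16, 560, 3)  # mira-fs0: 16 arrays x 560 drives x 3 TB
fs1 = raw_capacity_pb(6, 560, 3)   # mira-fs1:  6 arrays x 560 drives x 3 TB

print(f"mira-fs0: {fs0:.2f} PB raw")  # 26.88 PB (quoted as 26.8 PB)
print(f"mira-fs1: {fs1:.2f} PB raw")  # 10.08 PB (quoted as 10.1 PB)
```

The small differences from the quoted figures are just rounding in the published numbers.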
Mira, Cetus, and Cooley all mount these file systems, avoiding the need to copy files from one file system to another for different purposes.
Projects are mapped to the file system that best matches their storage and performance needs.
Mira, Cetus, and Vesta are being decommissioned. For additional details and instructions on how to transfer data, please visit: https://www.alcf.anl.gov/support-center/miracetusvesta/decommissioning-mira
Theta Disk Storage
The ALCF has a Lustre file system and a GPFS file system for bulk data storage:
- theta-fs0: a Lustre file system residing on a Sonexion 3000 storage array, with a usable capacity of 9.2 PB and an aggregate transfer rate of 210-240 GB/s
- theta-fs1: a GPFS file system residing on an Elastic Storage System (ESS) cluster, with a usable capacity of 7.9 PB and an aggregate transfer rate of 400 GB/s
Tape Storage
ALCF computing resources share three 10,000-slot libraries using LTO6 and LTO8 tape technology. The LTO tape drives have built-in hardware compression with compression ratios typically between 1.25:1 and 2:1, depending on the data, giving an effective capacity of approximately 65 PB.
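Effective tape capacity combines cartridge count, native cartridge capacity, and the achieved compression ratio. A minimal sketch of that arithmetic, assuming standard native capacities (2.5 TB for LTO-6, 12 TB for LTO-8) and an illustrative cartridge count rather than the actual library population:

```python
# Effective capacity = cartridges x native capacity x compression ratio.
# The cartridge count below is an illustrative assumption, not the
# actual ALCF configuration; native capacities are the LTO standards.

LTO6_NATIVE_TB = 2.5   # LTO-6 native (uncompressed) capacity
LTO8_NATIVE_TB = 12.0  # LTO-8 native (uncompressed) capacity

def effective_pb(cartridges, native_tb, compression):
    """Effective capacity in PB for one tape pool at a given compression."""
    return cartridges * native_tb * compression / 1000

# e.g. a 10,000-slot library fully populated with LTO-8 cartridges:
low = effective_pb(10_000, LTO8_NATIVE_TB, 1.25)   # conservative 1.25:1
high = effective_pb(10_000, LTO8_NATIVE_TB, 2.0)   # optimistic 2:1
print(f"{low:.0f}-{high:.0f} PB effective")        # 150-240 PB
```

Because the achieved ratio depends entirely on how compressible the data is, effective-capacity figures like the ~65 PB above are estimates, not guarantees.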
Networking
Networking is the fabric that ties all of the ALCF's computing systems together. The Blue Gene/Q systems have an internal proprietary network for communicating between nodes. InfiniBand enables communication between the I/O nodes and the storage system. Ethernet is used for external user access, and for maintenance and management of the systems.
The ALCF’s Blue Gene/Q systems connect to other research institutions using a total of 100 Gb/s of public network connectivity. Scientists can transfer datasets to and from other institutions over fast research networks such as the Energy Sciences Network (ESnet) and Internet2.
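For planning bulk transfers over such links, a rough lower bound on transfer time follows from dataset size and link bandwidth. A minimal sketch, assuming the wide-area link is the bottleneck (the efficiency factor is a placeholder; real throughput also depends on protocol overhead, tuning, and storage-system speed):

```python
# Rough time to move a dataset over a network link, assuming the link
# is the bottleneck. The 80% efficiency default is an illustrative
# assumption, not a measured figure.

def transfer_time_hours(dataset_tb, link_gbps, efficiency=0.8):
    """dataset_tb: decimal terabytes; link_gbps: gigabits per second."""
    bits = dataset_tb * 1e12 * 8              # dataset size in bits
    rate = link_gbps * 1e9 * efficiency       # achievable bits per second
    return bits / rate / 3600

# e.g. a 100 TB dataset over the full 100 Gb/s at 80% efficiency:
print(f"{transfer_time_hours(100, 100):.1f} h")  # 2.8 h
```

In practice a single transfer rarely gets the whole 100 Gb/s, so tools that stripe a transfer across parallel streams are the norm for datasets at this scale.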