HDF5 is a data model, file format, and I/O library that has become a de facto standard for HPC applications, providing scalable I/O and storage for big data from computer modeling, large physics experiments, and observations. This talk offers a comprehensive overview of HDF5 for anyone who works with big data in an HPC environment. The talk consists of two parts. Part I introduces the HDF5 data model and the APIs for organizing data and performing I/O. Part II focuses on advanced HDF5 features such as parallel I/O and gives an overview of parallel HDF5 tuning techniques, including collective metadata I/O, data aggregation, asynchronous I/O, and parallel compression, along with other new HDF5 features that help utilize HPC storage to its fullest potential.
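As a minimal illustration of the data model mentioned above (not material from the talk itself), HDF5 organizes data into groups (directory-like containers), datasets (typed, multi-dimensional arrays), and attributes (small metadata attached to groups or datasets). The sketch below assumes the third-party h5py and numpy packages; the file and object names are made up for the example.

```python
# Hedged sketch of the HDF5 data model via h5py: groups, datasets, attributes.
# Assumes h5py and numpy are installed; names here are illustrative only.
import h5py
import numpy as np

with h5py.File("simulation.h5", "w") as f:
    grp = f.create_group("timestep_0")                 # group: a namespace
    dset = grp.create_dataset("pressure",
                              data=np.arange(8.0))     # dataset: an array
    dset.attrs["units"] = "Pa"                         # attribute: metadata

with h5py.File("simulation.h5", "r") as f:
    dset = f["timestep_0/pressure"]
    print(dset.shape, dset.attrs["units"])             # (8,) Pa
```

The same hierarchy is what the parallel features in Part II operate on; in parallel HDF5, the file would instead be opened with an MPI communicator so many ranks can read and write it collectively.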
M. Scot Breitenfeld received his B.S. in Aerospace Engineering from the University of Arizona and his M.S. in Aerospace Engineering from the University of Illinois at Urbana-Champaign. His master's thesis focused on modeling elastodynamic failure at bi-material interfaces using a spectral method. He has worked for the National Center for Supercomputing Applications (NCSA) on parallel structural code development and on evaluating the parallel performance of commercial finite element software.
Additionally, he worked on a DARPA-funded smart mesoflaps for aeroelastic transpiration project, parallelizing the CFD solver and coupling the fluid and structural solvers in parallel. He also worked for the Center for Simulation of Advanced Rockets (CSAR) at the University of Illinois at Urbana-Champaign, where he was the principal developer of Rocfrac, the explicit parallel 3D ALE fracture and structural/thermal finite element solver within the center's multi-physics code RocStar. He received his Ph.D. in Aerospace Engineering from the University of Illinois at Urbana-Champaign, working on modeling quasi-static fracture using non-ordinary state-based peridynamics. He is currently an HDF application support specialist and software engineer at The HDF Group.