Taking HACC into the Exascale Era: New Code Capabilities and Challenges
Series: HPC Best Practices Webinars
HACC (Hardware/Hybrid Accelerated Cosmology Code) is a well-established code within the US Department of Energy community, with a long history of running on every flagship computing system for over a decade. Because HACC often participates in early-access programs for upcoming systems, an ongoing challenge for its developers is to contend not only with state-of-the-art architectures but also with their initially supported, and often novel, programming models. The increased computing power of today’s exascale systems has allowed HACC to support additional baryonic physics through a newly developed Smoothed Particle Hydrodynamics (SPH) formalism called Conservative Reproducing Kernel (CRK). This webinar will discuss the challenges of preparing HACC for multiple exascale systems while simultaneously adding new code capabilities, all with a central focus on performance.
- Esteban Rangel (Argonne National Laboratory)
Esteban Rangel is a member of the HACC development team. He joined the Computational Science (CPS) division at Argonne National Laboratory as an Assistant Computational Scientist in 2021. Prior to joining CPS, he was a postdoctoral researcher at the Argonne Leadership Computing Facility (ALCF), working on porting HACC’s hydrodynamics solvers to the Aurora supercomputer. He began contributing to the HACC codebase as a Ph.D. student at Northwestern University, where much of his thesis work involved designing and implementing scalable analysis software for N-body cosmological simulations.