"HEP Workload Optimization For Parallel Batch Oriented HPC Systems"


EMSL Project ID
49671

Abstract

The High Energy Physics (HEP) computing requirements for current and future experiments require a considerable investment from DOE in hardware infrastructure. The growing costs are driven by the increasing volume of data and the CPU time required to process it and to generate Monte Carlo (MC) samples. Belle II Monte Carlo jobs have successfully run on the Cascade supercomputer located at PNNL. Although single-core HEP jobs can run on existing HPC resources as "backfill", they do not schedule optimally. We will explore two potential ways to improve the performance of HEP workloads on HPC systems: (1) adapting HEP computing frameworks (e.g., Geant4) to many-core processing architectures, including current coprocessor cards, and (2) fitting HEP jobs into backfill windows on multicore and many-core HPC compute nodes. This work will help meet the DOE Office of Science objective of increasing the use of existing ASCR resources for SC computation.
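
To illustrate the second approach, the following is a minimal sketch (in Python, not part of the project deliverables) of fitting single-core MC jobs into backfill windows by greedy first-fit on idle cores and the time remaining before the next reservation. The window and job data, and all names used, are hypothetical; a real implementation would obtain backfill availability from the batch scheduler rather than from hard-coded values.

    # Illustrative sketch only: greedy first-fit placement of single-core HEP
    # jobs into backfill windows. Windows and jobs are hypothetical examples;
    # a real workflow would query the batch scheduler for actual availability.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BackfillWindow:
        node: str
        free_cores: int
        minutes_until_next_reservation: int
        assigned: List[str] = field(default_factory=list)

    @dataclass
    class MCJob:
        name: str
        cores: int          # single-core HEP jobs use 1
        est_minutes: int    # estimated wall time

    def fit_jobs(windows: List[BackfillWindow], jobs: List[MCJob]) -> List[MCJob]:
        """Place each job into the first window with enough idle cores and time.

        Returns the jobs that could not be placed in any window.
        """
        unplaced = []
        for job in jobs:
            for w in windows:
                if (w.free_cores >= job.cores
                        and w.minutes_until_next_reservation >= job.est_minutes):
                    w.free_cores -= job.cores
                    w.assigned.append(job.name)
                    break
            else:
                unplaced.append(job)
        return unplaced

    if __name__ == "__main__":
        windows = [BackfillWindow("node01", free_cores=4, minutes_until_next_reservation=90),
                   BackfillWindow("node02", free_cores=2, minutes_until_next_reservation=30)]
        jobs = [MCJob("belle2_mc_0001", cores=1, est_minutes=60),
                MCJob("belle2_mc_0002", cores=1, est_minutes=45),
                MCJob("belle2_mc_0003", cores=1, est_minutes=25)]
        leftover = fit_jobs(windows, jobs)
        for w in windows:
            print(w.node, "->", w.assigned)
        print("unplaced:", [j.name for j in leftover])

In this toy setting, each job is matched to the first node whose idle cores and remaining backfill time can accommodate it; jobs that fit no window are reported as unplaced and would wait for the next scheduling cycle.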

Project Details

Start Date
2016-11-01
End Date
2017-09-30
Status
Closed

Team

Principal Investigator

David Cowley
Institution
Environmental Molecular Sciences Laboratory

Team Members

Jan Strube
Institution
Pacific Northwest National Laboratory

James Czebotar
Institution
Environmental Molecular Sciences Laboratory

Kenneth Schmidt
Institution
Environmental Molecular Sciences Laboratory