
"HEP Workload Optimization For Parallel Batch Oriented HPC Systems"


EMSL Project ID
49148

Abstract

The High Energy Physics (HEP) computing requirements for current and future experiments demand a considerable investment from DOE in hardware infrastructure. The growing costs are driven by the increasing volume of data and the CPU time needed to process it and to generate Monte Carlo (MC) samples. During the SC14 HEP demo, Belle II Monte Carlo jobs ran successfully on the Cascade supercomputer located at PNNL. Although single-core HEP jobs can run on existing HPC resources as “backfill”, they do not schedule optimally. We will explore two potential ways to improve the performance of HEP workloads on HPC systems: 1) adapting HEP computing frameworks (e.g., Geant4) to many-core processing architectures, including current coprocessor cards, and 2) fitting HEP jobs to backfill windows on multicore and many-core HPC compute nodes.
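
The abstract does not describe how jobs would be matched to backfill windows; the sketch below is only an illustration of the second approach, greedily placing single-core MC jobs into idle node-time slots. All job names, runtimes, and window sizes are hypothetical; a real workflow would obtain window estimates from the batch scheduler rather than hard-coding them.

```python
# Illustrative sketch only: greedily fit single-core HEP jobs into backfill
# windows (idle time on a node before its next large reservation). Each core
# is assumed to host at most one job for the duration of the window.

from dataclasses import dataclass, field


@dataclass
class BackfillWindow:
    node: str
    cores: int
    seconds_free: int
    assigned: list = field(default_factory=list)


def pack_jobs(jobs: dict, windows: list) -> list:
    """Place jobs (name -> estimated runtime in seconds) into windows,
    longest jobs first; return the names of jobs that did not fit."""
    unplaced = []
    for name, runtime in sorted(jobs.items(), key=lambda kv: -kv[1]):
        for w in windows:
            # A job fits if the window lasts long enough and a core is free.
            if runtime <= w.seconds_free and len(w.assigned) < w.cores:
                w.assigned.append(name)
                break
        else:
            unplaced.append(name)
    return unplaced


if __name__ == "__main__":
    # Hypothetical windows and Belle II MC jobs for demonstration.
    windows = [BackfillWindow("node01", cores=16, seconds_free=3600),
               BackfillWindow("node02", cores=16, seconds_free=900)]
    jobs = {"belle2_mc_001": 3000, "belle2_mc_002": 800, "belle2_mc_003": 5000}
    leftover = pack_jobs(jobs, windows)
    for w in windows:
        print(w.node, w.assigned)
    print("did not fit:", leftover)
```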

Project Details

Start Date
2015-10-26
End Date
2016-09-30
Status
Closed

Team

Principal Investigator

David Cowley
Institution
Environmental Molecular Sciences Laboratory

Team Members

Jan Strube
Institution
Pacific Northwest National Laboratory

James Czebotar
Institution
Environmental Molecular Sciences Laboratory

Kenneth Schmidt
Institution
Environmental Molecular Sciences Laboratory