Celeritas¶
The Celeritas project implements HEP detector physics on GPU accelerator hardware with the ultimate goal of supporting the massive computational requirements of the HL-LHC upgrade.
- Release: *unknown version*
- Date: Jun 13, 2022
Overview¶
High Energy Physics (HEP) is entering an exciting era of potential scientific discovery. There is now overwhelming evidence that the Standard Model (SM) of particle physics is incomplete. A targeted program, as recommended by the Particle Physics Project Prioritization Panel (P5), has been designed to reveal the nature and origin of physics Beyond the Standard Model (BSM). Two of the flagship projects are the upcoming high-luminosity upgrade of the Large Hadron Collider (HL-LHC) with its four main detectors, and the Deep Underground Neutrino Experiment (DUNE) at the Sanford Underground Research Facility (SURF) and Fermi National Accelerator Laboratory (Fermilab). Only by comparing these detector results to detailed Monte Carlo (MC) simulations can new physics be discovered.

The quantity of simulated MC data must be many times that of the experimental data, both to reduce the influence of statistical effects and to study the detector response over the very large phase space of new phenomena. Additionally, the increased complexity, granularity, and readout rate of the upgraded detectors require the most accurate, and thus most compute-intensive, physics models available. However, projections of the computing capacity available in the coming decade fall far short of the estimated capacity needed to fully analyze the data from the HL-LHC. The contribution of full MC detector simulation to this estimate is based on the performance of the current state-of-the-art, LHC-baseline MC application Geant4, a threaded CPU-only code whose performance has stagnated as growth in conventional processor clock rates and core counts has slowed.
General-purpose accelerators offer far higher performance per watt than Central Processing Units (CPUs). Graphics Processing Units (GPUs) are the most common such devices and have become commodity hardware at the U.S. Department of Energy (DOE) Leadership Computing Facilities (LCFs) and other institutional-scale computing clusters. However, adapting scientific codes to run effectively on GPU hardware is nontrivial: the difficulty stems both from core algorithmic properties of the physics and from implementation choices accumulated over the history of an existing scientific code. The high sensitivity of GPUs to memory access patterns, thread divergence, and device occupancy makes effective adaptation of MC physics algorithms especially challenging.
Our objective is to advance and mature the new GPU-optimized code Celeritas [1] to run full-fidelity MC simulations of LHC detectors. The primary goal of the Celeritas project is to maximize utilization of HEP computing facilities and the DOE LCFs to extract the ever-so-subtle signs of new physics. It aims to reduce the computational demand of the HL-LHC to meet the available supply, using the advanced architectures that will form the backbone of high performance computing (HPC) over the next decade. Enabling HEP science at the HL-LHC will require MC detector simulation that executes the latest and best physics models and achieves high performance on accelerated hardware.
[1] This documentation is generated from Celeritas *unknown release*.
Infrastructure¶
Celeritas is built using modern CMake. It has multiple dependencies to operate as a full-featured code, but each dependency can be individually disabled as needed.
Installation¶
This project requires external dependencies to build with full functionality. However, any combination of these requirements can be omitted to enable limited development on personal machines with fewer available components.
Component | Category | Description
---|---|---
CUDA | Runtime | GPU computation
Geant4 | Runtime | Preprocessing physics data for a problem input
G4EMLOW | Runtime | EM physics model data
HepMC3 | Runtime | Event input
HIP | Runtime | GPU computation
nljson | Runtime | Simple text-based I/O for diagnostics and program setup
OpenMP | Runtime | Shared-memory parallelism
ROOT | Runtime | Input and output
SWIG | Runtime | Low-level Python wrappers
VecGeom | Runtime | On-device navigation of GDML-defined detector geometry
Breathe | Docs | Generating code documentation inside user docs
Doxygen | Docs | Code documentation
Sphinx | Docs | User documentation
sphinxbib | Docs | Reference generation for user documentation
clang-format | Development | Code formatting enforcement
CMake | Development | Build system
Git | Development | Repository management
GoogleTest | Development | Test harness
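Each optional component can typically be toggled at configure time. As a sketch, assuming the options follow a `CELERITAS_USE_<Package>` naming pattern (check the project's top-level CMakeLists.txt for the authoritative option names):

```shell
# Configure a CPU-only build without VecGeom, keeping Geant4 data preprocessing.
# The CELERITAS_USE_* option names are assumptions, not verified flags.
cmake -S . -B build \
    -DCELERITAS_USE_CUDA=OFF \
    -DCELERITAS_USE_VecGeom=OFF \
    -DCELERITAS_USE_Geant4=ON
cmake --build build
```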
Downstream usage as a library¶
The Celeritas library is most easily used when your downstream app is built with CMake. It should require a single line to initialize:
find_package(Celeritas REQUIRED CONFIG)
and, if either VecGeom or CUDA is disabled, a single line to link:
target_link_libraries(mycode PUBLIC Celeritas::Core)
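Putting the two lines together, a minimal downstream project (for a build without the CUDA+VecGeom combination) might look like this sketch; the project and target names are placeholders:

```cmake
cmake_minimum_required(VERSION 3.18)
project(mycode LANGUAGES CXX)

# Locate an installed Celeritas; point CMAKE_PREFIX_PATH at its install prefix
find_package(Celeritas REQUIRED CONFIG)

add_executable(mycode main.cc)
target_link_libraries(mycode PUBLIC Celeritas::Core)
```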
Because of complexities involving CUDA relocatable device code (RDC), linking when using both CUDA and VecGeom requires an additional include and the replacement of target_link_libraries with a customized version:
include(CeleritasLibrary)
celeritas_target_link_libraries(mycode PUBLIC Celeritas::Core)
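A downstream project intended to build against either configuration could branch on how Celeritas was built. This is only a sketch: it assumes the Celeritas CMake package exports `CELERITAS_USE_CUDA` and `CELERITAS_USE_VecGeom` variables, which should be verified against the installed package config.

```cmake
find_package(Celeritas REQUIRED CONFIG)
add_executable(mycode main.cc)

if(CELERITAS_USE_CUDA AND CELERITAS_USE_VecGeom)
  # RDC-aware linking helper provided by Celeritas
  include(CeleritasLibrary)
  celeritas_target_link_libraries(mycode PUBLIC Celeritas::Core)
else()
  target_link_libraries(mycode PUBLIC Celeritas::Core)
endif()
```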
Developing¶
See the Development Guide section for additional development guidelines.
API documentation¶
Note
The breathe extension was not used when building this version of the documentation. The API documentation will not be rendered below.
The Celeritas codebase lives under the src/ directory and is divided into three packages. Additional top-level files provide access to version and configuration attributes.
Core package¶
The corecel directory contains functionality shared by Celeritas and ORANGE, primarily pertaining to GPU abstractions.
Fundamentals¶
System¶
Containers¶
Math, numerics, and algorithms¶
I/O¶
ORANGE¶
The ORANGE (Oak Ridge Advanced Nested Geometry Engine) package is currently under development as the version in SCALE is ported to GPU.
Celeritas¶
Problem definition¶
Transport interface¶
On-device access¶
Examples¶
TODO: example applications using Celeritas.
References¶
Note
The sphinxbib extension was not used when building this version of the documentation. References will not be generated.
Acknowledgments¶
This research was supported by the Exascale Computing Project (17-SC-20-SC), a joint project of the U.S. Department of Energy’s Office of Science and National Nuclear Security Administration, responsible for delivering a capable exascale ecosystem, including software, applications, and hardware technology, to support the nation’s exascale computing imperative.
This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
Development Guide¶
TODO: update guide <https://github.com/celeritas-project/celeritas/wiki/Development> and inline here.
License¶
Celeritas is copyrighted and licensed under the following terms and conditions.
Code¶
Intellectual Property Notice¶
Celeritas is licensed under the Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0) or the MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT), at your option.
Copyrights and patents in the Celeritas project are retained by contributors. No copyright assignment is required to contribute to Celeritas.
SPDX usage¶
Individual files contain SPDX tags instead of the full license text. This enables machine processing of license information based on the SPDX License Identifiers that are available here: https://spdx.org/licenses/
Files that are dual-licensed as Apache-2.0 OR MIT contain the following text in the license header:
SPDX-License-Identifier: (Apache-2.0 OR MIT)
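For illustration, the header comment of a dual-licensed C++ source file might begin as follows; the copyright wording here is illustrative, not the project's exact header:

```cpp
//---------------------------------------------------------------------------//
// Copyright: see the top-level COPYRIGHT file for details
// SPDX-License-Identifier: (Apache-2.0 OR MIT)
//---------------------------------------------------------------------------//
```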
Documentation¶
Intellectual Property Notice¶
Celeritas documentation is licensed under the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/legalcode).
Copyrights and patents in the Celeritas project are retained by contributors. No copyright assignment is required to contribute to Celeritas.
SPDX usage¶
Individual files contain SPDX tags instead of the full license text. This enables machine processing of license information based on the SPDX License Identifiers that are available here: https://spdx.org/licenses/
Files that are licensed as CC-BY-4.0 contain the following text in the license header:
SPDX-License-Identifier: CC-BY-4.0
Additional licenses¶
Small portions of Celeritas are derived from other open source projects.