2024 Outstanding Achievement Award Winners

The ATLAS Outstanding Achievement Award winners for 2024 are:

  • Luca Canali (CERN) for outstanding contributions to the ATLAS database infrastructure.

  • Jackson Barr (University College London), Alexander Froch (Albert-Ludwigs-Universität Freiburg), Philipp Gadow (CERN), Dan Guest (Humboldt University Berlin), Nilotpal Kakati (Weizmann Institute of Science), Dmitrii Kobylianskii (Weizmann Institute of Science), Nikita Ivvan Pond (University College London), Samuel Van Stroud (University College London) for outstanding contributions to heavy flavour tagging algorithms based on Graph Neural Networks.

  • Liang Guan (Michigan), Ioannis Mesolongitis (University of West Attica), Michelle Solis (University of Arizona), Aaron White (Harvard University) for understanding the problem of randomly dropping e-links in the NSW and for finding a very effective mitigation for this problem.

  • Jakub Kremer (DESY), Agnieszka Ogrodnik (Charles University), Martin Rybar (Charles University) for outstanding contributions to the Heavy Ions operation and trigger.

  • Koji Nakamura (KEK), Hideyuki Oide (KEK), Manabu Togawa (KEK) for outstanding contributions to the ITk Pixel project in sensor production, hybridisation and module assembly.

  • Johannes Junggeburth (University of Massachusetts Amherst), Patrick Scholer (Carleton University) for outstanding contributions to the Run 3 muon software.

  • Sara Alderweireldt (University of Edinburgh), Rafal Bielski (University of Oregon), Francesco Giuli (CERN), Ralf Gugel (University of Mainz), Claudia Merlassino (University of Udine), Stefanie Morgenstern (CERN), Gabriel Palacino (Indiana University), Aleksandra Poreba (CERN), Antonia Strubig (Stockholm University), Daniele Zanzi (Albert-Ludwigs-Universität Freiburg) for outstanding contributions to the Trigger operation.

  • Julien Maurer (Bucharest IFIN-HH) for outstanding contributions to the ATLAS prompt reconstruction operation.

  • Anthony Affolder (UC Santa Cruz), Ian Dyckes (Berkeley LBNL), Vitaliy Fadeyev (UC Santa Cruz), Cole Helling (University of British Columbia, Vancouver), Jacob Wayne Johnson (UC Santa Cruz), Matthew Kurth (Beijing IHEP), Masahiro Morii (Harvard University), Peter Phillips (Rutherford Appleton Laboratory), Luise Poley (TRIUMF), Craig Sawyer (Rutherford Appleton Laboratory) for outstanding contributions to the identification of the vibrational source of cold noise on ITk Strip modules.

See the News Article on the 2024 Awards

Collaborating with ATLAS

How to collaborate with ATLAS

There are two ways for non-ATLAS scientists to collaborate with ATLAS members. These mechanisms allow the sharing of ATLAS data and/or internal results with specified individuals outside of the collaboration.

  1. Short-term association (STA): Non-ATLAS scientists benefiting from this status participate in the analysis work, potentially including access to ATLAS collision and Monte Carlo simulated data. The work typically leads to a public result in the form of a paper or public note. Depending on the type and amount of work done by the non-ATLAS scientists, the contribution is acknowledged by a reference, an acknowledgement, or co-authorship. A short-term association is proposed by the Physics Coordinator and the Spokesperson, and requires endorsement by the ATLAS Executive Board.
  2. Interactions with authors of Monte Carlo generators and related tools (MCI): In specific cases it may be necessary to discuss internal plots and results using collision and/or simulated data with authors of MC generators or of related phenomenological tools. They may be involved in discussions of results not yet approved for presentation outside ATLAS, but approved for this specific purpose by the Physics Coordinator. These persons do not become authors of ATLAS papers, and they have no access to ATLAS data for analysis or to ATLAS internal web pages. If they have made significant contributions, they may be acknowledged in one or more papers, or may become authors of PUB notes. Such interactions are managed by the Physics Coordinator; no ATLAS Executive Board endorsement is required.

Learn more

Short-Term Association with ATLAS (STA)

ATLAS members may wish to collaborate with individuals or groups that are not members of the ATLAS Collaboration on specific ATLAS-related scientific or technical topics, potentially requiring access to ATLAS collision and/or simulated data by the non-ATLAS scientist. These collaborations will typically lead to a dedicated ATLAS paper or public note. Examples include:

  • Theorists wishing to collaborate on a dedicated physics study.
  • Computer scientists wishing to collaborate on a technical study, for instance, related to machine learning.
  • LHC accelerator physicists collaborating to combine machine and ATLAS measurements to establish a precise estimate of beam properties.

Guidelines

Short-term associations shall normally be limited to cases where an external person or group brings specific expertise or new research ideas to ATLAS.

The decision on whether a formal short-term association should be undertaken rests with the ATLAS Executive Board, acting on a recommendation from the ATLAS Spokesperson. The Spokesperson is expected to establish that there is support for the recommendation within the ATLAS Collaboration by the following procedure:

  1. A written proposal shall be prepared by the proponents explaining the expertise of the STA candidate(s), whether direct access to ATLAS data is required, and what the benefit is for the ATLAS Collaboration.
  2. The proposal shall be discussed and agreed upon with the corresponding group convener, Physics Coordinator, and Spokesperson.
  3. If Step 2 is passed, the ATLAS Spokesperson shall circulate the proposal to the Executive Board for endorsement.
  4. The Spokesperson reports newly approved short-term associations to the Collaboration Board at its next meeting.

An approved short-term association is only valid for the work described in the written proposal. Any new project involving the same person or group, or a significant amendment of the original project, requires a new short-term association following the above procedure. A short-term association ends with the completion of the work.

Depending on the type and amount of work done by the non-ATLAS scientist, the contribution is acknowledged by a reference, an acknowledgement, or co-authorship of the document(s) the non-ATLAS scientist contributed to.

The Physics Coordinator regularly updates the Executive Board about the status of ongoing and recently concluded short-term associations.

Interactions with authors of Monte Carlo event generators and related tools (MCI)

As part of regular ATLAS Monte Carlo (MC) production, commissioning of new event generator setups, or during a specific physics analysis, extensive comparisons of ATLAS data with the predictions of MC event generators are performed, and generator parameters are tuned to data. Interactions with the authors of MC event generators or related phenomenological tools (referred to as "MC authors" below) can be useful to speed up the process of understanding the data and the generator modeling. This may also require a prolonged collaboration between authors of a specific generator and ATLAS collaborators. In this context, it can be important to discuss internal plots and results on collision and/or simulated data with MC authors. The procedure described here addresses such cases.

Guidelines

A better understanding of some processes being measured by ATLAS may benefit from early interactions (i.e., before results are published) with certain MC authors; such discussions may entail the need to show ATLAS-internal results. Physics groups can request permission from the Physics Coordinator to do so with selected MC authors; the decision to allow this special status should be made in consultation with the Physics Modeling Group (PMG). Such authorization for interactions with MC authors is granted for a period of up to two years, which may be renewed as necessary. The PMG conveners and relevant subgroup conveners should be informed regularly about the progress of the interaction.

Preliminary generator-level results produced with the ATLAS framework and/or ATLAS simulated data can be discussed confidentially with the MC authors. The plots and results discussed with the MC authors must have been endorsed by the Physics Coordinator if they contain unpublished ATLAS data, to ensure that the results are solid and stable enough, and that they are not of a sensitive nature. In such cases, the MC authors with whom discussions about internal material are undertaken are formally asked (through an email from the Physics Coordinator or a delegate) to maintain confidentiality about these discussions. In no case will direct access to ATLAS data be given.

The Physics Coordinator maintains a record of ongoing MCI projects. That record is made available to the collaboration.

MC authors interacting with ATLAS may be acknowledged in the relevant paper(s) if they have made significant contributions. Exceptional authorship may be granted for PUB notes. If, along the course of the interaction, it becomes clear that exceptional authorship will be requested for a CONF note or a paper, a Short-Term Association (STA) should be requested.

Interested in collaborating?

Contact: Martin Aleksa

2023 Thesis Award Winners


ATLAS / Intel® Case Study

Addressing the challenge of data processing at HL-LHC

The ATLAS Experiment Implements Heterogeneous Particle Reconstruction with Intel® oneAPI Tools

Using Intel® oneAPI tools, the ATLAS Experiment at the Large Hadron Collider is on track to achieve many-fold performance improvements using multi-architecture CPU+GPU systems in processing future data from the detector.

The ATLAS Experiment

The ATLAS Experiment is a general-purpose particle physics experiment at the Large Hadron Collider (LHC). Its goal is to understand the nature of the smallest building blocks of matter in our universe by studying collisions of protons and heavy nuclei at the high-energy frontier. These high-energy collisions also recreate conditions that would have existed mere moments after the Big Bang.

A simplified view of the components of the ATLAS Detector. (Image: ATLAS Collaboration/CERN)

Track Reconstruction in Particle Physics

Charged-particle track reconstruction is one of the most computationally challenging steps in analysing the data recorded by the ATLAS detector. It is the process of identifying the groups of measurements in different parts of the detector that came from energy deposits left by a single particle, and then calculating the physical properties of that particle from the path it took through the detector material and the non-homogeneous magnetic field. Tracking is a complex combinatorial task, described in more detail by the A Common Tracking Software (Acts) project, that in some cases assigns hundreds of thousands of measurements to thousands of particles. The challenge will grow even further in the coming years: the High-Luminosity LHC era will bring a large increase in the proton-proton collision rate, allowing the total dataset to be increased by an order of magnitude.

The High-Luminosity LHC Challenge

The current algorithms used to analyse ATLAS’s data do not scale well to the data expected to arrive in the future. Without significant improvements in the experiment’s data analysis software, execution times would become unacceptably high.

ATLAS data processing time as a function of proton-proton collisions per event (Image: ATLAS Collaboration/CERN)

This is demonstrated by the plot above. It shows the time taken by the ATLAS Experiment's existing algorithms to reconstruct events with 20-90 proton-proton collisions per proton-bunch crossing. The LHC currently delivers events with ~60 simultaneous proton-proton collisions to ATLAS; these require tens of seconds to reconstruct on a single CPU thread with the current reconstruction algorithms. During the HL-LHC era, up to 200 proton-proton collisions are expected per LHC event. With the algorithmic approach used so far, processing each of those events would require more than 10 times the processing power currently in use.

In order to achieve the ATLAS experiment’s physics programme in a sustainable way, significant changes and improvements to its data processing are under study.

oneAPI multi-architecture programming, with a native C++ API that fits well into ATLAS's software framework, is one of the most advantageous methods for making use of hardware accelerators such as GPUs in ATLAS's software.
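As a concrete illustration of this programming model, here is a minimal, self-contained SYCL sketch (not taken from the ATLAS or ACTS codebases; the kernel and data are purely illustrative) showing how one C++ source file can run the same kernel on a CPU or a GPU, depending on which device the runtime selects:

```cpp
// Illustrative SYCL example: the same standard C++ source runs on whichever
// device (CPU or GPU) the SYCL runtime selects. Not ATLAS/ACTS code.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    sycl::queue q;  // bound to the default device at run time
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    const std::size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    {
        // Buffers make the host data visible to the device.
        sycl::buffer<float> bufA{a.data(), sycl::range<1>{n}};
        sycl::buffer<float> bufB{b.data(), sycl::range<1>{n}};
        sycl::buffer<float> bufC{c.data(), sycl::range<1>{n}};

        // Submit a simple element-wise addition kernel.
        q.submit([&](sycl::handler& h) {
            sycl::accessor A{bufA, h, sycl::read_only};
            sycl::accessor B{bufB, h, sycl::read_only};
            sycl::accessor C{bufC, h, sycl::write_only, sycl::no_init};
            h.parallel_for(sycl::range<1>{n},
                           [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
        });
    }  // buffer destruction synchronises and copies the result back to c

    std::cout << "c[0] = " << c[0] << "\n";  // expected: 3
    return 0;
}
```

A source file like this would be built with the DPC++/C++ compiler using the -fsycl family of flags listed in the configuration section at the end of this article.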


The ACTS (A Common Tracking Software) Project

After decades in which nuclear- and particle-physics experiments wrote their track reconstruction software individually, with minimal cooperation, the ACTS project now aims to provide a general toolkit that experiments can use as the basis of their own reconstruction software. For the time being, the project's main development focuses on implementing tools for use on CPUs.

The ACTS Parallelization R&D

To foster an independent development environment in which to try new ideas, multiple R&D projects were started in 2020 with the aim of implementing the same functionality as ACTS, running on accelerators / GPUs. The code is currently developed in multiple separate software repositories; the goal of the effort is to migrate its results back into the main ACTS project. This would make GPU-accelerated charged-particle reconstruction accessible to current and future nuclear- and particle-physics experiments around the world.

The following development projects were set up as part of the ACTS Parallelization R&D:

  • acts-project/vecmem: Provides infrastructure for allocating and managing memory using standard library containers and equivalents in both host and device code (see the memory-management sketch after this list).
  • acts-project/algebra-plugins: Provides an abstraction for performing the linear algebra operations on small vectors and matrices that are required during track reconstruction. Allows a seamless switch between different linear algebra backends such as Eigen, SMatrix, and hand-written implementations.
  • acts-project/covfie: Provides a general way of storing and accessing a “vector field” in host and device code. Used for the storage of magnetic fields in the track reconstruction software.
  • acts-project/detray: Provides a compile-time polymorphic detector geometry description. This code is responsible for much of the logic needed for propagating and fitting particle tracks through/in a detector.
  • acts-project/traccc: Implements the high level algorithms for performing track reconstruction using CPUs and GPUs. This is the “primary project” of the R&D effort, which brings together all other projects into a single build of experimental libraries and executables.
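To give a flavour of the host/device memory-management problem that vecmem addresses, the sketch below uses plain SYCL unified shared memory rather than the project's actual API; the array sizes and kernel are hypothetical.

```cpp
// Illustrative sketch only: it shows the host/device memory-sharing problem
// that vecmem addresses, using plain SYCL unified shared memory (USM)
// instead of the actual vecmem containers.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    sycl::queue q;
    const std::size_t n_measurements = 100000;  // hypothetical size

    // "Shared" USM allocations are accessible from both host and device code.
    float* x = sycl::malloc_shared<float>(n_measurements, q);
    float* y = sycl::malloc_shared<float>(n_measurements, q);

    // Fill the input on the host...
    for (std::size_t i = 0; i < n_measurements; ++i) { x[i] = 0.01f * i; }

    // ...and process it on the device through the same pointers.
    q.parallel_for(sycl::range<1>{n_measurements},
                   [=](sycl::id<1> i) { y[i] = 2.0f * x[i]; })
        .wait();

    std::cout << "y[42] = " << y[42] << "\n";

    sycl::free(x, q);
    sycl::free(y, q);
    return 0;
}
```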

During code development some useful features of oneAPI’s multiarchitecture programming via SYCL were discovered.

  • Expressing asynchronous code execution in SYCL is done in a very natural way, as the API strongly encourages expressing all operations as interdependent tasks (see the sketch after this list). In many cases, extra effort had to be spent to achieve the same level of asynchronicity in CUDA code.
  • The oneAPI compiler optimises accelerated code blocks very efficiently. This led in a number of cases to binaries compiled for the NVIDIA backend that run even faster than the corresponding binaries produced from native CUDA code.
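The sketch below illustrates the task-based style mentioned in the first point, assuming a simple two-kernel workload; it is illustrative only and not code from the R&D repositories.

```cpp
// Illustrative sketch of expressing work as interdependent tasks in SYCL:
// each submission returns an event, and later submissions declare their
// dependencies explicitly, so no host-side wait is needed in between.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    sycl::queue q;
    const std::size_t n = 1 << 20;

    float* a = sycl::malloc_device<float>(n, q);
    float* b = sycl::malloc_device<float>(n, q);

    // Task 1: initialise the input on the device.
    sycl::event e1 = q.parallel_for(sycl::range<1>{n},
                                    [=](sycl::id<1> i) { a[i] = 1.0f; });

    // Task 2: runs only after task 1 has finished.
    sycl::event e2 = q.submit([&](sycl::handler& h) {
        h.depends_on(e1);
        h.parallel_for(sycl::range<1>{n},
                       [=](sycl::id<1> i) { b[i] = a[i] + 1.0f; });
    });

    // Task 3: copy the result back to the host once task 2 is done.
    std::vector<float> result(n);
    q.memcpy(result.data(), b, n * sizeof(float), e2).wait();

    sycl::free(a, q);
    sycl::free(b, q);
    return 0;
}
```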

A further general observation from the ACTS Parallelization R&D work is that porting algorithms from one GPU language to another is fundamentally much easier than porting algorithms optimised for running in a single thread on a CPU to running efficiently on a GPU. During development, some algorithms first implemented in CUDA could be modified fairly easily and quickly to work with SYCL, creating portable code.
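As a rough illustration of why such ports tend to be mechanical, the hypothetical kernel below shows how the familiar CUDA thread-indexing pattern maps onto a SYCL nd_range kernel; the function and variable names are placeholders, not traccc code.

```cpp
// Hypothetical example of porting a CUDA-style kernel to SYCL.
// In CUDA this would be a __global__ kernel computing
//   i = blockIdx.x * blockDim.x + threadIdx.x;
// in SYCL the same information comes from the nd_item passed to the lambda.
#include <sycl/sycl.hpp>

void scale(sycl::queue& q, float* data, std::size_t n, float factor) {
    const std::size_t local = 256;                                 // CUDA: block size
    const std::size_t global = ((n + local - 1) / local) * local;  // grid size * block size
    q.parallel_for(
        sycl::nd_range<1>{sycl::range<1>{global}, sycl::range<1>{local}},
        [=](sycl::nd_item<1> item) {
            const std::size_t i = item.get_global_id(0);  // CUDA: global thread index
            if (i < n) { data[i] *= factor; }
        });
}

int main() {
    sycl::queue q;
    const std::size_t n = 1000;
    float* data = sycl::malloc_shared<float>(n, q);
    for (std::size_t i = 0; i < n; ++i) { data[i] = 1.0f; }
    scale(q, data, n, 3.0f);
    q.wait();
    sycl::free(data, q);
    return 0;
}
```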


The Status of ACTS Track Reconstruction on GPUs

Reconstructing the tracks of charged particles in a detector like ATLAS happens in multiple steps. After establishing the 3D positions in the detector where (charged) particles interacted with detector elements, tracks are reconstructed by first generating “seeds” of viable 3D position triplets, then extending those seeds with a combinatorial Kalman filter, and finally performing a combined fit of the entire track. A more detailed description of this process is available from the ACTS project.
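The stub code below outlines that chain as a sequence of steps; all type and function names are hypothetical placeholders, not the actual ACTS or traccc interfaces.

```cpp
// Purely illustrative outline of the reconstruction chain: space points ->
// seeds -> combinatorial Kalman filter -> final fit. Names are placeholders.
#include <vector>

struct SpacePoint { float x, y, z; };           // 3D measurement position
struct Seed       { SpacePoint sp[3]; };        // triplet of space points
struct Track      { std::vector<SpacePoint> hits; /* + fitted parameters */ };

// Step 1: form seeds from viable triplets of space points (stub).
std::vector<Seed> find_seeds(const std::vector<SpacePoint>& spacepoints) {
    return {};
}

// Step 2: extend each seed through the detector, collecting compatible
// measurements with a combinatorial Kalman filter (stub).
std::vector<Track> find_tracks(const std::vector<Seed>& seeds,
                               const std::vector<SpacePoint>& spacepoints) {
    return {};
}

// Step 3: perform a combined fit of every full track candidate (stub).
std::vector<Track> fit_tracks(const std::vector<Track>& candidates) {
    return candidates;
}

int main() {
    std::vector<SpacePoint> spacepoints;        // filled from detector data
    auto seeds  = find_seeds(spacepoints);
    auto tracks = find_tracks(seeds, spacepoints);
    auto fitted = fit_tracks(tracks);
    (void) fitted;
    return 0;
}
```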

Implementation progress within the ACTS Parallelization R&D Project. (Image: ATLAS Collaboration/CERN)

The R&D project is nearly feature complete at the time of writing, as shown in the above table. Already at this point the project has proven the feasibility of implementing track reconstruction with significant code sharing between a classical CPU implementation and one designed specifically for efficient GPU acceleration. We used the following tools from the Intel® oneAPI Base Toolkit during the development:

  • The Intel® oneAPI DPC++/C++ Compiler is used for building all C++ source files of the project, with appropriate flags for building some of the source files as SYCL sources.
  • The oneapi-gdb debugger was used many times during the development to understand our code, and to validate the implementation of our algorithms.
  • The Intel® VTune™ Profiler continues to be very effective for understanding performance bottlenecks in both the host/CPU and device/GPU parts of our codebase. We use it extensively during code development to understand which parts of the code to focus our optimisations on.
  • Finally, the oneAPI Threading Building Blocks (oneTBB) task-based multithreading library was used to implement host-side multithreading in our applications (a small illustrative sketch follows this list). oneTBB will eventually allow us to conveniently integrate the Acts GPU code into ATLAS's full offline software, which is also based on oneTBB.
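As a small, self-contained illustration (not ATLAS code) of the kind of host-side, task-based parallelism oneTBB provides, a parallel loop over a container looks like this:

```cpp
// Minimal oneTBB example: a parallel loop over a container. oneTBB splits
// the range into chunks and schedules them as tasks on a shared thread pool.
#include <oneapi/tbb/parallel_for.h>
#include <oneapi/tbb/blocked_range.h>
#include <vector>
#include <iostream>

int main() {
    std::vector<float> data(1000000, 1.0f);

    oneapi::tbb::parallel_for(
        oneapi::tbb::blocked_range<std::size_t>(0, data.size()),
        [&](const oneapi::tbb::blocked_range<std::size_t>& r) {
            for (std::size_t i = r.begin(); i != r.end(); ++i) {
                data[i] *= 2.0f;
            }
        });

    std::cout << "data[0] = " << data[0] << "\n";  // expected: 2
    return 0;
}
```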

Performance Results

Based on early performance results, we believe GPU-based track reconstruction will be a viable path for ATLAS in the High-Luminosity LHC era. In tests with an early version of the Acts GPU R&D code, running the existing algorithm chain up to the estimation of track-seed parameters, Intel's data centre GPUs offer performance competitive with the offerings from NVIDIA, as shown below, with the performance of SYCL code compiled for an NVIDIA backend being very close to that of native CUDA code executing the same algorithm. It is also worth noting that a single data centre GPU provides significantly higher performance with the traccc code than would be possible using even multiple traditional CPUs.

Track Parameter Estimation Performance (Image: ATLAS Collaboration/CERN). See configuration information.

The Future of the ACTS Parallelization R&D

Schedule for the High-Luminosity LHC, which begins in 2029 with Run 4 of the LHC. (Image: CERN)

Once the traccc project becomes feature complete, after a review of the lessons learned from the R&D process, the code will be migrated back into the main ACTS codebase.

With the start of the High-Luminosity LHC era currently planned for 2029 (see the schedule above), ATLAS is planning to make a final decision in 2025–26 about the hardware and software that it will use for its track reconstruction as part of its data taking. The oneAPI-aided implementation is well on track to prove its viability, providing code portability with performance and a flexible choice of hardware, and possibly to be selected as the experiment's solution for handling the huge data load coming from colliding particle beams at a higher intensity than was ever possible before.


The work described here was performed as a part of a collaboration between the ACTS Parallelization R&D team (mostly composed of members of the ATLAS Heterogeneous Computing & Accelerator Forum), led by Attila Krasznahorkay, and the Intel oneAPI team.


Track Parameter Estimation Performance Information

Testing date:
Results are based on testing by the research team working on this study at Intel as of August 22, 2023.
Configuration data:
Intel Data Center GPU Max Series: 1-node, 2x Intel Xeon Platinum 8480+, 56 cores, HT On, Turbo On, NUMA 2, Total Memory 1024GB (16x64GB DDR5 4800 MT/s [4800 MT/s]), BIOS SE5C7411.86B.9525.D26.2305160804, 1x Ethernet Controller X710 for 10GBASE-T, 1x 960 GB Micron 7450 MTFDKBG960TFR, Ubuntu 22.04.2 LTS, 5.15.47+prerelease23.6.22, microcode 0x2b0001b0, 4x Intel Data Center GPU Max 1550, agama driver: agama-ci-devel-682.16, AMC Firmware Version: 6.6.0.0
Compilers/Tools used: Intel® oneAPI DPC++/C++ Compiler 2023.2.1
Compiler flags used: “-O2 -fsycl -fsycl-targets=intel_gpu_pvc -Xsycl-target-backend '-options -ze-intel-enable-auto-large-GRF-mode'”
Tested by the ATLAS Experiment at CERN on 22/08/23.

NVIDIA A100: 1-node, 2x Intel Xeon Platinum 8480+, 56 cores, HT On, Turbo On, NUMA 2, Total Memory 512GB (16x32GB DDR5 4800 MT/s [4800 MT/s]), BIOS SE5C7411.86B.9525.D26.2305160804, 1x Ethernet Controller X710 for 10GBASE-T, 1x 1 TB Intel SSD PE2KX010T8, Ubuntu 22.04 LTS, 5.15.0-79-generic, microcode 0x2b0004b1, NVIDIA A100 80GB PCIe GPU, Driver Version: 535.54.03, CUDA Version: 12.0
Compilers/Tools used: clang version 17.0.0 (https://github.com/intel/llvm aa5722c9b25b79c70756c77cbe8393ad524f6e5e)
Compiler flags used: “-fsycl -fsycl-targets=nvidia_gpu_sm_80”
Tested by the ATLAS Experiment at CERN on 22/08/23.

Open Virtual Visit in English

Submitted by Steven Goldfarb on

A special Virtual Visit offered during a temporary LHC shutdown in the summer of 2023. We'll answer questions about the shutdown and the large variety of recent results presented at the summer conferences. Oh yes. And you'll get to see our beautiful detector.