computing

ATLAS releases 65 TB of open data for research

The ATLAS Experiment at CERN has made two years’ worth of scientific data available to the public for research purposes. The data include recordings of proton–proton collisions from the Large Hadron Collider (LHC) at a collision energy of 13 TeV. This is the first time that ATLAS has released data on this scale, marking a significant milestone in public access to and use of LHC data.

1 July 2024

Learning by machines, for machines: Artificial Intelligence in the world's largest particle detector

Julia Gonski explains the long-established use of artificial intelligence and machine learning (AI/ML) in high-energy physics research and explores the exciting potential these technologies hold for the field.

5 June 2024

Evolving ATLAS conditions data architecture for LHC Runs 3 and 4

For Run 3 of the LHC (2022–ongoing), the ATLAS Collaboration decided to change how it stores and processes conditions data. The significant efforts that went into this change – and the motivations for them – were presented at the 26th International Conference on Computing in High Energy and Nuclear Physics in May.
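
As a rough illustration of what conditions data involve, here is a minimal sketch of an “interval of validity” lookup, the standard way calibrations and similar quantities are associated with a span of runs; the scheme and field names are illustrative, not the ATLAS implementation.

```python
# A toy sketch of the "interval of validity" (IoV) idea behind conditions
# data: calibrations and similar quantities are valid for a span of runs.
# The scheme and field names are illustrative, not the ATLAS implementation.
conditions = [
    {"valid_from": 0,   "valid_to": 100, "payload": {"gain": 1.02}},
    {"valid_from": 100, "valid_to": 200, "payload": {"gain": 0.98}},
]

def lookup(run_number):
    """Return the payload whose interval of validity covers this run."""
    for c in conditions:
        if c["valid_from"] <= run_number < c["valid_to"]:
            return c["payload"]
    raise KeyError(f"no conditions for run {run_number}")

print(lookup(150))  # -> {'gain': 0.98}
```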

22 June 2023

ATLAS and Seal Storage Technology collaborate on new archival storage

The ATLAS Collaboration has partnered with Seal Storage Technology in a pilot project to explore their decentralised cloud storage platform as an efficient and cost-effective option for archival data storage.

28 October 2022

Harnessing a supercomputer for ATLAS

ATLAS researchers are exploring the potential of High Performance Computing (HPC). HPC harnesses the power of purpose-built supercomputers constructed from specialised hardware, and is used widely in other scientific disciplines.

2 June 2022

ATLAS Live talk: Artificial Intelligence, Machine Learning and the Higgs boson with Dr. David Rousseau

On 31 March 2022 at 8pm CEST, Dr. David Rousseau will give a live public talk on the ATLAS YouTube channel on the role artificial intelligence plays in particle physics research.

23 March 2022

ATLAS event selection system readies for LHC Run 3

The ATLAS trigger system operated extremely successfully during Run 1 (2009–2013) and Run 2 (2015–2018) of the LHC. It is now undergoing various upgrades in preparation for the upcoming Run-3 data-taking period, which will see a moderate increase in the rate of collisions inside the experiment.

28 February 2022

ATLAS Live talk: Building the Data Haystack with Dr. Heather Russell

On 22 November 2021 at 8pm CET, Dr. Heather Russell will give a live public talk on the ATLAS YouTube channel on the "trigger", the ATLAS event selection system.

18 November 2021

Teaching established software new tricks

Following several years of development, the ATLAS Collaboration has launched a new "multithreaded" release of its analysis software, Athena.
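
As a loose illustration of what “multithreaded” means here (Athena itself is a C++ framework; this toy is Python), the sketch below processes independent events concurrently rather than strictly one after another.

```python
# A toy illustration (in Python; Athena itself is a C++ framework) of the
# core idea of a multithreaded event loop: independent events are processed
# concurrently instead of strictly one after another.
from concurrent.futures import ThreadPoolExecutor

def reconstruct(event_id):
    # Stand-in for per-event reconstruction work.
    return f"event {event_id} reconstructed"

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(reconstruct, range(8)))

print("\n".join(results))
```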

15 October 2021

Bringing new life to ATLAS data

The ATLAS Collaboration is breathing new life into its LHC Run-2 dataset, recorded from 2015 to 2018. Physicists will be reprocessing the entire dataset – nearly 18 PB of collision data – using an updated version of the ATLAS offline analysis software (Athena). Not only will this improve ATLAS physics measurements and searches, it will also position the Collaboration well for the upcoming challenges of Run 3 and beyond.

15 October 2021

ATLAS Live talk: From Data to Discovery with Dr. James Catmore

Making a scientific breakthrough in 2021 requires more than just a microscope – most scientists rely on powerful computers and ingenious software to carry out their research. In this live talk, Dr. James Catmore explains the advanced computing and software techniques used by the ATLAS Experiment.

10 May 2021

ATLAS releases new open software

The ATLAS Collaboration has just released a collection of 200 software packages that make up the Trigger and Data Acquisition System (TDAQ). With this new release, most ATLAS software is now open – reinforcing the Collaboration’s ongoing commitment to open science.

20 November 2020

African scientists take on new ATLAS machine-learning challenge

Cirta is a new machine-learning challenge for high-energy physics on Zindi, the Africa-based data-science challenge platform. Launched this autumn at the International Conference on High Energy and Astroparticle Physics (TIC-HEAP), Constantine, Algeria, Cirta challenges participants to provide machine-learning solutions for identifying particles in LHC experiment data.
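
To give a flavour of the kind of task such a challenge poses, here is a minimal sketch of a particle classifier trained on synthetic stand-in features; the data, features and labels are placeholders, not the Cirta dataset.

```python
# A minimal sketch of the kind of task such a challenge poses: classify
# particle species from a few detector features. The data, features and
# labels are synthetic placeholders, not the Cirta dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Toy features (energy deposit, shower width) for two particle classes.
X = np.vstack([rng.normal([1.0, 0.5], 0.2, (500, 2)),
               rng.normal([0.6, 0.9], 0.2, (500, 2))])
y = np.repeat([0, 1], 500)  # 0 = "electron-like", 1 = "pion-like"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"toy accuracy: {clf.score(X_test, y_test):.2f}")
```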

20 November 2019

The trouble with terabytes

2016 has been a record-breaking year. The LHC surpassed its design luminosity and produced stable beams a staggering 60% of the time – up from 40% in previous years, and even exceeding the hoped-for 50% target. While the whole ATLAS Collaboration rejoiced – eager to analyse the vast outpouring of data from the experiment – its computing experts had their work cut out for them.

14 December 2016

Higgs over easy

My colleagues and I are in town to attend the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016, for short). I like to think of us as the nerds of the nerds. Computing, networking, software, middleware, bandwidth, and processors are the topics of discussion, and there is indeed much to talk about.

12 October 2016

ATLAS High Performance Computing Initiative Wins Award

The Chinese Academy of Sciences (CAS) has awarded members of the ATLAS computing community first prize for their novel use of supercomputer infrastructure.

11 December 2015

Behind very great results lies great computing

At the ATLAS experiment, a masterful computing infrastructure transforms raw detector data into reconstructed particles for analysis, each with a measured direction, energy and type.
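
As a toy illustration of what that transformation involves, the sketch below clusters raw energy deposits into a single particle candidate with a direction, an energy and a type; the structures and thresholds are invented for illustration.

```python
# A toy sketch of what "reconstruction" means here: turning raw detector
# signals into particle candidates, each with a direction, an energy and a
# type. The structures and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Particle:
    eta: float     # direction (pseudorapidity)
    phi: float     # direction (azimuthal angle)
    energy: float  # GeV
    kind: str      # e.g. "electron" or "jet"

def reconstruct(raw_hits):
    """Cluster raw energy deposits into one particle candidate (toy logic)."""
    energy = sum(h["e"] for h in raw_hits)
    eta = sum(h["eta"] * h["e"] for h in raw_hits) / energy
    phi = sum(h["phi"] * h["e"] for h in raw_hits) / energy
    kind = "electron" if len(raw_hits) < 5 else "jet"
    return Particle(eta, phi, energy, kind)

hits = [{"eta": 0.51, "phi": 1.20, "e": 25.0},
        {"eta": 0.49, "phi": 1.18, "e": 20.0}]
print(reconstruct(hits))
```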

13 November 2015

ATLAS awards Long Shutdown 1 achievements

The ATLAS Outstanding Achievement Awards 2015 were presented on 18 June to 26 physicists and engineers, in 11 groups, for their excellent work carried out during Long Shutdown 1 (LS1).

25 June 2015

Defending Your Life (Part 3)

This is the last part of my attempt to explain our simulation software. You can read Part 1, about event generators, and Part 2, about detector simulation, if you want to catch up. Just as a reminder, we’re trying to help our theorist friend by searching for his proposed “meons” in our data.

28 October 2014

Defending Your Life (Part 2)

I’ve been working on our simulation software for a long time, and I’m often asked “what on earth is that?” This is my attempt to help you love simulation as much as I do.

20 October 2014

Defending Your Life (Part 1)

Having spent many hours working on the simulation software in ATLAS, I thought this would be a good place to explain what on earth that is (H/T to Al Brooks for the title). Our experiment wouldn’t run without the simulation, and yet there are few people who really understand it.
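
For a flavour of what the series covers, here is a toy sketch of the two stages it describes: an “event generator” step producing true particle energies, and a “detector simulation” step smearing them with a finite resolution. The numbers are illustrative, not ATLAS parameters.

```python
# A toy sketch of the two simulation stages the series describes: an "event
# generator" step that produces true particle energies, and a "detector
# simulation" step that smears them with a finite resolution. The numbers
# are illustrative, not ATLAS parameters.
import random

random.seed(42)

def generate_event():
    """Event generation: draw a 'true' particle energy in GeV."""
    return random.expovariate(1 / 50.0)  # falling spectrum, mean 50 GeV

def simulate_detector(true_energy, resolution=0.10):
    """Detector simulation: smear the true energy by a 10% resolution."""
    return random.gauss(true_energy, resolution * true_energy)

for _ in range(3):
    e_true = generate_event()
    e_meas = simulate_detector(e_true)
    print(f"true: {e_true:6.1f} GeV   measured: {e_meas:6.1f} GeV")
```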

5 October 2014

Letters from the Road

I've been lucky to make two workshop/conference stops on a trip that started at the very beginning of October. The first was at Kinematic Variables for New Physics, hosted at Caltech. Now I'm up at the Computing in High Energy Physics conference in Amsterdam. Going to conferences and workshops is a big part of what we do: to explain our work and share the great things we're doing, to hear the latest on other people's work, and – this one is important – to talk with colleagues about what we should do next.

18 October 2013

Needle in a haystack

The LHC is designed to collide bunches of protons every 25 ns, i.e., at a 40 MHz rate (40 million/second). In each of these collisions, something happens. Since there is no way we can collect data at this rate, we try to pick only the interesting events, which occur very infrequently; however, this is easier said than done. Experiments like ATLAS employ a very sophisticated filtering system to keep only those events that we are interested in. This is called the trigger system, and it works because the interesting events have unique signatures that can be used to distinguish them from the uninteresting ones.
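
As a toy illustration of the idea (not ATLAS trigger code), the sketch below keeps only events whose signatures pass configurable thresholds; the thresholds and field names are invented for illustration.

```python
# A toy sketch (not ATLAS trigger code) of the filtering idea: keep only
# events whose signatures pass configurable thresholds. Thresholds and
# field names are invented for illustration.
def passes_trigger(event, min_muon_pt=25.0, min_missing_et=100.0):
    """Accept an event with a high-momentum muon OR large missing energy."""
    has_hard_muon = any(mu["pt"] > min_muon_pt for mu in event.get("muons", []))
    large_met = event.get("missing_et", 0.0) > min_missing_et
    return has_hard_muon or large_met

# In practice such filters run in stages, each reducing the rate further.
events = [
    {"muons": [{"pt": 42.0}], "missing_et": 12.0},   # kept: hard muon
    {"muons": [{"pt": 3.5}],  "missing_et": 7.0},    # rejected
    {"muons": [],             "missing_et": 150.0},  # kept: large MET
]
selected = [e for e in events if passes_trigger(e)]
print(f"kept {len(selected)} of {len(events)} events")
```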

16 March 2012

From 0-60 in 10 million seconds! – Part 2

This continues from the previous post, where I discussed how we convert data collected by ATLAS into usable objects. Here I explain the steps needed to get a physics result. I can now use our data sample to prove or disprove the predictions of Supersymmetry (SUSY), string theory or what have you. What steps do I follow?
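
As a minimal sketch of the counting-experiment logic at the end of such an analysis: apply selection cuts, count surviving events, and compare with the expected background. All numbers and cut names below are illustrative.

```python
# A minimal sketch of the counting-experiment logic at the end of such an
# analysis: apply selection cuts, count surviving events, and compare with
# the expected background. All numbers and cut names are illustrative.
events = [
    {"missing_et": 120.0, "n_jets": 4},
    {"missing_et": 30.0,  "n_jets": 2},
    {"missing_et": 210.0, "n_jets": 5},
]

def passes_selection(ev):
    # Example SUSY-style cuts: large missing energy plus several jets.
    return ev["missing_et"] > 100.0 and ev["n_jets"] >= 3

n_observed = sum(passes_selection(ev) for ev in events)
n_background = 1.2  # would come from simulation in a real analysis
print(f"observed: {n_observed}, expected background: {n_background}")
```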

19 February 2012

From 0-60 in 10 million seconds! – Part 1

OK, so I’ll try to give a flavour of how the data we collect gets turned into a published result. As the title indicates, it takes a while! The post got very long, so I have split it into two parts. The first will talk about reconstructing data, and the second will explain the analysis stage.

17 February 2012

7 or 8 TeV, a thousand terabyte question!

A very happy new year to the readers of this blog. As we start 2012, hoping to finally find the elusive Higgs boson and other signatures of new physics, an important question needs to be answered first: are we going to have collisions at a centre-of-mass energy of 7 or 8 TeV?

11 February 2012

Top down: Reflections on a long and sleepless analysis journey

For the last months (which feel like years…) I’ve been working, within a small group of people, on the precision measurement of the top quark pair production cross section, and if you think that sounds complicated – the German word is “Top-Quark-Paarproduktionswechselwirkungsquerschnitt”.

21 August 2011

Dress Rehearsal for ATLAS debut

Dave Charlton and his team have a mammoth job on their hands; Charlton has been tasked with coordinating the Full Dress Rehearsal (FDR) of the computing and data analysis processes of the ATLAS experiment, a run-through which he describes as "essential, almost as much as ensuring the detector itself actually works".

15 December 2007

ATLAS copies its first PetaByte out of CERN

On 6th August ATLAS reached a major milestone for its Distributed Data Management project – copying its first petabyte (10¹⁵ bytes) of data out of CERN to computing centres around the world. This achievement is part of the so-called 'Tier-0 exercise' running since 19th June, in which simulated data are used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centres, as would happen during real data taking.
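
A back-of-the-envelope check (an illustration, not an official figure) of what that implies for sustained throughput:

```python
# Back-of-the-envelope check (an illustration, not an official figure):
# the average rate implied by moving one petabyte between 19 June and
# 6 August, roughly 48 days.
petabyte = 1e15                     # bytes
seconds = 48 * 24 * 3600            # ~48 days
rate_mb_s = petabyte / seconds / 1e6
print(f"average throughput ≈ {rate_mb_s:.0f} MB/s")  # ≈ 241 MB/s
```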

1 November 2006