In conversation with Nick Ellis, one of the architects of the ATLAS trigger

10 June 2018

Nick Ellis with the ATLAS trigger system. (Image: K. Anthony/ATLAS Collaboration)

A long-standing member of the ATLAS Collaboration, CERN physicist Nick Ellis was one of the original architects of the ATLAS trigger. Working in the 1980s and 1990s, Nick led groups developing innovative ways to move and process huge quantities of data for the next generation of colliders. It was a challenge some thought was impossible to meet. Nick currently leads the CERN ATLAS Trigger and Data Acquisition Group, and here he shares the wealth of experience he has gained as a key member of the ATLAS Collaboration.

I first became involved in what was to become the ATLAS Collaboration in the mid- to late-1980s. I had been working on the UA1 experiment at CERN’s SPS proton–antiproton collider for several years on various physics analyses and also playing a leading role on the UA1 trigger.

People were starting to think about experiments for higher-energy machines, such as the Large Hadron Collider (LHC) and the never-completed Superconducting Super Collider (SSC). Of course, at this point there was no ATLAS or CMS or even the precursors. There were just groups of people getting together to discuss ideas.

I remember that one of my first discussions about possibilities for the trigger in LHC experiments was over a coffee in CERN’s Restaurant 1 with Peter Jenni. He was on the UA2 experiment at the time and, together with a number of colleagues, was developing ideas for an LHC experiment. Peter later went on to lead the ATLAS Collaboration for over a decade. He told me that nobody was looking at how the trigger system might be designed, and he asked if I would like to develop something. So I did.


“At the time, we did not know that the future held so much possibility in terms of programmable logic. The early ideas for the first-level trigger were based on relatively primitive electronics: modules with discrete logic, memories and some custom integrated circuits.”


The ATLAS trigger is a multilevel system that selects events that are potentially interesting for physics studies from a much larger number of events. It is very challenging since we start off with an interaction rate of the order of a billion per second. In the first stage of the selection, which has to be done within a few millionths of a second, the event rate must be reduced to about 100 kHz, four orders of magnitude below the interaction rate, i.e. only one in ten thousand collisions can give rise to a first-level trigger. Note that each event, corresponding to a given bunch crossing, contains many tens of interactions. The rate must then be brought down by a further two orders of magnitude before the data are recorded for offline analysis.
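To put those numbers together, here is a back-of-the-envelope sketch in Python using only the order-of-magnitude figures quoted above; the rates are illustrative assumptions, not precise ATLAS operating parameters.

```python
# Order-of-magnitude trigger rate cascade, using the figures from the text.
interaction_rate_hz = 1e9   # ~a billion interactions per second
level1_output_hz = 1e5      # ~100 kHz after the first-level trigger
recorded_rate_hz = 1e3      # a further two orders of magnitude lower

level1_rejection = interaction_rate_hz / level1_output_hz  # ~10^4
later_rejection = level1_output_hz / recorded_rate_hz      # ~10^2

print(f"First level keeps ~1 in {level1_rejection:,.0f} collisions")
print(f"Later stages keep ~1 in {later_rejection:,.0f} of those")
print(f"Overall: ~1 in {interaction_rate_hz / recorded_rate_hz:,.0f} recorded")
```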

When I start working on such a complex technical problem, I sit down with a pen and paper and draw diagrams. It’s important to visualise the system. A trigger and data-acquisition system is complicated – you have data being produced, data being processed, data being moved. So, I make a sketch with arrows, writing down order of magnitude numbers, what has to talk to what, what signals have to be sent. These are very rough notes! I doubt anyone other than me would be able to read my sketches that fed into the early designs of ATLAS’ trigger.

Though I was specifically looking at the first-level calorimeter trigger, which was what I was working on at UA1, I was interested in the trigger more generally. At the time, we did not know that the future held so much possibility in terms of programmable logic. The early ideas for the first-level trigger were based on relatively primitive electronics: modules with discrete logic, memories and some custom integrated circuits.

There was also concern that the second-level trigger processing would be hard to implement, because it would have required moving and processing too much data. Here, the first thing I had to do was to demonstrate that it could be done at all! I carried out an intellectual exercise to try to factorise the problem to the maximal extent possible. I was driven to do this because it was so interesting, and it was virgin territory. There were no constraints on the ideas that could be explored.

My initial studies were on a maximally factorised model, the so-called “local–global scheme”. It was never my objective that one would necessarily implement this exact scheme, but I used it as the basis for brainstorming a region-of-interest (ROI) strategy for the trigger. The triggers would look for features of interest in specified regions of the detector, identified by the first-level trigger, rather than searching for features everywhere in the event. This exercise demonstrated that, at any given point in the system, you could get the data movement and computation down to a manageable level.
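As a rough illustration of why ROI guidance tames the data movement, here is a toy Python calculation; every size and count below is an invented placeholder, not real ATLAS geometry or readout parameters.

```python
# Toy model of ROI-guided data access: the second-level trigger fetches
# only the readout fragments overlapping first-level ROIs, instead of the
# whole event. All numbers are illustrative placeholders.
FRAGMENT_SIZE_KB = 2.0      # assumed size of one readout fragment
FRAGMENTS_PER_EVENT = 1500  # assumed fragments in a full event
FRAGMENTS_PER_ROI = 10      # assumed fragments touched by one ROI

def data_fetched_kb(n_rois: int, full_event: bool = False) -> float:
    """Data volume the second level must move for one event."""
    if full_event:
        return FRAGMENTS_PER_EVENT * FRAGMENT_SIZE_KB
    return n_rois * FRAGMENTS_PER_ROI * FRAGMENT_SIZE_KB

print(f"Full event:  {data_fetched_kb(0, full_event=True):.0f} kB")
print(f"3 ROIs only: {data_fetched_kb(3):.0f} kB")  # ~2% of the full event
```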

The ATLAS Level-1 Calorimeter Trigger, located underground in a cavern adjacent to the experiment. (Image: K. Anthony/ATLAS Collaboration)

Together with a few colleagues, I developed this exercise into a study that we presented at the 1990 Large Hadron Collider workshop in Aachen, Germany. In the end, thanks to technological progress, it was not necessary to exploit all the ingredients used in the study. More specifically, instead of separating the processing for each ROI and for each detector, we were able to use a single processor to fully process all of the ROIs in an event. The use of the first-level trigger to guide the second-level data access and processing became a key part of the ATLAS trigger philosophy.

In the years following the Aachen workshop, the ATLAS and CMS experiments began to take shape. It was a really exciting time, and the number of people involved was tiny in comparison to today. You could do anything and everything; you could come with completely new ideas!

When first beams and first collisions finally came, things went more smoothly than I had ever dared to hope. We had spent a lot of time planning for the first single beams and the first collisions: what we would do, in what order, what might go wrong and how we could mitigate it. It has always been in my nature to think ahead about potential problems and make plans to avoid them, ensuring that systems are robust so that a local problem does not become a global problem. Thanks to the work of excellent, dedicated colleagues, everything went really well for first collisions!

Clearly ATLAS has a long future ahead of it, although we will always face challenges: the upgrades we have planned are by no means trivial! Even with our existing infrastructure and experience, there will no doubt be obstacles that we will have to overcome.

And, of course, in the even longer term, CERN itself could change, depending on what happens in physics and on the global stage. It wouldn’t be the first laboratory to do so – just look at DESY and SLAC. Even Fermilab has changed from a collider to a neutrino facility. We never know where the next big discovery will lead us!


ATLAS Portraits is a new series of interviews presenting collaborators whose contributions have helped shape the ATLAS experiment. Look forward to further ATLAS Portraits in the coming months.