Needle in a haystack

16 March 2012

The LHC is designed to collide bunches of protons every 25 ns, i.e., at a 40 MHz rate (40 million/second). In each of these collisions, something happens. Since there is no way we can collect data at this rate, we try to pick only the interesting events, which occur very infrequently; however, this is easier said than done. Experiments like ATLAS employ a very sophisticated filtering system to keep only those events that we are interested in. This is called the trigger system, and it works because the interesting events have unique signatures that can be used to distinguish them from the uninteresting ones.

The ATLAS Trigger and Data Acquisition System. (Image: ATLAS Experiment/CERN)

The ATLAS trigger system is a combination of electronic circuit boards and software running on hundreds of computers, and is designed to reduce the 40 MHz collision rate to a manageable 200-400 events per second. Each event is expected to be around 1 megabyte (for comparison, this post corresponds to about 4-5 kilobytes), so you can see that we are dealing with a lot of data. And all of this has to be done in real time. In a previous post, Regina Caputo gave an overview of triggers; here I expand on that.
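To get a feel for the scale of that reduction, here is a back-of-the-envelope sketch in Python using the round numbers quoted above (the numbers come from the text; the script itself is purely illustrative):

# Rough scale of the trigger's job, using the round numbers quoted above.
collision_rate_hz = 40e6        # 40 MHz bunch-crossing rate
event_size_bytes = 1e6          # each recorded event is roughly 1 MB

for output_rate_hz in (200, 400):
    rejection = collision_rate_hz / output_rate_hz      # crossings discarded per event kept
    bandwidth_mb_per_s = output_rate_hz * event_size_bytes / 1e6
    print(f"keep {output_rate_hz}/s -> reject ~{rejection:,.0f} crossings per event kept, "
          f"write ~{bandwidth_mb_per_s:.0f} MB/s to disk")

In other words, for every event we keep, somewhere between 100,000 and 200,000 bunch crossings are thrown away.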

Before I get to the number of events that we collect, let me first explain a couple of concepts: the cross-section of a particular process and the luminosity. Cross-section is jargon; basically, it gives you a measure of the probability of a certain kind of event happening, and it is a function of the energy of the collision. In general, the higher the collision energy, the higher the cross-section of a process, especially if we are producing a heavy particle (there are some subtleties that I won’t get into now). Luminosity is a measure of the “intensity” of the beam. The product of luminosity and cross-section gives the rate at which events of a given process are produced. The beauty of the trigger system is that it can be configured to pick the kinds of events we want to study.
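That product is the only formula we need. Here is a minimal Python sketch of the relation, with a made-up 1-millibarn cross-section just to show the units working out:

# rate = luminosity x cross-section; 1 barn = 1e-24 cm^2
BARN_IN_CM2 = 1e-24

def event_rate_hz(luminosity_per_cm2_s, cross_section_barn):
    """Events per second for a process with the given cross-section."""
    return luminosity_per_cm2_s * cross_section_barn * BARN_IN_CM2

# e.g. a hypothetical 1-millibarn process at a luminosity of 1e34 cm^-2 s^-1
print(event_rate_hz(1e34, 1e-3))   # -> 10000000.0, i.e. ten million events per second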

One common kind of event happens when two protons “glance” off each other without really breaking up; these are called “elastic collisions”. Then you have protons colliding and breaking up, producing “garden-variety” stuff, e.g., pions, kaons, protons, charm quarks, bottom quarks, etc.; these are labelled “inelastic collisions”. The sum of all these processes is the “total cross-section”, which is about 70-80 millibarns at a collision energy of 7 TeV, i.e., roughly 1/12th of a barn; the concept of a “barn” probably derives from the expression “as easy as hitting the side of a barn”! So, a cross-section of 80 millibarns implies a very, very large probability (1 barn = 10⁻²⁴ cm²). At collision energies of 14 TeV, this might increase by about 10-20%.

In contrast, the cross-section for producing a Higgs boson (with a mass of 150 GeV, i.e., about 160 times the mass of a proton) in 7 TeV collisions is approximately 8 picobarns (8×10⁻¹² barns), i.e., roughly 10 billion times smaller than the “total cross-section”. The cross-section for producing top quarks is about 170 picobarns. Events containing a Higgs or top quarks have some unique signatures that are exploited by the trigger algorithms. (At 14 TeV, the cross-section for these interesting events can increase by as much as a factor of five, so you can see why we want to keep increasing the energy of these collisions.)
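These ratios are easy to check for yourself; a quick Python sanity check using the cross-sections quoted above:

# Cross-sections quoted in the text, in barns (1 picobarn = 1e-12 barn).
total_xsec = 80e-3      # ~80 millibarns, total proton-proton cross-section at 7 TeV
higgs_xsec = 8e-12      # ~8 picobarns, 150 GeV Higgs at 7 TeV
top_xsec = 170e-12      # ~170 picobarns, top-quark production at 7 TeV

print(total_xsec / higgs_xsec)   # -> 1e10: Higgs events are ten billion times rarer
print(total_xsec / top_xsec)     # -> ~5e8: top-quark events are also extremely rare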

The LHC is designed to have a luminosity of 10³⁴ cm⁻² s⁻¹, i.e., looking head-on at the beam there are 10³⁴ protons per square centimetre per second. In reality, each colliding bunch only has about 10¹¹ protons, but they are squeezed into a spot with a radius of about 0.003 cm, and the bunches cross about 40 million times per second. So, taking the product of cross-section and luminosity, we estimate that we will get approximately 10⁹ “junk events” per second and 0.1 Higgs events per second! Of course, there are other interesting events that we would like to collect, e.g., those containing top quarks, which come at a rate of about 2 Hz. We also record some of the “garden-variety” events, because they are very useful in understanding how the detector is working. So this is what the trigger does: separate what we want from what we don’t want, all in “real time”.
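Those rate estimates come straight from multiplying the design luminosity by the cross-sections above; here is the same arithmetic in a few lines of Python (a sketch, not an official calculation):

BARN_IN_CM2 = 1e-24
LUMINOSITY = 1e34                 # LHC design luminosity, in cm^-2 s^-1

def rate_hz(cross_section_barn):
    return LUMINOSITY * cross_section_barn * BARN_IN_CM2

print(rate_hz(80e-3))     # total inelastic: ~8e8, i.e. roughly 1e9 "junk events" per second
print(rate_hz(170e-12))   # top quarks:      ~1.7 per second
print(rate_hz(8e-12))     # 150 GeV Higgs:   ~0.08, i.e. roughly 0.1 per second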

As mentioned above, we plan to write to disk approximately 200-400 events per second, with each event being 1 MB in size. If we run the accelerator continuously for a year, we will collect (6-12)×10¹⁵ bytes of data, i.e., 6-12 petabytes; this would fill about 38,000-76,000 iPods (ones with 160 GB of storage)! Each event is then passed through the reconstruction software (see the ATLAS Blog "From 0-60 in 10 million seconds! – Part 1"), which only adds to its size; talk about standing in front of a fire hose!
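Again, the arithmetic is simple enough to sketch in Python (assuming roughly 3×10⁷ seconds of running, i.e. about a year, and a 160 GB iPod as the yardstick):

SECONDS_PER_YEAR = 3e7            # roughly one year of continuous running
EVENT_SIZE_BYTES = 1e6            # ~1 MB per event
IPOD_BYTES = 160e9                # a 160 GB iPod

for rate_hz in (200, 400):
    volume_bytes = rate_hz * EVENT_SIZE_BYTES * SECONDS_PER_YEAR
    print(f"{rate_hz} events/s -> {volume_bytes / 1e15:.0f} petabytes "
          f"(~{volume_bytes / IPOD_BYTES:,.0f} iPods)")

That gives roughly 6 and 12 petabytes per year, i.e. about 37,500 to 75,000 iPods, in line with the figures above.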

P.S. For fun facts about ATLAS, check out the ATLAS pop-up book! You can find it on Facebook, watch a video on YouTube, and purchase it on Amazon.