Updates tagged: “computing”
2016 has been a record-breaking year. The LHC surpassed its design luminosity and delivered stable beams a staggering 60% of the time – up from 40% in previous years, exceeding even the hoped-for 50% threshold. While the entire ATLAS collaboration rejoiced – eager to analyse the vast outpouring of data from the experiment – its computing experts had their work cut out for them.
My colleagues and I are in town to attend the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016, for short). I like to think of us as the nerds of the nerds. Computing, networking, software, middleware, bandwidth, and processors are the topics of discussion, and there is indeed much to talk about.
The Chinese Academy of Sciences (CAS) has awarded members of the ATLAS computing community first prize for their novel use of supercomputer infrastructure.
At the ATLAS experiment, a masterful computing infrastructure transforms raw data from the detector into reconstructed particles for analysis, each with a measured direction, energy and type.
The ATLAS Outstanding Achievement Awards 2015 were presented on 18 June to 26 physicists and engineers, in 11 groups, for their excellent work carried out during Long Shutdown 1 (LS1).
This is the last part of my attempt to explain our simulation software. You can read Part 1, about event generators, and Part 2, about detector simulation, if you want to catch up. Just as a reminder, we’re trying to help our theorist friend by searching for his proposed “meons” in our data.
I’ve been working on our simulation software for a long time, and I’m often asked “what on earth is that?” This is my attempt to help you love simulation as much as I do.
Having spent many hours working on the simulation software in ATLAS, I thought this would be a good place to explain what on earth that is (H/T to Al Brooks for the title). Our experiment wouldn’t run without the simulation, and yet there are few people who really understand it.
I've been lucky to make two workshop and conference stops on a trip that started at the very beginning of October. The first was Kinematic Variables for New Physics, hosted at Caltech. Now I'm at the Computing in High Energy Physics conference in Amsterdam. Going to conferences and workshops is a big part of what we do: to explain our work and share the great things we're doing, to hear the latest on other people's work, and - this one is important - to talk with colleagues about what we should do next.
The LHC is designed to collide bunches of protons every 25 ns, i.e., at a 40 MHz rate (40 million/second). In each of these collisions, something happens. Since there is no way we can collect data at this rate, we try to pick only the interesting events, which occur very infrequently; however, this is easier said than done. Experiments like ATLAS employ a very sophisticated filtering system to keep only those events that we are interested in. This is called the trigger system, and it works because the interesting events have unique signatures that can be used to distinguish them from the uninteresting ones.
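The idea of keeping only events with a distinguishing signature can be sketched in a few lines of code. This is a toy model, not the real ATLAS trigger menu: the 25 GeV threshold, the event structure, and the soft-activity distribution are all illustrative assumptions.

```python
import random

def passes_trigger(event, pt_threshold_gev=25.0):
    """Toy trigger decision: keep an event only if it contains at least
    one high-transverse-momentum object, standing in for the 'unique
    signatures' a real trigger menu selects on. The threshold is a
    hypothetical value, not an actual ATLAS setting."""
    return any(obj["pt"] > pt_threshold_gev for obj in event["objects"])

random.seed(0)

# Simulate a stream of events; most contain only soft (low-pT) activity,
# drawn here from an exponential distribution with a 5 GeV mean.
events = [
    {"objects": [{"pt": random.expovariate(1 / 5.0)} for _ in range(4)]}
    for _ in range(100_000)
]

kept = [e for e in events if passes_trigger(e)]
print(f"kept {len(kept)} of {len(events)} events "
      f"({100 * len(kept) / len(events):.2f}%)")
```

Even this crude filter rejects the vast majority of the stream, which is the point: the real system must do this at 40 MHz, in real time, with far more sophisticated signatures.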