Updates tagged: “computing”
2016 has been a record-breaking year. The LHC surpassed its design luminosity and produced stable beams a staggering 60% of the time – up from 40% in previous years, and even surpassing the hoped-for 50% threshold. While the whole ATLAS collaboration rejoiced – eager to analyse the vast outpouring of data from the experiment – its computing experts had their work cut out for them.
My colleagues and I are in town to attend the 22nd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2016, for short). I like to think of us as the nerds of the nerds. Computing, networking, software, middleware, bandwidth, and processors are the topics of discussion, and there is indeed much to talk about.
The Chinese Academy of Sciences (CAS) has awarded members of the ATLAS computing community first prize for their novel use of supercomputer infrastructure.
At the ATLAS experiment, a masterful computing infrastructure transforms raw data from the detector into reconstructed particles for analysis, each with a measured direction, energy and type.
This is the last part of my attempt to explain our simulation software. You can read Part 1, about event generators, and Part 2, about detector simulation, if you want to catch up. Just as a reminder, we’re trying to help our theorist friend by searching for his proposed “meons” in our data.
I’ve been working on our simulation software for a long time, and I’m often asked “what on earth is that?” This is my attempt to help you love simulation as much as I do.
Having spent many hours working on the simulation software in ATLAS, I thought this would be a good place to explain what on earth that is (H/T to Al Brooks for the title). Our experiment wouldn’t run without the simulation, and yet there are few people who really understand it.
Dave Charlton and his team have a mammoth job on their hands; Charlton has been tasked with coordinating the Full Dress Rehearsal (FDR) of the computing and data analysis processes of the ATLAS experiment, a run-through which he describes as "essential, almost as much as ensuring the detector itself actually works".
On 6th August ATLAS reached a major milestone for its Distributed Data Management project: copying its first petabyte (10¹⁵ bytes) of data out from CERN to computing centers around the world. This achievement is part of the so-called 'Tier-0 exercise', running since 19th June, in which simulated data are used to exercise the expected data flow within the CERN computing centre and out over the Grid to the Tier-1 computing centers, as would happen during real data taking.
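To get a feel for the scale, here is a minimal back-of-the-envelope sketch. The petabyte figure and the dates come from the milestone above; the assumption that the transfers were sustained roughly evenly over the ~48-day window (19th June to 6th August) is my own simplification, not a figure from ATLAS.

```python
# Back-of-the-envelope: average outbound rate needed to move 1 PB
# from CERN in ~48 days (19 June to 6 August), assuming a steady flow.

PETABYTE = 10**15              # bytes, as quoted above

days = 48                      # approximate length of the exercise window
seconds = days * 24 * 3600     # 4,147,200 seconds

avg_rate = PETABYTE / seconds  # bytes per second
print(f"Average sustained rate: {avg_rate / 1e6:.0f} MB/s")
```

Under that steady-flow assumption, the exercise corresponds to an average of roughly 240 MB/s flowing out of CERN to the Tier-1 centers, around the clock, for the whole period.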