LHC Tunnel

Saturday, 21 September 2013

A tale of 3 OpenStack clouds: 50,000 cores in production at CERN

I had several questions at the recent GigaOm conference in London regarding the OpenStack environments at CERN. This blog post explains the different instances and the distinct teams that evaluate and administer them.

The CERN IT department provides services for around 11,000 physicists from around the world. Our central services provide the basic IT infrastructure that any organisation needs, from e-mail and web services to databases and desktop support.

In addition, we provide computing resources for the physics analysis of the 35PB/year that comes from the Large Hadron Collider. The collider is being upgraded until 2015 in order to double the energy of the beams and to enhance the detectors situated around the ring.

During this period, we have a window for significant change to the central IT services. While the analysis of the data from the first run of the LHC continues during this upgrade, the computing infrastructure in CERN IT is moving towards a cloud model based on popular open source technologies used outside of CERN such as Puppet and OpenStack.

As we started working with these communities through mailing lists and conferences, we encountered other people from High Energy Physics organisations around the world who were going through the same transition, such as IN2P3, Nectar, IHEP and Brookhaven. Most surprising of all, we found out that others on the CERN site were working on OpenStack and Puppet, and that the CERN IT cloud was actually the smallest one!

The two largest experiments, ATLAS and CMS, both have large-scale computing farms close to their detectors to filter the 1PB/s of data from each detector before sending it to the CERN computer centre for recording and analysis.
These High Level Trigger farms each comprise over 1,000 servers, typically 12-core systems. During the upgrade of the LHC, these servers would otherwise be idle, as there are no collisions. However, the servers are attached to CERN's technical network, which is isolated from the rest of CERN's network because it serves systems closely associated with the accelerator and other online systems. Thus, they could not easily be used for running physics programs, since the data is not accessible from the technical network.

In view of this, the approach taken was to run a cloud whose virtual machines cannot access the technical network. This gives strong isolation and makes it very easy to start and stop large numbers of programs at a time; during CMS's tests, they were starting and stopping 250 VMs within a 5-minute period.
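To give a flavour of what such a burst looks like in practice, here is a minimal sketch of batch-starting and later tearing down identical worker VMs through the OpenStack compute API. It assumes the 2013-era python-novaclient library; the credentials, image and flavour names, and instance name below are placeholders rather than actual CMS or ATLAS values.

    # Minimal sketch: burst-start and later tear down a batch of identical worker
    # VMs through the OpenStack compute API, in the style of the CMS tests.
    # Assumes the 2013-era python-novaclient; all names and credentials below are
    # placeholders, not real CMS/ATLAS values.
    from novaclient import client

    nova = client.Client("2",                        # compute API version
                         "hlt-operator",             # username (placeholder)
                         "secret",                   # password (placeholder)
                         "hlt-cloud",                # tenant/project (placeholder)
                         "http://keystone.example.org:5000/v2.0")

    image = nova.images.find(name="slc6-sim-worker")   # hypothetical VM image
    flavor = nova.flavors.find(name="m1.large")

    # One API call boots the whole batch rather than looping over single boots.
    nova.servers.create(name="sim-worker",
                        image=image,
                        flavor=flavor,
                        min_count=250,
                        max_count=250)

    # Later, stop the batch again by deleting every matching instance.
    for server in nova.servers.list(search_opts={"name": "sim-worker"}):
        server.delete()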

For the software selection, each experiment makes its own choices independently of the CERN IT department. However, both teams selected OpenStack as their cloud software. The ATLAS cloud was set up with the help of Brookhaven National Laboratory, which also runs OpenStack in its centre. The CMS cloud was set up and run by two CMS engineers.

For configuration management, ATLAS online teams were already using Puppet and CMS migrated from Quattor to Puppet during the course of the project. This allowed them to use the Stackforge Puppet modules, as we do in the CERN IT department.

Both experiment clouds are now in production, running physics simulation and some data analysis programs that can fit within the constraints of limited local storage and network I/O to the computer centre.

Thus, the current CERN cloud capacities are as follows:

Cloud                              Hypervisors   Cores
ATLAS Sim@P1                       1,200         28,800 (HT)
CMS OOOO cloud                     1,300         13,000
CERN IT Ibex and Grizzly clouds    873           20,952 (HT)

This makes over 60,000 cores in total managed by OpenStack and Puppet at CERN. With hyper-threaded cores on both AMD and Intel CPUs, it is always difficult to judge exactly how much extra performance is achieved, hence my estimate of an effective delivery of around 50,000 cores in total.
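As a back-of-the-envelope check, the sums work out as sketched below; the 0.75 effective-throughput factor applied to the hyper-threaded core counts is purely an illustrative assumption of mine, not a measured figure.

    # Back-of-the-envelope check of the totals above. The raw numbers come from
    # the table; the 0.75 effective-throughput factor applied to hyper-threaded
    # core counts is purely an illustrative assumption, since the real benefit
    # of HT is hard to judge.
    ht_counted = 28800 + 20952       # ATLAS Sim@P1 + CERN IT (logical, HT cores)
    non_ht_counted = 13000           # CMS cloud (counted without HT)

    raw_total = ht_counted + non_ht_counted
    effective = 0.75 * ht_counted + non_ht_counted

    print(raw_total)        # 62752 -> "over 60,000 cores"
    print(int(effective))   # 50314 -> roughly the 50,000-core effective estimate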

While the CERN IT cloud is currently only around 20,000 cores, we are installing around 100 hypervisors a week in the two data centres in Geneva and Budapest, so we expect the number of cores here to grow significantly over the next 18 months as we aim for 300,000 cores in 2015.

