LHC Tunnel

Tuesday, 27 January 2015

Exceeding tracked connections

As we increase the capacity of the CERN OpenStack cloud, we've noticed a few cases of an interesting problem where hypervisors lose network connectivity. These hypervisors are KVM-based, running Scientific Linux CERN 6. The cloud itself runs OpenStack Icehouse with nova-network.


Connection tracking refers to the ability to maintain state information about a connection in memory tables, such as source and destination IP address and port number pairs (known as socket pairs), protocol types, connection state and timeouts. Firewalls that do this are known as stateful. Stateful firewalling is inherently more secure than its "stateless" counterpart, simple packet filtering.
More details are available at [1].

On busy hypervisors, we see messages in the syslog file such as

Jan  4 23:14:44 hypervisor kernel: nf_conntrack: table full, dropping packet.

Searching around the internet, we found references to a number of documents [2][3] discussing the limit.

It appears that the default algorithm is pretty simple. For a 64-bit hypervisor,
  • If RAM < 1 GB, the maximum conntrack is set to RAM in bytes  / 32768
  • Otherwise, set to 65536
Our typical hypervisors have 48 GB of memory and 24 cores, so a busy server handling distributed physics data access can easily use thousands of connections, especially if sockets are not being closed correctly. With several instances of these servers on a hypervisor, it is easy to reach the 65536 limit and start dropping new connections.
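
As a quick check, the hash table size the kernel actually allocated can be inspected on a running hypervisor (on many kernels the conntrack maximum defaults to a small multiple of this bucket count):

# cat /sys/module/nf_conntrack/parameters/hashsize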

To keep an eye on the usage, the current and maximum values can be checked using sysctl.

The current usage can be checked using

# sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 6650

The maximum value can be found as follows

# sysctl net.netfilter.nf_conntrack_max
net.netfilter.nf_conntrack_max = 65536
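
To automate this check, a small script can compare the two values; the following is a minimal sketch, where the 80% threshold and the syslog destination are our choices:

#!/bin/sh
# Warn via syslog when the conntrack table is more than 80% full
count=$(sysctl -n net.netfilter.nf_conntrack_count)
max=$(sysctl -n net.netfilter.nf_conntrack_max)
if [ $((count * 100 / max)) -ge 80 ]; then
    logger -p daemon.warning "conntrack table at ${count}/${max} entries"
fi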

To avoid this overload, the conntrack limit on the hypervisor can be increased; it should be set to at least the sum of the connections expected from the virtual machines it hosts. This can be done using the /etc/sysctl.d directory or with an appropriate configuration management tool.

Note that you'll need to set both net.netfilter.nf_conntrack_max and net.nf_conntrack_max. For the CERN OpenStack cloud we have increased the values from 64k to 512k.
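
As an example, a drop-in file along these lines does the job (the file name is our choice; 524288 is the 512k value mentioned above):

# /etc/sysctl.d/91-conntrack.conf
net.netfilter.nf_conntrack_max = 524288
net.nf_conntrack_max = 524288

The new values can be applied immediately with sysctl -p /etc/sysctl.d/91-conntrack.conf.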


Thursday, 8 January 2015

Using bitnami images with OpenStack

At CERN, we generally use Puppet to configure our production services, with modules from Puppet Forge to quickly set up the appropriate parameters and services.

However, it is often interesting to try out a new software package for a quick investigation. In the past, people have used Bitnami for this on test systems or their laptops, installing the operating system first and then the Bitnami application packages.

With an OpenStack cloud, Bitnami configurations can be deployed even more quickly. We are running OpenStack Icehouse with KVM and Hyper-V hypervisors.

The steps are as follows
  • Download the cloud image from Bitnami
  • Load the image into Glance
  • Deploy the image
  • Check the console for messages using Horizon
  • Use the application!
Since the operating system comes with the image, this also avoids issues with prerequisites or unexpected configurations.

Getting the images from Bitnami

Bitnami provides installers that can be run on an existing operating system, as well as cloud images that include the appropriate operating system within the virtual machine image.

A number of public clouds are supported, such as Amazon and Azure, along with private cloud images for VMware and VirtualBox. For this example, we use the VirtualBox images as they come as a single image file.

A wide variety of appliances is available. For this case, we show the use of WordPress (of course :-). The WordPress virtual machine private cloud list gives download links. The download is a zip file with the appliance image of Ubuntu, the middleware and WordPress, containing

  • An OVF metadata file
  • A VMDK disk image
We only use the VMDK file.
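
For reference, extracting the download looks like this (the zip file name is illustrative; the VMDK name matches the one used below):

unzip bitnami-wordpress-4.1-0-ubuntu-14.04-OVF.zip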

Loading the VMDK into Glance

A VMDK file is a disk image, like a QEMU/KVM qcow2 one. KVM also supports VMDK, so it is possible to load it directly into Glance. Alternatively, it can be converted to qcow2 using qemu-img if needed, as shown below.
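
If the conversion is preferred, it looks like this (the output file name is our choice):

qemu-img convert -f vmdk -O qcow2 bitnami-wordpress-4.1-0-ubuntu-14.04-OVF-disk1.vmdk wordpress-bitnami.qcow2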

glance image-create --name wordpress-bitnami-vmdk --file bitnami-wordpress-4.1-0-ubuntu-14.04-OVF-disk1.vmdk --disk-format vmdk --container-format bare

This creates the entry in Glance so that new VMs can be created from the image.
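
The upload can be checked before booting with the standard client command:

glance image-show wordpress-bitnami-vmdk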

Creating a new VM

The VM can then be instantiated from this image. 

nova boot --flavor m1.medium --image wordpress-bitnami-vmdk --key-name keypair hostname

The keypair and hostname text should be replaced by your preferred key pair name and VM host name.
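
Boot progress can also be followed from the command line, again using the placeholder host name; nova console-log prints the VM's console output:

nova list
nova console-log hostname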

Check console

Using the graphical console in Horizon, you can see the details of the deployment. Since ssh is not enabled by default, you can't log in over the network at this point. Once booted, the console shows a screen which gives instructions on how to access the application.
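
If you prefer the command line to Horizon, the URL for the same console can be retrieved with nova (novnc is the usual console type for KVM):

nova get-vnc-console hostname novnc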


If you wish to log in to the Linux shell, check the account details at the application page on Bitnami.

Use the application

The application can be accessed using the web URL shown on the console. With a few simple OpenStack and web commands, this gives you a working application instance to investigate in a few minutes.
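
A quick check that the web server is answering can be done with curl (replace the address placeholder with the one shown on the console):

curl -sI http://<vm-ip-address>/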


Note: it is important that the application is kept up to date. Follow the Bitnami updates and make sure the server is appropriately secured.