CERN IT operates a 3 PB Ceph cluster, and one of our use cases is storing our OpenStack volumes and images. For more details on the Ceph cluster, Dan van der Ster's presentation is available at the following link.
After the migration to Havana, we started to provide the volume service to a wider audience and therefore needed to tune our Cinder configuration.
This post shows how we enabled multi-backend support in Cinder, how we migrated our existing Ceph volumes to the new volume type, and finally how we enabled quality of service on the newly created type.
We already had Ceph configured as the default backend:
[DEFAULT]
...
quota_volumes=0
quota_snapshots=0
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_user=volumes
rbd_pool=volumes
rbd_secret_uuid=00000000-1111-2222-3333-000000000001
We added the following options to /etc/cinder/cinder.conf to enable multi-backend support:
[DEFAULT]
...
enabled_backends=standard
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
default_volume_type=standard
[standard]
volume_group=standard
rbd_user=volumes
rbd_pool=volumes
volume_backend_name=standard
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=00000000-1111-2222-3333-000000000001
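Each backend gets its own configuration section, named after its entry in enabled_backends. Additional backends can later be added by extending that list; a hypothetical second backend (the [highiops] name and its pool are made up for illustration) would look like:
[DEFAULT]
...
enabled_backends=standard,highiops
[highiops]
rbd_user=volumes
rbd_pool=highiops
volume_backend_name=highiops
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid=00000000-1111-2222-3333-000000000001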
Create the volume type:
# cinder type-create standard
# cinder type-key standard set volume_backend_name=standard
To verify that the type has been created, you can run:
# cinder extra-specs-list
+--------------------------------------+----------+---------------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+----------+---------------------------------------+
| c6ad034a-5d97-443b-97c6-58a8744bf99b | standard | {u'volume_backend_name': u'standard'} |
+--------------------------------------+----------+---------------------------------------+
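To check that the scheduler routes requests to the new backend, you can create a test volume of the new type (the volume name here is just an example):
# cinder create --volume-type standard --display-name test-standard 1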
A restart is needed after these changes:
# for i in volume api scheduler; do service openstack-cinder-$i restart; done
At this point it is no longer possible to attach or detach the volumes that were created without a type (i.e. before this change).
Nova will fail with the following error:
nova.openstack.common.notifier.rpc_notifier ValueError: Circular reference detected
To fix this, we had to update the database manually in two steps.
First, back up the database (don't forget!): mysqldump cinder > cinder_backup.sql
- Update the volume_type_id column with the ID from cinder extra-specs-list:
update volumes set volume_type_id="c6ad034a-5d97-443b-97c6-58a8744bf99b" where volume_type_id is NULL;
- For each controller the host column needs an update:
update volumes set host='p01@standard' where host='p01';
update volumes set host='p02@standard' where host='p02';
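As a sanity check, you can confirm that no volume was left without a type:
select count(*) from volumes where volume_type_id is NULL;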
All volumes are now of type standard and can be operated as usual. Be sure to have "default_volume_type" defined in your cinder.conf; otherwise it will default to 'None' and such volumes will not be functional.
The last step is to delete the old volume settings from your DEFAULT section.
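After the cleanup, the DEFAULT section should roughly look like this (a sketch based only on the options shown above):
[DEFAULT]
...
quota_volumes=0
quota_snapshots=0
enabled_backends=standard
scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
default_volume_type=standard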
Since we have a large number of disks in the Ceph store, there is a very high potential IOPS capacity, but we want to be sure that individual VMs cannot monopolise it. This involves enabling the quality-of-service features in Cinder.
Enabling QoS is straightforward:
# cinder qos-create standard-iops consumer="front-end" read_iops_sec=400 write_iops_sec=200
# cinder qos-associate 10a7b93c-38d7-4061-bfb8-78d01e2fe6d8 c6ad034a-5d97-443b-97c6-58a8744bf99b
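Here the first UUID is the QoS spec ID returned by qos-create and the second is the volume type ID from cinder extra-specs-list. Assuming your cinderclient version ships it, the association can be verified with:
# cinder qos-get-association 10a7b93c-38d7-4061-bfb8-78d01e2fe6d8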
If you want to add additional limits:
# cinder qos-key 10a7b93c-38d7-4061-bfb8-78d01e2fe6d8 set read_bytes_sec=80000000
# cinder qos-key 10a7b93c-38d7-4061-bfb8-78d01e2fe6d8 set write_bytes_sec=40000000
# cinder qos-list
For the QoS parameters to take effect on an existing volume, you need to detach and reattach it.
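For example, with Nova (the instance and volume IDs below are placeholders):
# nova volume-detach <instance-id> <volume-id>
# nova volume-attach <instance-id> <volume-id> auto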
This information was collected by the CERN cloud team, and a big thank you goes to the Ceph team for their help.