# Appendix: Ceph and OpenStack

Ceph stripes block device images as objects across a cluster, which provides better
performance than a standalone server. OpenStack can use Ceph Block Devices through
`libvirt`, which configures the QEMU interface to `librbd`.

To use Ceph Block Devices with OpenStack, you must install QEMU, `libvirt`, and
OpenStack first. It's recommended to use a separate physical node for your OpenStack
installation. OpenStack recommends a minimum of 8 GB of RAM and a quad-core processor.

Three parts of OpenStack integrate with Ceph's block devices:

- Images: OpenStack Glance manages images for VMs. Images are immutable. OpenStack
  treats images as binary blobs and downloads them accordingly.
- Volumes: Volumes are block devices. OpenStack uses volumes to boot VMs, or to
  attach volumes to running VMs. OpenStack manages volumes using Cinder services.
- Guest Disks: Guest disks are guest operating system disks. By default, when you
  boot a virtual machine, its disk appears as a file on the filesystem of the
  hypervisor (usually under `/var/lib/nova/instances/<uuid>/`). Prior to OpenStack
  Havana, the only way to boot a VM in Ceph was to use the boot-from-volume
  functionality of Cinder. Now it is possible to boot every virtual machine directly
  inside Ceph without using Cinder. This is handy because it makes it easy to perform
  maintenance operations with the live-migration process. It is also convenient when
  a hypervisor dies: you can trigger Nova evacuate and run the virtual machine
  somewhere else almost seamlessly.

You can use OpenStack Glance to store images in a Ceph Block Device, and you can
use Cinder to boot a VM using a copy-on-write clone of an image.

## Create a pool

By default, Ceph block devices use the `rbd` pool. You may use any available pool.
We recommend creating a pool for Cinder, a pool for Glance, and a pool for Cinder
Backup. Ensure your Ceph cluster is running, then create the pools.

````
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create backups 128
````
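
To verify that the pools exist, you can list them afterwards. Note that `128` above
is only an example placement-group count; adjust it to the size of your cluster.

````
ceph osd lspools
````
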
## Configure OpenStack Ceph Clients

The nodes running `glance-api`, `cinder-volume`, `nova-compute`, and `cinder-backup`
act as Ceph clients. Each requires the `ceph.conf` file:

````
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
````

On the `glance-api` node, you'll need the Python bindings for `librbd`:

````
sudo apt-get install python-ceph   # Debian/Ubuntu
sudo yum install python-ceph       # Red Hat/CentOS
````

On the `nova-compute`, `cinder-backup`, and `cinder-volume` nodes, install both the
Python bindings and the client command-line tools:

````
sudo apt-get install ceph-common   # Debian/Ubuntu
sudo yum install ceph              # Red Hat/CentOS
````

If you have cephx authentication enabled, create a new user for Nova/Cinder and
Glance. Execute the following:

````
ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'
````
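
You can inspect the users and capabilities that were just created, for example for
`client.cinder`; the output should show the caps from the command above:

````
ceph auth get client.cinder
````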

Add the keyrings for `client.cinder`, `client.glance`, and `client.cinder-backup`
to the appropriate nodes and change their ownership:

````
ceph auth get-or-create client.glance | ssh {your-glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {your-glance-api-server} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {your-cinder-volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {your-cinder-volume-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {your-cinder-backup-server} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {your-cinder-backup-server} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
````

Nodes running `nova-compute` need the keyring file for the `nova-compute` process.
They also need to store the secret key of the `client.cinder` user in `libvirt`. The
`libvirt` process needs it to access the cluster while attaching a block device
from Cinder.

Create a temporary copy of the secret key on the nodes running `nova-compute`:

````
ceph auth get-key client.cinder | ssh {your-compute-node} tee client.cinder.key
````

Then, on the compute nodes, add the secret key to `libvirt` and remove the
temporary copy of the key:

````
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
````
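
If you want to confirm that `libvirt` stored the secret correctly, you can list the
defined secrets and read the value back (shown here with the example UUID from above):

````
sudo virsh secret-list
sudo virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337
````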

Save the UUID of the secret for configuring `nova-compute` later.

**Important** You don't necessarily need the same UUID on all the compute nodes.
However, from a platform consistency perspective, it's better to keep the same UUID.

## Configure OpenStack to use Ceph
### Glance

Glance can use multiple back ends to store images. To use Ceph block devices
by default, edit `/etc/glance/glance-api.conf` and add:

````
default_store=rbd
rbd_store_user=glance
rbd_store_pool=images
````
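
As a quick check (an illustrative example, not part of the procedure above), once
`glance-api` has been restarted you can upload an image and verify that it appears in
the `images` pool. A raw image works best, since QCOW2 images cannot be cloned
copy-on-write; `cirros.raw` is just a placeholder file name:

````
glance image-create --name cirros --disk-format raw --container-format bare --file cirros.raw
rbd ls --pool images --id glance
````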

If you want to enable copy-on-write cloning of images into volumes, also add:

````
show_image_direct_url=True
````

Note that this exposes the back-end location via Glance's API, so the endpoint
with this option enabled should not be publicly accessible.

### Cinder

OpenStack requires a driver to interact with Ceph block devices. You must also
specify the pool name for the block device. On your OpenStack node, edit
`/etc/cinder/cinder.conf` and add:

````
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
glance_api_version=2
````

If you're using cephx authentication, also configure the user and the UUID of the
secret you added to `libvirt` as documented earlier:

````
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
````
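
Once `cinder-volume` has been restarted with this configuration (see the Restart
OpenStack section below), a simple check is to create a small volume and verify that
it shows up as an RBD image in the `volumes` pool; `test-volume` is just an example name:

````
cinder create --display-name test-volume 1
rbd ls --pool volumes --id cinder
````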

### Cinder Backup

OpenStack Cinder Backup requires a specific daemon (`cinder-backup`), so don't forget
to install it. On your Cinder Backup node, edit `/etc/cinder/cinder.conf` and add:

````
backup_driver=cinder.backup.drivers.ceph
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
restore_discard_excess_bytes=true
````
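
As an illustrative check once the `cinder-backup` service is running, back up an
existing volume and confirm that data appears in the `backups` pool; `{volume-id}`
stands for one of your volume IDs:

````
cinder backup-create {volume-id}
rbd ls --pool backups --id cinder-backup
````
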
### Nova

In order to boot all the virtual machines directly into Ceph, Nova must be
configured. On every Compute node, edit `/etc/nova/nova.conf` and add:

````
libvirt_images_type=rbd
libvirt_images_rbd_pool=volumes
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
````

It is also good practice to disable any file injection. Usually, while booting an
instance, Nova attempts to open the rootfs of the virtual machine and then injects
things like passwords and SSH keys directly into the filesystem. It is better to
rely on the metadata service and cloud-init instead. On every Compute node, edit
`/etc/nova/nova.conf` and add:

````
libvirt_inject_password=false
libvirt_inject_key=false
libvirt_inject_partition=-2
````

## Restart OpenStack

To activate the Ceph block device driver and load the block device pool name
into the configuration, you must restart the relevant OpenStack services:

````
sudo glance-control api restart
sudo service nova-compute restart
sudo service cinder-volume restart
sudo service cinder-backup restart
````

Once OpenStack is up and running, you should be able to create a volume
and boot from it.
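
For example (a sketch only; the flavor, image ID, and names below are placeholders
you must adapt), create a bootable volume from a Glance image and start an instance
from it:

````
cinder create --image-id {image-id} --display-name boot-volume 10
nova boot --flavor m1.small --block-device-mapping vda={volume-id}:::0 ceph-test-vm
````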