The process for the other components looks similar.
A snapshot is a read-only copy of the state of an image at a particular point in time. One
of the advanced features of Ceph block devices is that you can create snapshots of the images
to retain a history of an image’s state. Ceph also supports snapshot layering, which allows
you to clone images (e.g., a VM image) quickly and easily. Ceph supports block device snapshots
using the `rbd` command and many higher level interfaces including OpenStack.
To create a snapshot with `rbd`, specify the `snap create` option, the pool name, the
image name and the snap name.
rbd --pool {pool-name} snap create --snap {snap-name} {image-name}
rbd snap create {pool-name}/{image-name}@{snap-name}
rbd --pool rbd snap create --snap snapname foo
rbd snap create rbd/foo@snapname
To roll back to a snapshot with `rbd`, specify the `snap rollback` option, the pool name, the
image name and the snap name.
rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}
rbd snap rollback {pool-name}/{image-name}@{snap-name}
rbd --pool rbd snap rollback --snap snapname foo
rbd snap rollback rbd/foo@snapname
**Note:** Rolling back an image to a snapshot means overwriting the current version of the image
with data from a snapshot. The time it takes to execute a rollback increases with the size of the
image. It is faster to clone from a snapshot than to roll back an image to a snapshot, and cloning
is the preferred method of returning to a pre-existing state.
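A minimal sketch of that clone workflow, reusing the pool, image and snapshot names from the
examples above (`foo-clone` is an illustrative name; note that on older Ceph releases cloning
requires the image to have been created as format 2, and the parent snapshot must be protected
before it can be cloned):

    rbd snap protect rbd/foo@snapname
    rbd clone rbd/foo@snapname rbd/foo-clone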
Taking snapshots guards against data loss, but snapshots also cost disk space. To clean up
older ones you can list the snapshots of an image, delete individual snapshots, or purge all
of them.
To list snapshots of an image, specify the pool name and the image name.
rbd --pool {pool-name} snap ls {image-name}
rbd snap ls {pool-name}/{image-name}
rbd --pool rbd snap ls foo
rbd snap ls rbd/foo
To delete a snapshot with `rbd`, specify the `snap rm` option, the pool name, the image name
and the snap name.
rbd --pool {pool-name} snap rm --snap {snap-name} {image-name}
rbd snap rm {pool-name}/{image-name}@{snap-name}
rbd --pool rbd snap rm --snap snapname foo
rbd snap rm rbd/foo@snapname
**Note:** Ceph OSDs delete data asynchronously, so deleting a snapshot doesn’t free up the
disk space immediately.
To delete all snapshots for an image with `rbd`, specify the `snap purge` option and the
image name.
rbd --pool {pool-name} snap purge {image-name}
rbd snap purge {pool-name}/{image-name}
rbd --pool rbd snap purge foo
rbd snap purge rbd/foo
## Upgrades and Patching
### Using Ceph for storage
**TODO(mue)** Compare standard procedure to our environment.
As documented in the *OpenStack Installation Guide* we're using Ceph as the block device
backend for Cinder. Ceph stripes block device images as objects across the cluster, which
provides better performance than a typical standalone server, satisfies our scalability and
redundancy needs, and lets Cinder's RBD driver create, export and connect volumes to
instances. This assumes a functioning Ceph cluster has already been deployed using the
official Ceph charm as described in the document mentioned above. Here our
`openstack-config.yaml` contains the setting `block-device: None` for Cinder, so that Cinder
doesn't use a local device.
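A minimal sketch of the relevant excerpt (the real `openstack-config.yaml` contains further
services and options; only the Cinder stanza is shown here):

    cinder:
      # Don't allocate a local block device; volumes will be backed by
      # Ceph via Cinder's RBD driver once the relation is added.
      block-device: None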
The next step is to add the relations between Cinder and Ceph as well as between Cinder
and the other needed services.
juju add-relation cinder ceph
juju add-relation cinder keystone
juju add-relation cinder mysql
juju add-relation cinder rabbitmq-server
juju add-relation cinder nova-cloud-controller
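Once the relations are added it can take a moment for the services to settle. A simple way
to watch the progress (a suggestion, not part of the original procedure) is:

    $ juju status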
**TODO(mue)** Description of how to tell MAAS to provide nodes for Ceph and to
explicitly choose them by their specs is missing.
**TODO(mue)** Hmm, text below seems not to be needed. Still useful in a different way?
Ceph stripes block device images as objects across a cluster. This way it provides
better performance than a standalone server. OpenStack is able to use Ceph Block Devices.
Once OpenStack is up and running, you should be able to create a volume
and boot from it.
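A sketch of such a smoke test with the command line clients of that era (the volume name,
size, flavor and instance name are illustrative; `<volume-id>` is the ID that `cinder list`
reports for the new volume):

    # Create a 10 GB volume; it is backed by Ceph through the RBD driver.
    $ cinder create --display-name test-volume 10

    # Boot an instance from that volume.
    $ nova boot --flavor m1.small --block-device-mapping vda=<volume-id>:::0 test-instance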
#### Adding Ceph nodes
The addition of Ceph nodes is done using the Juju `add-unit` command. By default
it adds only one node, but it is possible to pass the number of wanted nodes as an argument:
$ juju add-unit ceph -n 10
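After the new units have started, one way to verify that they joined the cluster (a
suggestion, not taken from the charm README) is to query the cluster status from an
existing unit:

    $ juju ssh ceph/0 "sudo ceph status"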
**TODO(mue)** How are the added nodes integrated with Cinder?
**SEE ALSO** https://jujucharms.com/precise/ceph-22/#readme
### Adding Nova instances