ceph-authtool /dev/stdout --name=mon. --gen-key
These two pieces of configuration must NOT be changed post bootstrap; attempting
to do this will cause a reconfiguration error and new service units will not join
the existing ceph cluster.
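
For instance, a deployment configuration pinning both values might look like
this (option names are assumed from the charm's config; the values are
placeholders, with the fsid coming from uuidgen and the secret from the
ceph-authtool command above):

```yaml
ceph:
  fsid: a1b2c3d4-0000-0000-0000-000000000000   # placeholder; generate with uuidgen
  monitor-secret: <output of the ceph-authtool command above>
```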
The charm also supports the specification of storage devices to be used in the
A list of devices that the charm will attempt to detect, initialise and
This charm uses the new-style Ceph deployment as reverse-engineered from the
Chef cookbook at https://github.com/ceph/ceph-cookbooks, although we selected
a different strategy to form the monitor cluster. Since we don't know the
names *or* addresses of the machines in advance, we use the _relation-joined_
hook to wait for all three nodes to come up, and then write their addresses
to ceph.conf in the "mon host" parameter. After we initialize the monitor
cluster, a quorum forms quickly, and OSD bringup proceeds.
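
The wait-then-write behaviour can be sketched as follows (an illustrative
sketch, not the charm's actual hook code; the function name and structure
are invented for clarity):

```python
# The relation-joined handler effectively does nothing until all three
# monitor addresses are known, then renders the "mon host" line that is
# written into ceph.conf.
MON_COUNT = 3  # bootstrap waits for three monitor nodes

def mon_hosts(peer_addrs):
    """Return the ceph.conf "mon host" line, or None while peers are missing."""
    if len(peer_addrs) < MON_COUNT:
        return None  # hook simply exits; it is retried on the next relation event
    return "mon host = " + " ".join(sorted(peer_addrs))

print(mon_hosts(["10.0.0.1", "10.0.0.2"]))              # still waiting: None
print(mon_hosts(["10.0.0.1", "10.0.0.2", "10.0.0.3"]))  # full monitor list
```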
The OSDs use so-called "OSD hotplugging". **ceph-disk-prepare** is used to
82
create the filesystems with a special GPT partition type. *udev* is set up
83
to mount such filesystems and start the osd daemons as their storage becomes
84
visible to the system (or after "udevadm trigger").
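
Conceptually, the udev half of this is a rule keyed on Ceph's GPT partition
type GUID that activates the OSD when the device appears (an illustrative
sketch, not the exact rule the Ceph packages ship):

```
# Illustrative udev rule: activate an OSD when a partition with the
# Ceph OSD data GPT type GUID becomes visible.
ACTION=="add", SUBSYSTEM=="block", \
  ENV{ID_PART_ENTRY_TYPE}=="4fbd7e29-9d25-41b8-afd0-062c0ceff05d", \
  RUN+="/usr/sbin/ceph-disk-activate /dev/$name"
```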
The Chef cookbook mentioned above performs some extra steps to generate an OSD
bootstrapping key and propagate it to the other nodes in the cluster. Since
all OSDs run on nodes that also run mon, we don't need this and did not