Ceph is a distributed network storage and file system with distributed
metadata management and POSIX semantics.

RADOS is a reliable object store, used by Ceph, but also directly
accessible.

``radosgw`` is an S3-compatible RESTful HTTP service for object
storage, using RADOS storage.

RBD is a Linux kernel feature that exposes RADOS storage as a block
device. Qemu/KVM also has a direct RBD client that avoids the kernel
overhead.

.. index:: monitor, ceph-mon

Monitor cluster
===============

``ceph-mon`` is a lightweight daemon that provides a consensus for
distributed decision-making in a Ceph/RADOS cluster.

It is also the initial point of contact for new clients, and will hand
out information about the topology of the cluster, such as the
``ceph-osd`` processes in it.

You normally run 3 ``ceph-mon`` daemons, on 3 separate physical machines,
isolated from each other; for example, in different racks or rows.

You could run just 1 instance, but that means giving up on high
availability.

You may use the same hosts for ``ceph-mon`` and other purposes.

``ceph-mon`` processes talk to each other using a Paxos_\-style
protocol. They discover each other via the ``[mon.X] mon addr`` fields
in ``ceph.conf``.
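
As a minimal sketch (the monitor names ``alpha``, ``beta``, ``gamma``
and the addresses are made-up placeholders), the corresponding
``ceph.conf`` entries for a 3-monitor cluster could look like::

    [mon.alpha]
        mon addr = 192.168.0.10:6789

    [mon.beta]
        mon addr = 192.168.0.11:6789

    [mon.gamma]
        mon addr = 192.168.0.12:6789
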
.. todo:: What about ``monmap``? Fact check.

Any decision requires the majority of the ``ceph-mon`` processes to be
healthy and communicating with each other. For this reason, you never
want an even number of ``ceph-mon``\s; there is no unambiguous majority
subgroup for an even number. For example, with 3 monitors a majority is
2, so the cluster tolerates 1 failure; with 4 monitors a majority is 3,
so still only 1 failure is tolerated.

.. _Paxos: http://en.wikipedia.org/wiki/Paxos_algorithm

.. todo:: explain monmap

.. index:: RADOS, OSD, ceph-osd, object

Object storage
==============

``ceph-osd`` is the storage daemon that provides the RADOS service. It
uses ``ceph-mon`` for cluster membership, services object
read/write/etc. requests from clients, and peers with other
``ceph-osd``\s for data replication.

The data model is fairly simple at this level. There are multiple
named pools, and within each pool there are named objects, in a flat
namespace (no directories). Each object has both data and metadata.

The data for an object is a single, potentially big, series of
bytes. Additionally, the series may be sparse: it may have holes that
contain binary zeros and take up no actual storage.

The metadata is an unordered set of key-value pairs. Its semantics
are completely up to the client; for example, the Ceph filesystem uses
metadata to store file owner etc.
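
For illustration, here is a minimal sketch of this data model using the
``rados`` Python binding of librados (the pool name ``data`` and the
object name are placeholders, and the pool is assumed to already
exist)::

    import rados

    # Connect to the cluster described in ceph.conf.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # Objects live in named pools; open an I/O context for one pool.
        ioctx = cluster.open_ioctx('data')
        try:
            # The object data is a single series of bytes.
            ioctx.write_full('greeting', b'hello world')
            # The metadata is key-value pairs attached to the object.
            ioctx.set_xattr('greeting', 'owner', b'alice')
            # Read both back.
            print(ioctx.read('greeting'))
            print(ioctx.get_xattr('greeting', 'owner'))
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()
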
.. todo:: Verify that metadata is unordered.

Underneath, ``ceph-osd`` stores the data on a local filesystem. We
recommend using Btrfs_, but any POSIX filesystem that has extended
attributes should work.

.. _Btrfs: http://en.wikipedia.org/wiki/Btrfs

.. todo:: write about access control

.. todo:: explain osdmap

.. todo:: explain plugins ("classes")

.. index:: Ceph filesystem, Ceph Distributed File System, MDS, ceph-mds

Ceph filesystem
===============

The Ceph filesystem service is provided by a daemon called
``ceph-mds``. It uses RADOS to store all the filesystem metadata
(directories, file ownership, access modes, etc.), and directs clients
to access RADOS directly for the file contents.

The Ceph filesystem aims for POSIX compatibility, except for a few
chosen differences. See :doc:`/appendix/differences-from-posix`.

``ceph-mds`` can run as a single process, or it can be distributed
across multiple physical machines, either for high availability or for
scalability.

For high availability, the extra ``ceph-mds`` instances can be `standby`,
ready to take over the duties of any failed ``ceph-mds`` that was
`active`. This is easy because all the data, including the journal, is
stored on RADOS. The transition is triggered automatically by
``ceph-mon``.

For scalability, multiple ``ceph-mds`` instances can be `active`, and they
will split the directory tree into subtrees (and shards of a single
busy directory), effectively balancing the load amongst all `active`
``ceph-mds``\es.

Combinations of `standby` and `active` etc. are possible, for example
running 3 `active` ``ceph-mds`` instances for scaling, and one `standby`.

To control the number of `active` ``ceph-mds``\es, see
:doc:`/ops/manage/grow/mds`.

.. topic:: Status as of 2011-09:

   Multiple `active` ``ceph-mds`` operation is stable under normal
   circumstances, but some failure scenarios may still cause
   problems.

.. todo:: document `standby-replay`

.. todo:: mds.0 vs mds.alpha etc. details

.. index:: RADOS Gateway, radosgw

RADOS Gateway
=============

``radosgw`` is a FastCGI service that provides a RESTful_ HTTP API to
store objects and metadata. It layers on top of RADOS with its own
data formats, and maintains its own user database, authentication,
access control, and so on.
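
Since the API is S3-compatible, generic S3 client libraries can talk to
it. Here is a minimal sketch using the Python ``boto`` library (the
endpoint hostname and the credentials are made-up placeholders for a
real ``radosgw`` user)::

    import boto
    import boto.s3.connection

    # Placeholder credentials and endpoint for a radosgw setup.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='radosgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
        )

    # Standard S3 operations, served by radosgw and backed by RADOS.
    bucket = conn.create_bucket('my-bucket')
    key = bucket.new_key('hello.txt')
    key.set_contents_from_string('Hello World!')
    print(key.get_contents_as_string())
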
.. _RESTful: http://en.wikipedia.org/wiki/RESTful

.. index:: RBD, Rados Block Device

Rados Block Device (RBD)
========================

In virtual machine scenarios, RBD is typically used via the ``rbd``
network storage driver in Qemu/KVM, where the host machine uses
``librbd`` to provide a block device service to the guest.

Alternatively, as no direct ``librbd`` support is available in Xen,
the Linux kernel can act as the RBD client and provide a real block
device on the host machine, which can then be accessed by the
virtualization. This is done with the command-line tool ``rbd`` (see
:doc:`/ops/rbd`).

The latter is also useful in non-virtualized scenarios.

Internally, RBD stripes the device image over multiple RADOS objects,
each typically located on a separate ``ceph-osd``, allowing it to perform
better than a single server could.
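
For illustration, here is a minimal sketch of programmatic access using
the ``rbd`` Python binding of ``librbd`` (the pool name ``rbd`` and the
image name are placeholders)::

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')
    try:
        # Create a 4 GiB image; librbd stripes it over RADOS objects.
        rbd.RBD().create(ioctx, 'myimage', 4 * 1024 ** 3)
        image = rbd.Image(ioctx, 'myimage')
        try:
            # Reads and writes address byte offsets within the image.
            image.write(b'hello', 0)
            print(image.read(0, 5))
        finally:
            image.close()
    finally:
        ioctx.close()
        cluster.shutdown()
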
.. todo:: cephfs, ceph-fuse, librados, libcephfs, librbd

.. todo:: Summarize how much Ceph trusts the client, for what parts (security vs reliability).

.. todo:: Example scenarios Ceph projects are/not suitable for