=======================================
 mkcephfs -- create a ceph file system
=======================================

.. program:: mkcephfs

Synopsis
========

| **mkcephfs** [ -c *ceph.conf* ] [ --mkbtrfs ] [ -a, --allhosts [ -k
  */path/to/admin.keyring* ] ]

Description
===========

**mkcephfs** is used to create an empty Ceph file system, possibly
spanning multiple hosts. The ceph.conf file describes the composition
of the entire Ceph cluster, including which hosts are participating,
which daemons run where, and which paths are used to store file system
data or metadata.

The mkcephfs tool can be used in two ways. If ``-a`` is used, it will
use ssh and scp to connect to remote hosts on your behalf and set up
the entire cluster. This is the easiest solution, but it can also be
inconvenient (if ssh is not set up to connect without prompting for
passwords) or slow (if you have a large cluster).
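
For example, a complete single-command setup run from the master node
might look like this (the keyring path here is illustrative; see the
``-k`` option below)::

        master# mkcephfs -a -c /etc/ceph/ceph.conf \
                -k /etc/ceph/admin.keyring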

Alternatively, you can run each setup phase manually. First, you need
to prepare a monmap that will be shared by each node::

        master# mkdir /tmp/foo
        master# mkcephfs -c /etc/ceph/ceph.conf \
                --prepare-monmap -d /tmp/foo

Share the ``/tmp/foo`` directory with other nodes in whatever way is
convenient for you. On each OSD and MDS node::

        osdnode# mkcephfs --init-local-daemons osd -d /tmp/foo
        mdsnode# mkcephfs --init-local-daemons mds -d /tmp/foo

Collect the contents of the /tmp/foo directories back onto a single
node, and then::

        master# mkcephfs --prepare-mon -d /tmp/foo

Finally, distribute ``/tmp/foo`` to all monitor nodes and, on each of
those hosts::

        monnode# mkcephfs --init-local-daemons mon -d /tmp/foo
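
At this point the file system is created but no daemons are running. A
quick way to bring the cluster up and check its status (assuming the
standard ``ceph`` init script is installed on each host) is::

        master# service ceph -a start
        master# ceph -s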

Options
=======

.. option:: -a, --allhosts

   Performs the necessary initialization steps on all hosts in the
   cluster, executing commands via SSH.

.. option:: -c ceph.conf, --conf=ceph.conf

   Use the given conf file instead of the default ``/etc/ceph/ceph.conf``.

.. option:: -k /path/to/keyring

   When ``-a`` is used, we can specify a location to copy the
   client.admin keyring, which is used to administer the cluster. The
   default is ``/etc/ceph/keyring`` (or whatever is specified in the
   config file).

.. option:: --mkbtrfs

   Create and mount any btrfs file systems specified in the
   ceph.conf for OSD data storage using mkfs.btrfs. The "btrfs devs"
   and (if it differs from "osd data") "btrfs path" options must be
   defined.

   **NOTE** Btrfs is still considered experimental. This option
   can ease some configuration pain, but the use of btrfs is not
   required when ``osd data`` directories are mounted manually by the
   administrator.

   **NOTE** This option is deprecated and will be removed in a future
   release.
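
   For example, a hypothetical per-OSD section in ceph.conf that would
   let ``--mkbtrfs`` create and mount the OSD store (the device and
   paths here are illustrative)::

        [osd.0]
                host = osdnode
                btrfs devs = /dev/sdb
                osd data = /srv/osd.0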

.. option:: --no-copy-conf

   By default, mkcephfs with ``-a`` will copy the new configuration to
   /etc/ceph/ceph.conf on each node in the cluster. This option
   disables that behavior.

Subcommands
===========

The sub-commands performed during cluster setup can be run
individually with the following options:

.. option:: --prepare-monmap -d dir -c ceph.conf

   Create an initial monmap with a random fsid/uuid and store it and
   the ceph.conf in dir.
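
   For example, the generated fsid can be inspected afterwards with
   :doc:`monmaptool <monmaptool>`\(8) (this assumes the monmap is
   written to a file named ``monmap`` inside dir)::

        master# monmaptool --print /tmp/foo/monmap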

.. option:: --init-local-daemons type -d dir

   Initialize any daemons of type *type* on the local host using the
   monmap in dir. For types osd and mds, the resulting authentication
   keys will be placed in dir. For type mon, the initial data files
   generated by --prepare-mon (below) are expected in dir.

.. option:: --prepare-mon -d dir

   Prepare the initial monitor data based on the monmap, OSD, and MDS
   authentication keys collected in dir, and put the result in dir.

Availability
============

**mkcephfs** is part of the Ceph distributed file system. Please refer
to the Ceph wiki at http://ceph.newdream.net/wiki for more
information.

See also
========

:doc:`ceph <ceph>`\(8),
:doc:`monmaptool <monmaptool>`\(8),
:doc:`osdmaptool <osdmaptool>`\(8),
:doc:`crushtool <crushtool>`\(8)