.. index:: control, commands

Monitor commands are issued using the ceph utility::

    $ ceph [-m monhost] command

where the command is usually (though not always) of the form::

    $ ceph subsystem command
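For instance, ``osd`` is a subsystem and ``stat`` is a command within it::

    $ ceph osd stat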
Cleanly shuts down the cluster. ::

Shows an overview of the current status of the cluster. ::

Shows a running summary of the status of the cluster, and major events.

Show the monitor quorum, including which monitors are participating and which
one is the leader. ::

    $ ceph [-m monhost] mon_status

Query the status of a single monitor, including whether or not it is in the
quorum.
    $ ceph auth add <osd> <--in-file|-i> <path-to-osd-keyring>

Add the auth keyring for an OSD. ::

Show the auth keys for the OSD subsystem.
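For example, assuming a keyring for ``osd.0`` has been written to
``/etc/ceph/osd.0.keyring`` (path illustrative), its key could be registered
with::

    $ ceph auth add osd.0 -i /etc/ceph/osd.0.keyring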
    $ ceph -- pg dump [--format <format>]

Output the stats of all PGs. Valid formats are "plain" and "json";
"plain" is the default. ::

    $ ceph -- pg dump_stuck inactive|unclean|stale [--format <format>] [-t|--threshold <seconds>]

Output the stats of all PGs stuck in the specified state.

``--format`` may be ``plain`` (default) or ``json``.

``--threshold`` defines how many seconds counts as "stuck" (default: 300).

**Inactive** PGs cannot process reads or writes because they are waiting for
an OSD with the most up-to-date data to come back.

**Unclean** PGs contain objects that are not replicated the desired number
of times. They should be recovering.

**Stale** PGs are in an unknown state - the OSDs that host them have not
reported to the monitor cluster in a while (configured by
``mon_osd_report_timeout``). ::

    $ ceph pg <pgid> mark_unfound_lost revert

Revert "lost" objects to their prior state: either roll them back to a
previous version or delete them if they were just created.
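For example, to list (in JSON) the PGs that have been stuck unclean for more
than ten minutes::

    $ ceph -- pg dump_stuck unclean --format json --threshold 600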
Query osd subsystem status. ::

    $ ceph osd getmap -o file

Write a copy of the most recent osd map to a file. See
:doc:`osdmaptool </man/8/osdmaptool>`. ::

    $ ceph osd getcrushmap -o file

Write a copy of the crush map from the most recent osd map to
file. This is functionally equivalent to ::

    $ ceph osd getmap -o /tmp/osdmap
    $ osdmaptool /tmp/osdmap --export-crush file
    $ ceph osd dump [--format <format>]

Dump the osd map. Valid formats for ``--format`` are "plain" and "json". If no
``--format`` option is given, the osd map is dumped as plain text. ::

    $ ceph osd tree [--format <format>]

Dump the osd map as a tree with one line per osd containing weight
and state. ::

    $ ceph osd crush add <id> <name> <weight> [<loc1> [<loc2> ...]]

Add a new item with the given id/name/weight at the specified
location. ::

    $ ceph osd crush remove <id>

Remove an existing item from the crush map. ::

    $ ceph osd crush reweight <name> <weight>

Set the weight of the item given by ``<name>`` to ``<weight>``. ::
    $ ceph osd cluster_snap <name>

Create a cluster snapshot. ::

    $ ceph osd lost [--yes-i-really-mean-it]

Mark an OSD as lost. This may result in permanent data loss. Use with caution. ::

    $ ceph osd create [<id>]

Create a new OSD. If no ID is given, a new ID is automatically selected. ::

    $ ceph osd rm [<id>...]

Remove the given OSD(s). ::
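For example, to remove two OSDs (ids illustrative) in one call::

    $ ceph osd rm 2 3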
Query the current max_osd parameter in the osd map. ::

    $ ceph osd setmap -i file

Import the given osd map. Note that this can be a bit dangerous,
since the osd map includes dynamic state about which OSDs are currently
up or down; only do this if you've just modified a (very) recent
copy of the map. ::

    $ ceph osd setcrushmap -i file

Import the given crush map. ::

Set the max_osd parameter in the osd map. This is necessary when
expanding the storage cluster. ::
Mark osdN out of the distribution (i.e. allocated no data). ::

Mark osdN in the distribution (i.e. allocated data). ::

List classes that are loaded in the ceph cluster. ::

Set or clear the pause flags in the OSD map. If set, no IO requests
will be sent to any OSD. Clearing the flags via unpause results in
resending pending requests. ::

    $ ceph osd reweight N W

Set the weight of osdN to W. Two OSDs with the same weight will receive
roughly the same number of I/O requests and store approximately the
same amount of data. ::
    $ ceph osd reweight-by-utilization [threshold]

Reweights all the OSDs by reducing the weight of OSDs which are
heavily overused. By default it will adjust the weights downward on
OSDs which have more than 120% of the average utilization, but you can
supply a different percentage as the threshold. ::

    $ ceph osd blacklist add ADDRESS[:source_port] [TIME]
    $ ceph osd blacklist rm ADDRESS[:source_port]

Adds/removes the address to/from the blacklist. When adding an address,
you can specify how long it should be blacklisted in seconds; otherwise,
it will default to 1 hour. A blacklisted address is prevented from
connecting to any OSD. Blacklisting is most often used to prevent a
laggy MDS from making bad changes to data on the OSDs.

These commands are mostly only useful for failure testing, as
blacklists are normally maintained automatically and shouldn't need
manual intervention. ::
    $ ceph osd pool mksnap POOL SNAPNAME
    $ ceph osd pool rmsnap POOL SNAPNAME

Creates/deletes a snapshot of a pool. ::

    $ ceph osd pool create POOL [pg_num [pgp_num]]
    $ ceph osd pool delete POOL
    $ ceph osd pool rename OLDNAME NEWNAME

Creates/deletes/renames a storage pool. ::

    $ ceph osd pool set POOL FIELD VALUE

Changes a pool setting. Valid fields are:

* ``size``: Sets the number of copies of data in the pool.
* ``crash_replay_interval``: The number of seconds to allow
  clients to replay acknowledged but uncommitted requests.
* ``pg_num``: The placement group number.
* ``pgp_num``: The effective number of placement groups to use when
  calculating placement.
* ``crush_ruleset``: The rule number to use for mapping placement.
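As an illustration (the pool name is made up), a pool could be created with
128 placement groups and then set to keep three copies of its data::

    $ ceph osd pool create mypool 128
    $ ceph osd pool set mypool size 3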
    $ ceph osd pool get POOL FIELD

Get the value of a pool setting. Valid fields are:

* ``pg_num``: See above.
* ``pgp_num``: See above.
* ``lpg_num``: The number of local PGs.
* ``lpgp_num``: The number used for placing the local PGs.
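For example, to read back the placement group count of a pool (pool name
illustrative)::

    $ ceph osd pool get mypool pg_num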
Sends a scrub command to osdN. To send the command to all osds, use ``*``.
TODO: what does this actually do ::

Sends a repair command to osdN. To send the command to all osds, use ``*``.
TODO: what does this actually do
Change configuration parameters on a running mds. ::

    $ ceph mds tell <mds-id> injectargs '--<switch> <value> [--<switch> <value>]'

::

    $ ceph mds tell 0 injectargs '--debug_ms 1 --debug_mds 10'

Enables debug messages. ::

Displays the status of all metadata servers.

.. todo:: ``ceph mds`` subcommands missing docs: set_max_mds, dump, getmap, stop, setmap
    2011-12-14 10:40:59.044395 mon <- [mon,stat]
    2011-12-14 10:40:59.057111 mon.1 -> 'e3: 5 mons at {a=10.1.2.3:6789/0,b=10.1.2.4:6789/0,c=10.1.2.5:6789/0,d=10.1.2.6:6789/0,e=10.1.2.7:6789/0}, election epoch 16, quorum 0,1,2,3' (0)

The ``quorum`` list at the end lists monitor nodes that are part of the
current quorum.

This is also available more directly::

    $ ./ceph quorum_status
    2011-12-14 10:44:20.417705 mon <- [quorum_status]
    2011-12-14 10:44:20.431890 mon.0 -> '{ "election_epoch": 10,
      "monmap": { "epoch": 1,
        "fsid": "444b489c-4f16-4b75-83f0-cb8097468898",
        "modified": "2011-12-12 13:28:27.505520",
        "created": "2011-12-12 13:28:27.505520",
          "addr": "127.0.0.1:6789\/0"},
          "addr": "127.0.0.1:6790\/0"},
          "addr": "127.0.0.1:6791\/0"}]}}' (0)

The above will block until a quorum is reached.
For a status of just the monitor you connect to (use ``-m HOST:PORT`` to
select one)::

    2011-12-14 10:45:30.644414 mon <- [mon_status]
    2011-12-14 10:45:30.644632 mon.0 -> '{ "name": "a",
      "election_epoch": 10,
      "outside_quorum": [],
      "monmap": { "epoch": 1,
        "fsid": "444b489c-4f16-4b75-83f0-cb8097468898",
        "modified": "2011-12-12 13:28:27.505520",
        "created": "2011-12-12 13:28:27.505520",
          "addr": "127.0.0.1:6789\/0"},
          "addr": "127.0.0.1:6790\/0"},
          "addr": "127.0.0.1:6791\/0"}]}}' (0)
A dump of the monitor state::

    2011-12-14 10:43:08.015333 mon <- [mon,dump]
    2011-12-14 10:43:08.015567 mon.0 -> 'dumped monmap epoch 1' (0)
    fsid 444b489c-4f16-4b75-83f0-cb8097468898
    last_changed 2011-12-12 13:28:27.505520
    created 2011-12-12 13:28:27.505520
    0: 127.0.0.1:6789/0 mon.a
    1: 127.0.0.1:6790/0 mon.b
    2: 127.0.0.1:6791/0 mon.c