:man manual: crmsh documentation

== NAME

crm - Pacemaker command line interface for configuration and management

== SYNOPSIS

*crm* [-D output_type] [-f file] [-c cib] [-H hist_src] [-hFRDw] [--version] [args]

[[topics_Description,Program description]]
== Program description
Pacemaker configuration is stored in the CIB (Cluster
Information Base), a set of instructions coded in XML.
Editing the CIB is a challenge, not only due to its complexity
and the wide variety of options, but also because XML is more
computer friendly than user friendly. The `crm` shell alleviates
this issue significantly by introducing a small and simple
configuration language. The CIB is translated into this language
on the fly.

`crm` is also a management tool. For management tasks it relies
almost exclusively on other command line tools, such as
`crm_resource(8)` or `crm_attribute(8)`. Use of these programs
is, however, plagued by the notorious weakness common to all UNIX
tools: a multitude of options, necessary for operation and yet
very hard to remember. `crm` tries to present a consistent
interface to the user and to hide the arcane details.

It may be used either as an interactive shell or for single
commands directly on the shell's command line. It is also
possible to feed it a set of commands from standard input or a
file, thus turning it into a scripting tool. Templates with
ready-made configurations may help newcomers learn about the
cluster configuration or facilitate testing procedures.
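For instance, a command file can be fed to `crm` with the `-f` option described below (the file name is illustrative; the resource name appears in the examples later in this document):

...............
# cat script.cli
resource stop www_app
resource unmanage www_app
# crm -f script.cli
...............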
The `crm` shell is line oriented: every command must start and
finish on the same line. It is possible to use a continuation
character (`\`) to write one command across two or more lines. The
continuation character is commonly used when displaying
configurations.
== OPTIONS

*-f, --file*='FILE'::
    Load commands from the given file. If the file is `-`, then
    commands are read from standard input.

*-c, --cib*='CIB'::
    Start the session with the given shadow CIB file.
    Equivalent to `cib use`.

*-D, --display=*'OUTPUT_TYPE'::
    Choose one of the output options: `plain`, `color`, or
    `uppercase`. The default is `color` if the terminal emulation
    supports colors; otherwise, `plain` is used.

*-F, --force*::
    Make `crm` proceed with changes even though it would
    normally ask the user to confirm some of them. Mostly useful
    in scripts.

*-w, --wait*::
    Make `crm` wait for the cluster transition to finish (for the
    changes to take effect) after each processed line.

*-H, --history*='DIR|FILE'::
    The `history` commands can examine either the live cluster
    (default) or a report generated by `hb_report`. Use this
    option to specify a directory or file containing the report.

*--version*::
    Print crmsh version and build information (Mercurial Hg
    changeset hash).

*-R, --regression-tests*::
    Run in regression test mode. Used mainly by the
    regression testing suite.

*-d, --debug*::
    Print some debug information. Used by developers. [Not yet
    refined enough to print useful information for other users.]
[[topics_Introduction,Introduction to the user interface]]
== Introduction to the user interface

Arguably the most important aspect of `crm` is the user
interface. We begin with an informal introduction so that the
reader may become acquainted with it and get a general feeling for
the tool. It is probably best just to give some examples:
1. Command line (one-shot) use:

...............
# crm resource stop www_app
...............

2. Interactive use:

...............
# crm
crm(live)# resource
crm(live)resource# unmanage tetris_1
crm(live)resource# end
crm(live)# node standby node4
...............

3. Cluster configuration:
...............
primitive disk0 iscsi \
	params portal=192.168.2.108:3260 target=iqn.2008-07.com.suse:disk0
primitive fs0 Filesystem \
	params device=/dev/disk/by-label/disk0 directory=/disk0 fstype=ext3
primitive internal_ip IPaddr params ip=192.168.1.101
primitive apache apache \
	params configfile=/disk0/etc/apache2/site0.conf
primitive apcfence stonith:apcsmart \
	params ttydev=/dev/ttyS0 hostlist="node1 node2" \
primitive pingd pingd \
	params name=pingd dampen=5s multiplier=100 host_list="r1 r2"

# monitor apache and the UPS
monitor apache 60s:30s
monitor apcfence 120m:60s

group internal_www \
	disk0 fs0 internal_ip apache
clone fence apcfence \
	meta globally-unique=false clone-max=2 clone-node-max=1
	meta globally-unique=false clone-max=2 clone-node-max=1
location node_pref internal_www \
	rule 50: #uname eq node1 \
	rule pingd: defined pingd

property stonith-enabled=true
...............
If you have ever done a CRM-style configuration, you should be able
to understand the above examples without much difficulty. The
shell provides a means to manage the cluster efficiently and to
put together a configuration in a concise manner.

The `(live)` string in the prompt signifies that the current CIB
in use is the live cluster configuration. It is also possible to
work with so-called shadow CIBs, i.e. configurations which
are stored in files and are not active, but may be applied at any
time.

Since the CIB is hierarchical, so is the interface. There
are several levels, and entering each of them enables the user to
use a certain set of commands.
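For instance, entering a level makes its commands available, and `end` (or `up`) returns to the previous level. An illustrative session:

...............
# crm
crm(live)# node
crm(live)node# show
crm(live)node# end
crm(live)# quit
...............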
[[topics_Shadows,Shadow CIB usage]]
== Shadow CIB usage

A shadow CIB is a normal cluster configuration stored in a file.
Shadow CIBs may be manipulated in the same way as the _live_ CIB,
but the changes have no effect on the cluster resources. The
administrator may choose to apply any of them to the cluster,
thus replacing the running configuration with the one in
the shadow CIB. The `crm` prompt always contains the name of the
configuration which is currently in use, or the string _live_ if we
are using the current cluster configuration.

At the `configure` level, no changes take place before the `commit`
command. Sometimes, though, the administrator may start working
with the running configuration, but change their mind and, instead of
committing the changes to the cluster, save them to a shadow CIB.
This short `configure` session excerpt shows how:

...............
crm(live)configure# cib new test-2
INFO: test-2 shadow CIB created
crm(test-2)configure# commit
...............
[[topics_Templates,Configuration templates]]
== Configuration templates

Configuration templates are ready-made configurations created by
cluster experts. They are designed so that users
may generate valid cluster configurations with minimum effort.
If you are new to Pacemaker, templates may be the best way to
get started.

We will show here how to create a simple yet functional Apache
configuration:
...............
crm(live)configure# template
crm(live)configure template# list templates
apache filesystem virtual-ip
crm(live)configure template# new web <TAB><TAB>
apache filesystem virtual-ip
crm(live)configure template# new web apache
INFO: pulling in template apache
INFO: pulling in template virtual-ip
crm(live)configure template# list
web2-d web2 vip2 web3 vip web
...............
We enter the `template` level from `configure`. Use the `list`
command to show templates available on the system. The `new`
command creates a configuration from the `apache` template. You
can use tab completion to pick templates. Note that the apache
template depends on a virtual IP address, which is automatically
pulled in. The `list` command shows the just-created `web`
configuration, among other configurations (I hope that you,
unlike me, will use more sensible and descriptive names).

The `show` command, which displays the resulting configuration,
may be used to get an idea of the minimum required changes. All
`ERROR` messages show the line numbers at which the respective
parameters are to be defined:
...............
crm(live)configure template# show
ERROR: 23: required parameter ip not set
ERROR: 61: required parameter id not set
ERROR: 65: required parameter configfile not set
crm(live)configure template# edit
...............
The `edit` command invokes the preferred text editor with the
`web` configuration. At the top of the file, the user is advised
how to make changes. A good template should require the user
to specify only parameters. For example, the `web` configuration
we created above has the following required and optional
parameters (all parameter lines start with `%%`):

...............
$ grep -n ^%% ~/.crmconf/web
...............
These lines are the only ones that should be modified. Simply
append the parameter value at the end of the line. For instance,
after editing this template, the result could look like this (we
used tabs instead of spaces to make the values stand out):

...............
$ grep -n ^%% ~/.crmconf/web
23:%% ip 192.168.1.101
65:%% configfile /etc/apache2/httpd.conf
...............

As you can see, the parameter line format is very simple:

...............
%% <name> <value>
...............
After editing the file, use `show` again to display the
configuration:

...............
crm(live)configure template# show
primitive virtual-ip ocf:heartbeat:IPaddr \
	params ip="192.168.1.101"
primitive apache ocf:heartbeat:apache \
	params configfile="/etc/apache2/httpd.conf"
monitor apache 120s:60s
...............
The target resource of the apache template is a group, which we
named `websvc` in this sample session.

This configuration looks exactly as you could type it at the
`configure` level. The point of templates is to save you some
typing. It is important, however, to understand the configuration
produced.

Finally, the configuration may be applied to the current
crm configuration (note how the configuration changed slightly,
though it is still equivalent, after being digested at the
`configure` level):

...............
crm(live)configure template# apply
crm(live)configure template# cd ..
crm(live)configure# show
primitive apache ocf:heartbeat:apache \
	params configfile="/etc/apache2/httpd.conf" \
	op monitor interval="120s" timeout="60s"
primitive virtual-ip ocf:heartbeat:IPaddr \
	params ip="192.168.1.101"
group websvc apache virtual-ip
...............
Note that this still does not commit the configuration to the CIB
which is used in the shell, either the running one (`live`) or
some shadow CIB. For that you still need to execute the `commit`
command.

To complete our example, we should also define the preferred node
to run the service:

...............
crm(live)configure# location websvc-pref websvc 100: xen-b
...............
If you are not happy with some resource names which are provided
by default, you can rename them now:

...............
crm(live)configure# rename virtual-ip intranet-ip
crm(live)configure# show
primitive apache ocf:heartbeat:apache \
	params configfile="/etc/apache2/httpd.conf" \
	op monitor interval="120s" timeout="60s"
primitive intranet-ip ocf:heartbeat:IPaddr \
	params ip="192.168.1.101"
group websvc apache intranet-ip
location websvc-pref websvc 100: xen-b
...............
To summarize, working with templates typically consists of the
following steps:

- `new`: create a new configuration from templates
- `edit`: define parameters, at least the required ones
- `show`: see if the configuration is valid
- `apply`: apply the configuration to the `configure` level
[[topics_Testing,Resource testing]]
== Resource testing

The amount of detail in a cluster makes all configurations prone
to errors. By far the largest number of issues in a cluster is
due to bad resource configuration. The shell can help quickly
diagnose such problems, and considerably reduce your keyboard
wear.

Let's say that we entered the following configuration:
...............
primitive fencer stonith:external/libvirt \
	params hypervisor_uri="qemu+tcp://10.2.13.1/system" \
	hostlist="xen-b xen-c xen-d" \
	op monitor interval="2h"
primitive svc ocf:heartbeat:Xinetd \
	params service="systat" \
	op monitor interval="30s"
primitive intranet-ip ocf:heartbeat:IPaddr2 \
	params ip="10.2.13.100" \
	op monitor interval="30s"
primitive apache ocf:heartbeat:apache \
	params configfile="/etc/apache2/httpd.conf" \
	op monitor interval="120s" timeout="60s"
group websvc apache intranet-ip
location websvc-pref websvc 100: xen-b
...............
Before typing `commit` to submit the configuration to the CIB, we
can make sure that all resources are usable on all nodes:

...............
crm(live)configure# rsctest websvc svc fencer
...............

It is important that the resources being tested are not running on
any nodes; otherwise, the `rsctest` command will refuse to do
anything. Of course, if the current configuration resides in a
shadow CIB, then a `commit` is irrelevant. The point is that the
resources must not be running on any node.
.Note on stopping all resources
****************************
As an alternative to not committing a configuration, it is also
possible to tell Pacemaker not to start any resources:

...............
crm(live)configure# property stop-all-resources="yes"
...............

Almost none---resources of class stonith are still started. But
the shell is not as strict when it comes to stonith resources.
****************************
The order of resources is significant insofar as a resource depends
on all resources to its left. In most configurations, it's
probably practical to test resources in several runs, based on
their dependencies.

Apart from groups, `crm` does not interpret constraints and
therefore knows nothing about resource dependencies. It also
doesn't know whether a resource can run on a node at all, in the
case of an asymmetric cluster. It is up to the user to specify a
list of eligible nodes if a resource is not meant to run on every
node.
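For example, to restrict testing of the group from the configuration above to two of the nodes named in the fencer's hostlist, the node names may be listed after the resources (an illustrative sketch):

...............
crm(live)configure# rsctest websvc xen-b xen-c
...............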
[[topics_Completion,Tab completion]]
== Tab completion

The `crm` shell makes extensive use of tab completion. The
completion is both static (i.e. for `crm` commands) and dynamic.
The latter takes into account the current status of the cluster or
information from installed resource agents. Sometimes, completion
may also be used to get short help on resource parameters. Here
are a few examples:
...............
crm(live)resource# <TAB><TAB>
bye        failcount  move       restart    unmigrate
cd         help       param      show       unmove
cleanup    list       promote    start      up
demote     manage     quit       status     utilization
end        meta       refresh    stop
exit       migrate    reprobe    unmanage
crm(live)resource# end
crm(live)configure# primitive fence-1 <TAB><TAB>
heartbeat:  lsb:        ocf:        stonith:
crm(live)configure# primitive fence-1 stonith:<TAB><TAB>
apcmaster               external/ippower9258    fence_legacy
apcmastersnmp           external/kdumpcheck     ibmhmc
apcsmart                external/libvirt        ipmilan
baytech                 external/nut            meatware
bladehpi                external/rackpdu        null
cyclades                external/riloe          nw_rpc100s
drac3                   external/sbd            rcd_serial
external/drac5          external/ssh            rps10
external/dracmc-telnet  external/ssh-bad        ssh
external/hmchttp        external/ssh-slow       suicide
external/ibmrsa         external/vmware         wti_mpc
external/ibmrsa-telnet  external/xen0           wti_nps
external/ipmi           external/xen0-ha
crm(live)configure# primitive fence-1 stonith:ipmilan params <TAB><TAB>
auth=  hostname=  ipaddr=  login=  password=  port=  priv=
crm(live)configure# primitive fence-1 stonith:ipmilan params auth=<TAB><TAB>
The authorization type of the IPMI session ("none", "straight", "md2", or "md5")
crm(live)configure# primitive fence-1 stonith:ipmilan params auth=
...............
[[topics_Checks,Configuration semantic checks]]
== Configuration semantic checks

Resource definitions may be checked against the meta-data
provided with the resource agents. These checks currently
include:

- whether required parameters are set
- the existence of defined parameters
- timeout values for operations

The parameter checks are obvious and need no further explanation.
Failures in these checks are treated as configuration errors.

The timeouts for operations should be at least as long as those
recommended in the meta-data. Too-short timeout values are a
common mistake in cluster configurations and, even worse, they
often slip through if cluster testing was not thorough. Though
operation timeout issues are treated as warnings, make sure that
the timeouts are usable in your environment. Note also that the
values given are just an _advisory minimum_---your resources may
require longer timeouts.

Users may tune the frequency of checks and the treatment of errors
with the <<cmdhelp_options_check-frequency,`check-frequency`>> and
<<cmdhelp_options_check-mode,`check-mode`>> preferences.

Note that if `check-frequency` is set to `always` and
`check-mode` to `strict`, errors are not tolerated and such a
configuration cannot be saved.
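These preferences are set at the `options` level; for example (an illustrative session):

...............
crm(live)# options
crm(live)options# check-frequency always
crm(live)options# check-mode strict
...............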
[[topics_Security,Access Control Lists (ACL)]]
== Access Control Lists (ACL)

By default, users in the `haclient` group have full access
to the cluster (or, more precisely, to the CIB). Access control
lists allow for finer-grained access control to the cluster.

Access control lists consist of an ordered set of access rules.
Each rule allows read or write access or denies access
completely. Rules are typically combined to produce a specific
role. Users may then be assigned a role.
For instance, this is a role which defines a set of rules
allowing management of a single resource:

...............
role bigdb_admin \
	write meta:bigdb:target-role \
	write meta:bigdb:is-managed \
	write location:bigdb \
	read ref:bigdb
...............

The first two rules allow modifying the `target-role` and
`is-managed` meta attributes, which effectively enables users in
this role to stop/start and manage/unmanage the resource. The
constraints write access rule allows moving the resource around.
Finally, the user is granted read access to the resource
definition.

For proper operation of all Pacemaker programs, it is advisable
to add the following role to all users:
For finer-grained read access, try the rules listed in the
following role:

...............
	read node attribute:uname \
	read node attribute:type \
...............

It is, however, possible that some Pacemaker programs (e.g.
`ptest`) may not function correctly if the whole CIB is not
readable.
Some of the ACL rules in the examples above are expanded by the
shell to XPath specifications. For instance,
`meta:bigdb:target-role` is a shortcut for
`//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']`.
You can see the expansion by showing the XML:

...............
crm(live)configure# show xml bigdb_admin
<acl_role id="bigdb_admin">
	<write id="bigdb_admin-write"
		xpath="//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']"/>
...............

Many different XPath expressions can have equal meaning. For
instance, the following two are equivalent, but only the first one
is going to be recognized as a shortcut:

...............
//primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']
//resources/primitive[@id='bigdb']/meta_attributes/nvpair[@name='target-role']
...............

XPath is a powerful language, but you should try to keep your ACL
XPaths simple; the builtin shortcuts should be used whenever
possible.
[[topics_Reference,Command reference]]
== Command reference

We define a small and simple language. Most commands consist of
just a list of simple tokens. The only complex constructs are
found at the `configure` level.

The syntax is described in a somewhat informal manner: `<>`
denotes a string, `[]` means that the construct is optional, the
ellipsis (`...`) signifies that the previous construct may be
repeated, `|` means pick one of many, and the rest are literals.

[[cmdhelp_status,show cluster status]]
=== `status`

Show cluster status. The status is displayed by `crm_mon`. Supply
additional arguments for more information or a different format.
See `crm_mon(8)` for more details.

Usage:
...............
status [<option> ...]

option :: bynode | inactive | ops | timing | failcounts
...............
[[cmdhelp_cib,CIB shadow management]]
=== `cib` (shadow CIBs)

This level is for management of shadow CIBs. It is available both
at the top level and at the `configure` level.

All the commands are implemented using `cib_shadow(8)` and the
`CIB_shadow` environment variable. The user prompt always
includes the name of the currently active shadow or the live CIB.
[[cmdhelp_cib_new,create a new shadow CIB]]
==== `new`

Create a new shadow CIB. The live cluster configuration and
status is copied to the shadow CIB. Specify `withstatus` if you
want to edit the status section of the shadow CIB (see the
<<cmdhelp_cibstatus,cibstatus section>>). Add `force` to force
overwriting an existing shadow CIB.

To start with an empty configuration that is not copied from the live
CIB, specify the `empty` keyword. (This also allows a shadow CIB to be
created in case no cluster is running.)

Usage:
...............
new <cib> [withstatus] [force] [empty]
...............
[[cmdhelp_cib_delete,delete a shadow CIB]]
==== `delete`

Delete an existing shadow CIB.

[[cmdhelp_cib_reset,copy live cib to a shadow CIB]]
==== `reset`

Copy the current cluster configuration into the shadow CIB.

[[cmdhelp_cib_commit,copy a shadow CIB to the cluster]]
==== `commit`

Apply a shadow CIB to the cluster.

[[cmdhelp_cib_use,change working CIB]]
==== `use`

Choose a CIB source. If you want to edit the status of the
shadow CIB, specify `withstatus` (see <<cmdhelp_cibstatus,`cibstatus`>>).
Leave out the CIB name to switch to the running CIB.

Usage:
...............
use [<cib>] [withstatus]
...............
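For example, to switch to the `test-2` shadow created earlier and then back to the live CIB:

...............
crm(live)# cib use test-2
crm(test-2)# cib use
...............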
[[cmdhelp_cib_diff,diff between the shadow CIB and the live CIB]]
==== `diff`

Print differences between the current cluster configuration and
the active shadow CIB.

[[cmdhelp_cib_list,list all shadow CIBs]]
==== `list`

List existing shadow CIBs.
[[cmdhelp_cib_import,import a CIB or PE input file to a shadow]]
==== `import`

At times it may be useful to create a shadow file from an
existing CIB. The CIB may be specified as a file or as a PE input
file number. The shell looks up files in the local directory
first and then in the PE directory (typically `/var/lib/pengine`).
Once the CIB file is found, it is copied to a shadow and this
shadow is immediately available for use at both the `configure`
and `cibstatus` levels.

If the shadow name is omitted, then the target shadow is named
after the input CIB file.

Note that there is often more than one PE input file, so you may
need to specify the full name.

Usage:
...............
import {<file>|<number>} [<shadow>]
...............
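For instance, to create a shadow named `web-test` from a PE input file (the file number and shadow name here are hypothetical):

...............
import 2066 web-test
...............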
[[cmdhelp_cib_cibstatus,CIB status management and editing]]
==== `cibstatus`

Enter the level for editing and managing the CIB status section.
See the <<cmdhelp_cibstatus,CIB status management section>>.

[[cmdhelp_ra,Resource Agents (RA) lists and documentation]]
=== `ra`

This level contains commands which show various information about
the installed resource agents. It is available both at the top
level and at the `configure` level.
[[cmdhelp_ra_classes,list classes and providers]]
==== `classes`

Print all resource agent classes and, where appropriate, a list
of available providers.

[[cmdhelp_ra_list,list RA for a class (and provider)]]
==== `list`

List available resource agents for the given class. If the class
is `ocf`, supply a provider to get agents which are available
only from that provider.

Usage:
...............
list <class> [<provider>]
...............
[[cmdhelp_ra_meta,show meta data for a RA]]
==== `info`

Show the meta-data of a resource agent type. This is where users
can find information on how to use a resource agent. It is also
possible to get information from some programs: `pengine`,
`crmd`, `cib`, and `stonithd`. Just specify the program name
instead of a resource agent type.

Usage:
...............
info [<class>:[<provider>:]]<type>
info <type> <class> [<provider>] (obsolete)
...............

Example:
...............
info ocf:pacemaker:Dummy
...............

[[cmdhelp_ra_providers,show providers for a RA and a class]]
==== `providers`

List providers for a resource agent type. The class parameter is
optional and defaults to `ocf`.

Usage:
...............
providers <type> [<class>]
...............
[[cmdhelp_resource,Resource management]]
=== `resource`

At this level resources may be managed.

All (or almost all) commands are implemented with the CRM tools
such as `crm_resource(8)`.

[[cmdhelp_resource_status,show status of resources]]
==== `status` (`show`, `list`)

Print resource status. If the resource parameter is left out,
the status of all resources is printed.
[[cmdhelp_resource_start,start a resource]]
==== `start`

Start a resource by setting the `target-role` attribute. If there
are multiple meta attribute sets, the attribute is set in all of
them. If the resource is a clone, all `target-role` attributes
are removed from the child resources.

For details on group management see <<cmdhelp_options_manage-children,`options manage-children`>>.
[[cmdhelp_resource_stop,stop a resource]]
==== `stop`

Stop a resource using the `target-role` attribute. If there
are multiple meta attribute sets, the attribute is set in all of
them. If the resource is a clone, all `target-role` attributes
are removed from the child resources.

For details on group management see <<cmdhelp_options_manage-children,`options manage-children`>>.
[[cmdhelp_resource_restart,restart a resource]]
==== `restart`

Restart a resource. This is essentially a shortcut for resource
stop followed by a start. The shell first waits for the stop to
finish, that is, for all resources to actually stop, and only
then orders the start action. Because this command entails a
whole set of operations, informational messages are printed to
let the user see some progress.

For details on group management see <<cmdhelp_options_manage-children,`options manage-children`>>.

Example:
...............
# crm resource restart g_webserver
INFO: ordering g_webserver to stop
waiting for stop to finish .... done
INFO: ordering g_webserver to start
...............
[[cmdhelp_resource_promote,promote a master-slave resource]]
==== `promote`

Promote a master-slave resource using the `target-role`
attribute.

[[cmdhelp_resource_demote,demote a master-slave resource]]
==== `demote`

Demote a master-slave resource using the `target-role`
attribute.
[[cmdhelp_resource_manage,put a resource into managed mode]]
==== `manage`

Manage a resource using the `is-managed` attribute. If there
are multiple meta attribute sets, the attribute is set in all of
them. If the resource is a clone, all `is-managed` attributes are
removed from the child resources.

For details on group management see <<cmdhelp_options_manage-children,`options manage-children`>>.

[[cmdhelp_resource_unmanage,put a resource into unmanaged mode]]
==== `unmanage`

Unmanage a resource using the `is-managed` attribute. If there
are multiple meta attribute sets, the attribute is set in all of
them. If the resource is a clone, all `is-managed` attributes are
removed from the child resources.

For details on group management see <<cmdhelp_options_manage-children,`options manage-children`>>.
[[cmdhelp_resource_migrate,migrate a resource to another node]]
==== `migrate` (`move`)

Migrate a resource to a different node. If the node is left out,
the resource is migrated by creating a constraint which prevents
it from running on the current node. Additionally, you may specify
a lifetime for the constraint---once it expires, the location
constraint will no longer be active.

Usage:
...............
migrate <rsc> [<node>] [<lifetime>] [force]
...............
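For example, to move a resource to another node for one hour (the lifetime is an ISO 8601 duration; the resource and node names here are illustrative):

...............
migrate websvc xen-c PT1H
...............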
[[cmdhelp_resource_unmigrate,unmigrate a resource to another node]]
==== `unmigrate` (`unmove`)

Remove the constraint generated by the previous migrate command.

[[cmdhelp_resource_param,manage a parameter of a resource]]
==== `param`

Show/edit/delete a parameter of a resource.

Usage:
...............
param <rsc> set <param> <value>
param <rsc> delete <param>
param <rsc> show <param>
...............
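For instance, to display the `ip` parameter of a hypothetical `ip_0` resource:

...............
param ip_0 show ip
...............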
[[cmdhelp_resource_secret,manage sensitive parameters]]
==== `secret`

Sensitive parameters can be kept in local files rather than in the
CIB in order to prevent accidental data exposure. Use the `secret`
command to manage such parameters. `stash` and `unstash` move the
value from the CIB to a local file and back, respectively. The
`set` subcommand sets the parameter to the provided value.
`delete` removes the parameter completely. `show` displays the
value of the parameter from the local file. Use `check` to verify
whether the local file content is valid.

Usage:
...............
secret <rsc> set <param> <value>
secret <rsc> stash <param>
secret <rsc> unstash <param>
secret <rsc> delete <param>
secret <rsc> show <param>
secret <rsc> check <param>
...............

Example:
...............
secret fence_1 show password
secret fence_1 stash password
secret fence_1 set password secret_value
...............
[[cmdhelp_resource_meta,manage a meta attribute]]
==== `meta`

Show/edit/delete a meta attribute of a resource. Currently, all
meta attributes of a resource may be managed with other commands
such as `resource stop`.

Usage:
...............
meta <rsc> set <attr> <value>
meta <rsc> delete <attr>
meta <rsc> show <attr>
...............

Example:
...............
meta ip_0 set target-role stopped
...............
[[cmdhelp_resource_utilization,manage a utilization attribute]]
==== `utilization`

Show/edit/delete a utilization attribute of a resource. These
attributes describe hardware requirements. By setting the
`placement-strategy` cluster property appropriately, it is then
possible to distribute resources based on resource requirements
and node size. See also <<cmdhelp_node_utilization,node utilization attributes>>.

Usage:
...............
utilization <rsc> set <attr> <value>
utilization <rsc> delete <attr>
utilization <rsc> show <attr>
...............

Example:
...............
utilization xen1 set memory 4096
...............
[[cmdhelp_resource_failcount,manage failcounts]]
==== `failcount`

Show/edit/delete the failcount of a resource.

Usage:
...............
failcount <rsc> set <node> <value>
failcount <rsc> delete <node>
failcount <rsc> show <node>
...............

Example:
...............
failcount fs_0 delete node2
...............
[[cmdhelp_resource_cleanup,cleanup resource status]]
==== `cleanup`

Clean up resource status. Typically done after the resource has
temporarily failed. If a node is omitted, the resource is cleaned
up on all nodes. If there are many nodes, the command may take a
while.

Usage:
...............
cleanup <rsc> [<node>]
...............

[[cmdhelp_resource_refresh,refresh CIB from the LRM status]]
==== `refresh`

Refresh the CIB from the LRM status.

[[cmdhelp_resource_reprobe,probe for resources not started by the CRM]]
==== `reprobe`

Probe for resources not started by the CRM.
[[cmdhelp_resource_trace,start RA tracing]]
==== `trace`

Start tracing the RA for the given operation. The trace files are
stored in `$HA_VARLIB/trace_ra`. If the operation to be traced is
monitor, note that the number of trace files can grow very
quickly.

Usage:
...............
trace <rsc> <op> [<interval>]
...............
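For example, to trace the start operation of a hypothetical `fs0` resource:

...............
trace fs0 start
...............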
[[cmdhelp_resource_untrace,stop RA tracing]]
==== `untrace`

Stop tracing the RA for the given operation.

Usage:
...............
untrace <rsc> <op> [<interval>]
...............
[[cmdhelp_node,Nodes management]]
=== `node`

Node management and status commands.

[[cmdhelp_node_status,show nodes' status as XML]]
==== `status`

Show nodes' status as XML. If the node parameter is omitted, then
all nodes are shown.

[[cmdhelp_node_show,show node]]
==== `show`

Show a node definition. If the node parameter is omitted, then all
nodes are shown.
[[cmdhelp_node_standby,put node into standby]]
==== `standby`

Set a node to standby status. The node parameter defaults to the
node where the command is run. Additionally, you may specify a
lifetime for the standby---if set to `reboot`, the node will be
back online once it reboots; `forever` will keep the node in
standby after reboot.

Usage:
...............
standby [<node>] [<lifetime>]

lifetime :: reboot | forever
...............
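For example, to put `node4` (from the interactive session earlier) into standby only until its next reboot:

...............
standby node4 reboot
...............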
[[cmdhelp_node_online,set node online]]
==== `online`

Set a node to online status. The node parameter
defaults to the node where the command is run.

[[cmdhelp_node_maintenance,put node into maintenance mode]]
==== `maintenance`

Set the node status to maintenance. This is equivalent to the
cluster-wide `maintenance-mode` property, but puts just one node
into maintenance mode. The node parameter defaults to the
node where the command is run.

Usage:
...............
maintenance [<node>]
...............

[[cmdhelp_node_ready,put node into ready mode]]
==== `ready`

Set the node's maintenance status to `off`. The node should now
be fully operational again and capable of running resource
operations.
[[cmdhelp_node_fence,fence node]]
==== `fence`

Make the CRM fence a node. This functionality depends on stonith
resources capable of fencing the specified node. If no such
stonith resources exist, no fencing will happen.

[[cmdhelp_node_clearstate,Clear node state]]
==== `clearstate`

Resets and clears the state of the specified node. The node is
afterwards assumed clean and offline. This command can be used to
manually confirm that a node has been fenced (e.g., powered off).

Be careful! This can cause data corruption if you confirm that a
node is down which is, in fact, not cleanly down---the cluster
will proceed as if the fence had succeeded, possibly starting
resources multiple times.
1213
[[cmdhelp_node_delete,delete node]]
1216
Delete a node. This command will remove the node from the CIB
1217
and, in case the heartbeat stack is running, run hb_delnode too.
1224
[[cmdhelp_node_attribute,manage attributes]]
1227
Edit node attributes. This kind of attribute should refer to
1228
relatively static properties, such as memory size.
1232
attribute <node> set <attr> <value>
1233
attribute <node> delete <attr>
1234
attribute <node> show <attr>
1238
attribute node_1 set memory_size 4096
1241
[[cmdhelp_node_utilization,manage utilization attributes]]
1244
Edit node utilization attributes. These attributes describe
1245
hardware characteristics as integer numbers such as memory size
1246
or the number of CPUs. By setting the `placement-strategy`
1247
cluster property appropriately, it is possible then to distribute
1248
resources based on resource requirements and node size. See also
1249
<<cmdhelp_resource_utilization,resource utilization attributes>>.
1253
utilization <node> set <attr> <value>
1254
utilization <node> delete <attr>
1255
utilization <node> show <attr>
1259
utilization node_1 set memory 16384
1260
utilization node_1 show cpu
1263
[[cmdhelp_node_status-attr,manage status attributes]]
1266
Edit node attributes which are in the CIB status section, i.e.
1267
attributes which hold properties of a more volatile nature. One
1268
typical example is attribute generated by the `pingd` utility.
1272
status-attr <node> set <attr> <value>
1273
status-attr <node> delete <attr>
1274
status-attr <node> show <attr>
1278
status-attr node_1 show pingd
1281
[[cmdhelp_site,site support]]

A cluster may consist of two or more subclusters in different and
distant locations. This set of commands supports such setups.

[[cmdhelp_site_ticket,manage site tickets]]

Tickets are cluster-wide attributes. They can be managed at the
site where this command is executed.

It is then possible to constrain resources depending on the
ticket availability (see the
<<cmdhelp_configure_rsc_ticket,`rsc_ticket`>> command).

ticket {grant|revoke|standby|activate|show|time|delete} <ticket>

ticket grant ticket1
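The subcommands map onto a simple ticket lifecycle. A sketch,
continuing with `ticket1` from the example above: `grant` allows
resources depending on the ticket to run at this site, `standby`
temporarily deactivates them without giving up the ticket,
`activate` reverses a standby, and `revoke` takes the ticket away,
triggering each dependent resource's `loss-policy`:

ticket grant ticket1
ticket standby ticket1
ticket activate ticket1
ticket revoke ticket1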
[[cmdhelp_options,user preferences]]

The user may set various options for the crm shell itself.

[[cmdhelp_options_skill-level,set skill level]]

Based on the skill-level setting, the user is allowed to use only
a subset of commands. There are three levels: operator,
administrator, and expert. The operator level allows only
commands at the `resource` and `node` levels, but not editing
or deleting resources. The administrator may do that and may also
configure the cluster at the `configure` level and manage the
shadow CIBs. The expert may do all.

level :: operator | administrator | expert

****************************
The `skill-level` option is advisory only. There is nothing
stopping any user from changing their skill level (see
<<topics_Security,Access Control Lists (ACL)>> on how to enforce
access rights).
****************************

[[cmdhelp_options_user,set the cluster user]]

Sufficient privileges are necessary in order to manage a
cluster: programs such as `crm_verify` or `crm_resource` and,
ultimately, `cibadmin` have to be run either as `root` or as the
CRM owner user (typically `hacluster`). You don't have to worry
about that if you run `crm` as `root`. A more secure way is to
run the program with your usual privileges, set this option to
the appropriate user (such as `hacluster`), and setup the
`sudoers` file.

[[cmdhelp_options_editor,set preferred editor program]]

The `edit` command invokes an editor. Use this to specify your
preferred editor program. If not set, it will default to either
the value of the `EDITOR` environment variable or to one of the
standard UNIX editors (`vi`, `emacs`, `nano`).

[[cmdhelp_options_pager,set preferred pager program]]

The `view` command displays text through a pager. Use this to
specify your preferred pager program. If not set, it will default
to either the value of the `PAGER` environment variable or to one
of the standard UNIX system pagers (`less`, `more`, `pg`).

[[cmdhelp_options_sort-elements,sort CIB elements]]

==== `sort-elements`

`crm` by default sorts CIB elements. If you want them to appear in
the order they were created, set this option to `no`.

sort-elements {yes|no}
[[cmdhelp_options_wait,synchronous operation]]

In normal operation, `crm` runs a command and gets back
immediately to process other commands or get input from the user.
With this option set to `yes` it will wait for the started
transition to finish. In interactive mode, dots are printed to
indicate progress.

[[cmdhelp_options_output,set output type]]

`crm` can adorn configurations in two ways: in color (similar to,
for instance, the `ls --color` command) and by showing keywords in
upper case. Possible values are `plain`, `color`, and
`uppercase`. It is possible to combine the latter two in order to
get an upper case Christmas tree. Just set this option to
`color,uppercase`.

[[cmdhelp_options_colorscheme,set colors for output]]

With `output` set to `color`, a comma separated list of colors
from this option is used to emphasize:

- keywords
- object ids
- attribute names
- attribute values
- scores
- resource references

`crm` can show colors only if there is curses support for Python
installed (usually provided by the `python-curses` package). The
colors are whatever is available in your terminal. Use `normal`
if you want to keep the default foreground color.

This user preference defaults to
`yellow,normal,cyan,red,green,magenta`, which is good for
terminals with a dark background. You may want to change the color
scheme and save it in the preferences file for other color
setups.

colorscheme yellow,normal,blue,red,green,magenta
[[cmdhelp_options_check-frequency,when to perform semantic check]]

==== `check-frequency`

Semantic checks of the CIB or of modified or created elements may
be done on every configuration change (`always`), when verifying
(`on-verify`), or `never`. It is by default set to `always`.
Experts may want to change the setting to `on-verify`.

The checks require that resource agents are present. If they are
not installed at configuration time, set this preference to
`never`.

See <<topics_Checks,Configuration semantic checks>> for more details.

[[cmdhelp_options_check-mode,how to treat semantic errors]]

Semantic checks of the CIB or of modified or created elements may
be done in the `strict` mode or in the `relaxed` mode. In the former,
certain problems are treated as configuration errors. In the
`relaxed` mode, all are treated as warnings. The default is `strict`.

See <<topics_Checks,Configuration semantic checks>> for more details.

[[cmdhelp_options_add-quotes,add quotes around parameters containing spaces]]

The shell (as in `/bin/sh`) parser strips quotes from the command
line. This may sometimes make it really difficult to type values
which contain white space. One typical example is the configure
filter command. The crm shell will supply extra quotes around
arguments which contain white space. The default is `yes`.

****************************
Adding quotes around arguments automatically was introduced
with version 1.2.2 and it is technically a regression. Being a
regression is the only reason the `add-quotes` option exists. If
you have custom shell scripts which would break, just set the
`add-quotes` option to `no`.

For instance, with adding quotes enabled, it is possible to do
the following:

# crm configure primitive d1 ocf:heartbeat:Dummy meta description="some description here"
# crm configure filter 'sed "s/hostlist=./&node-c /"' fencing
****************************
[[cmdhelp_options_manage-children,how to handle children resource attributes]]

==== `manage-children`

Some resource management commands, such as `resource stop`, when
the target resource is a group, may not always produce the desired
result. Each element, the group and its primitive members, can have
a meta attribute, and those attributes may end up with conflicting
values. Consider the following construct:

crm(live)# configure show svc fs virtual-ip
primitive fs ocf:heartbeat:Filesystem \
    params device="/dev/drbd0" directory="/srv/nfs" fstype="ext3" \
    op monitor interval="10s" \
    meta target-role="Started"
primitive virtual-ip ocf:heartbeat:IPaddr2 \
    params ip="10.2.13.110" iflabel="1" \
    op monitor interval="10s" \
    op start interval="0" \
    meta target-role="Started"
group svc fs virtual-ip \
    meta target-role="Stopped"

Even though the element `svc` should be stopped, the group is
actually running because all its members have the `target-role`
set to `Started`:

crm(live)# resource show svc
resource svc is running on: xen-f

Hence, if the user invokes `resource stop svc`, the intention is
not clear. This preference gives the user an opportunity to
better control what happens if attributes of group members have
values which are in conflict with the same attribute of the group
itself.

Possible values are `ask` (the default), `always`, and `never`.
If set to `always`, the crm shell removes all children attributes
which have values different from the parent. If set to `never`,
all children attributes are left intact. Finally, if set to
`ask`, the user will be asked for each member what is to be done.
[[cmdhelp_options_show,show current user preference]]

Display all current settings.

[[cmdhelp_options_save,save the user preferences to the rc file]]

Save current settings to the rc file (`$HOME/.config/crm/rc`). On
further `crm` runs, the rc file is automatically read and parsed.

[[cmdhelp_configure,CIB configuration]]

This level enables all CIB object definition commands.

The configuration may be logically divided into four parts:
nodes, resources, constraints, and (cluster) properties and
attributes. Each of these commands supports one or more basic CIB
objects.

Nodes and attributes describing nodes are managed using the
`node` command.

Commands for resources are:

- `primitive`
- `monitor`
- `group`
- `clone`
- `ms`/`master` (master-slave)

In order to streamline large configurations, it is possible to
define a template which can later be referenced in primitives:

- `rsc_template`

In that case the primitive inherits all attributes defined in the
template.

There are three types of constraints:

- `location`
- `colocation`
- `order`

It is possible to define fencing order (stonith resource
priorities):

- `fencing_topology`

Finally, there are the cluster properties, resource meta
attributes defaults, and operations defaults. All are just a set
of attributes. These attributes are managed by the following
commands:

- `property`
- `rsc_defaults`
- `op_defaults`

In addition to the cluster configuration, Access Control
Lists (ACL) can be setup to allow access to parts of the CIB for
users other than `root` and `hacluster`. The following commands
manage ACLs:

- `role`
- `user`

The changes are applied to the current CIB only on ending the
configuration session or using the `commit` command.

Comments start with `#` in the first line. The comments are tied
to the element which follows. If the element moves, its comments
will follow.
[[cmdhelp_configure_node,define a cluster node]]

The node command describes a cluster node. Nodes in the CIB are
commonly created automatically by the CRM. Hence, you should not
need to deal with nodes unless you also want to define node
attributes. Note that it is also possible to manage node
attributes at the `node` level.

node <uname>[:<type>]
    [attributes <param>=<value> [<param>=<value>...]]
    [utilization <param>=<value> [<param>=<value>...]]

type :: normal | member | ping

node big_node attributes memory=64
[[cmdhelp_configure_primitive,define a resource]]

The primitive command describes a resource. It may be referenced
only once in group, clone, or master-slave objects. If it's not
referenced, then it is placed as a single resource in the CIB.

Operations may be specified in three ways. "Anonymous", as a
simple list of "op" specifications. Use that if you don't want to
reference the set of operations elsewhere. That's by far the most
common way to define operations. If reusing operation sets is
desired, use the "operations" keyword along with an id to give
the operations set a name and the id-ref to reference another set
of operations.

Operation attributes which are not recognized are saved as
instance attributes of that operation. A typical example is
`OCF_CHECK_LEVEL`.

For multistate resources, roles are specified as `role=<role>`.

A template may be defined for resources which are of the same
type and which share most of the configuration. See
<<cmdhelp_configure_rsc_template,`rsc_template`>> for more information.

primitive <rsc> {[<class>:[<provider>:]]<type>|@<template>}
    [params attr_list]
    [meta attr_list]
    [utilization attr_list]
    [operations id_spec]
    [op op_type [<attribute>=<value>...] ...]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
id_spec :: $id=<id> | $id-ref=<id>
op_type :: start | stop | monitor

primitive apcfence stonith:apcsmart \
    params ttydev=/dev/ttyS0 hostlist="node1 node2" \
    op start timeout=60s \
    op monitor interval=30m timeout=60s

primitive www8 apache \
    params configfile=/etc/apache/www8.conf \
    operations $id-ref=apache_ops

primitive db0 mysql \
    params config=/etc/mysql/db0.conf \
    op monitor interval=60s \
    op monitor interval=300s OCF_CHECK_LEVEL=10

primitive r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor role=Master interval=60s \
    op monitor role=Slave interval=300s

primitive xen0 @vm_scheme1 \
    params xmfile=/etc/xen/vm/xen0
[[cmdhelp_configure_monitor,add monitor operation to a primitive]]

Monitor is by far the most common operation. It is possible to
add it without editing the whole resource. It also keeps long
primitive definitions a bit less cluttered. In order to make this
command as concise as possible, less common operation attributes
are not available. If you need them, then use the `op` part of
the `primitive` command.

monitor <rsc>[:<role>] <interval>[:<timeout>]

monitor apcfence 60m:60s

Note that after executing the command, the monitor operation may
be shown as part of the primitive definition.
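For multistate resources, the role may be appended to the resource
name. A sketch, assuming the `r0` drbd resource from the
`primitive` examples above, adding separate monitors for each role:

monitor r0:Master 60s:30s
monitor r0:Slave 300s:30s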
[[cmdhelp_configure_group,define a group]]

The `group` command creates a group of resources.

group <name> <rsc> [<rsc>...]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

group internal_www disk0 fs0 internal_ip apache \
    meta target_role=stopped

[[cmdhelp_configure_clone,define a clone]]

The `clone` command creates a resource clone. It may contain a
single primitive resource or one group of resources.

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

clone cl_fence apc_1 \
    meta clone-node-max=1 globally-unique=false

[[cmdhelp_configure_ms,define a master-slave resource]]

==== `ms` (`master`)

The `ms` command creates a master/slave resource type. It may contain a
single primitive resource or one group of resources.

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>

ms disk1 drbd1 \
    meta notify=true globally-unique=false
.Note on `id-ref` usage
****************************
Instance or meta attributes (`params` and `meta`) may contain
a reference to another set of attributes. In that case, no other
attributes are allowed. Since attribute set ids, though they do
exist, are not shown in `crm`, it is also possible to
reference an object instead of an attribute set. `crm` will
automatically replace such a reference with the right id:

crm(live)configure# primitive a2 www-2 meta $id-ref=a1
crm(live)configure# show a2
primitive a2 ocf:heartbeat:apache \
    meta $id-ref="a1-meta_attributes"

It is advisable to give meaningful names to attribute sets which
are going to be referenced.
****************************
[[cmdhelp_configure_rsc_template,define a resource template]]

The `rsc_template` command creates a resource template. It may be
referenced in primitives. It is used to reduce large
configurations with many similar resources.

rsc_template <name> [<class>:[<provider>:]]<type>
    [utilization attr_list]
    [operations id_spec]
    [op op_type [<attribute>=<value>...] ...]

attr_list :: [$id=<id>] <attr>=<val> [<attr>=<val>...] | $id-ref=<id>
id_spec :: $id=<id> | $id-ref=<id>
op_type :: start | stop | monitor

rsc_template public_vm ocf:heartbeat:Xen \
    op start timeout=300s \
    op stop timeout=300s \
    op monitor interval=30s timeout=60s \
    op migrate_from timeout=600s \
    op migrate_to timeout=600s
primitive xen0 @public_vm \
    params xmfile=/etc/xen/xen0
primitive xen1 @public_vm \
    params xmfile=/etc/xen/xen1
[[cmdhelp_configure_location,a location preference]]

`location` defines the preference of nodes for the given
resource. The location constraints consist of one or more rules
which specify a score to be awarded if the rule matches.

location <id> <rsc> {node_pref|rules}

node_pref :: <score>: <node>

rules ::
    rule [id_spec] [$role=<role>] <score>: <expression>
    [rule [id_spec] [$role=<role>] <score>: <expression> ...]

id_spec :: $id=<id> | $id-ref=<id>
score :: <number> | <attribute> | [-]inf
expression :: <simple_exp> [bool_op <simple_exp> ...]

simple_exp :: <attribute> [type:]<binary_op> <value>
    | <unary_op> <attribute>
    | date <date_expr>

type :: string | version | number
binary_op :: lt | gt | lte | gte | eq | ne
unary_op :: defined | not_defined

date_expr :: lt <end>
    | gt <start>
    | in_range start=<start> end=<end>
    | in_range start=<start> <duration>
    | date_spec <date_spec>
duration|date_spec ::

location conn_1 internal_www 100: node1

location conn_1 internal_www \
    rule 50: #uname eq node1 \
    rule pingd: defined pingd

location conn_2 dummy_float \
    rule -inf: not_defined pingd or pingd number:lte 0
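Rules can also test dates via `date_expr`. A sketch of a
constraint which keeps a resource stopped everywhere until a given
date (the constraint id and the date are illustrative; the
`dummy_float` resource is from the example above):

location not_before dummy_float \
    rule -inf: date lt 2014-01-01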
[[cmdhelp_configure_colocation,colocate resources]]

==== `colocation` (`collocation`)

This constraint expresses the placement relation between two
or more resources. If there are more than two resources, then the
constraint is called a resource set.

Collocation resource sets have an extra attribute (`sequential`)
to allow for sets of resources which don't depend on each other
in terms of state. The shell syntax for such sets is to put the
resources in parentheses.

Sets cannot be nested.

The optional `node-attribute` references an attribute in the nodes'
instance attributes.

colocation <id> <score>: <rsc>[:<role>] <rsc>[:<role>] ...
    [node-attribute=<node_attr>]

colocation dummy_and_apache -inf: apache dummy
colocation c1 inf: A ( B C )
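The `node-attribute` form places resources on nodes which share
the same value of the given node attribute. A sketch with
illustrative names (resources `fs0` and `fs1`, node attribute
`san_id`):

colocation on_same_san inf: fs0 fs1 node-attribute=san_id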
[[cmdhelp_configure_order,order resources]]

This constraint expresses the order of actions on two
or more resources. If there are more than two resources, then the
constraint is called a resource set.

Ordered resource sets have an extra attribute to allow for sets
of resources whose actions may run in parallel. The shell syntax
for such sets is to put the resources in parentheses.

If the subsequent resource can start or promote after any one of the
resources in a set has done so, enclose the set in brackets (`[` and `]`).

Sets cannot be nested.

Three strings are reserved to specify a kind of order constraint:
`Mandatory`, `Optional`, and `Serialize`. It is preferred to use
one of these settings instead of a score. Previous versions mapped
scores `0` and `inf` to keywords `advisory` and `mandatory`.
That is still valid but deprecated.

.Note on resource sets' XML attributes
****************************
The XML attribute `require-all` controls whether all resources in
a set are, well, required. The bracketed sets actually have this
attribute as well as `sequential` set to `false`. If you need a
different combination, for whatever reason, just set one of the
attributes within the set. Something like this:

crm(live)configure# order o1 Mandatory: [ A B sequential=true ] C

It is up to you to find out whether such a combination makes
sense.
****************************

order <id> {kind|<score>}: <rsc>[:<action>] <rsc>[:<action>] ...
    [symmetrical=<bool>]

kind :: Mandatory | Optional | Serialize

order c_apache_1 Mandatory: apache:start ip_1
order o1 Serialize: A ( B C )
order order_2 Mandatory: [ A B ] C

[[cmdhelp_configure_rsc_ticket,resources ticket dependency]]

This constraint expresses the dependency of resources on cluster-wide
attributes, also known as tickets. Tickets are mainly used in
geo-clusters, which consist of multiple sites. A ticket may be
granted to a site, thus allowing resources to run there.

The `loss-policy` attribute specifies what happens to the
resource (or resources) if the ticket is revoked. The default is
either `stop` or `demote`, depending on whether the resource is
multi-state.

See also the <<cmdhelp_site_ticket,`site`>> set of commands.

rsc_ticket <id> <ticket_id>: <rsc>[:<role>] [<rsc>[:<role>] ...]
    [loss-policy=<loss_policy_action>]

loss_policy_action :: stop | demote | fence | freeze

rsc_ticket ticket-A_public-ip ticket-A: public-ip
rsc_ticket ticket-A_bigdb ticket-A: bigdb loss-policy=fence
rsc_ticket ticket-B_storage ticket-B: drbd-a:Master drbd-b:Master
[[cmdhelp_configure_property,set a cluster property]]

Set the cluster (`crm_config`) options.

property [$id=<set_id>] <option>=<value> [<option>=<value> ...]

property stonith-enabled=true

[[cmdhelp_configure_rsc_defaults,set resource defaults]]

Set defaults for the resource meta attributes.

rsc_defaults [$id=<set_id>] <option>=<value> [<option>=<value> ...]

rsc_defaults failure-timeout=3m

[[cmdhelp_configure_fencing_topology,node fencing order]]

==== `fencing_topology`

If multiple fencing (stonith) devices are available that are capable
of fencing a node, their order may be specified by `fencing_topology`.
The order is specified per node.

Stonith resources can be separated by `,`, in which case all of
them need to succeed. If they fail, the next stonith resource (or
set of resources) is used. In other words, use a comma to separate
resources which all need to succeed and whitespace for serial
order. It is not allowed to use whitespace around the comma.

If the node is left out, the order is used for all nodes.
That should reduce the configuration size in some stonith setups.

fencing_topology stonith_resources [stonith_resources ...]
fencing_topology fencing_order [fencing_order ...]

fencing_order :: <node>: stonith_resources [stonith_resources ...]

stonith_resources :: <rsc>[,<rsc>...]

fencing_topology poison-pill power

fencing_topology \
    node-a: poison-pill power
[[cmdhelp_configure_role,define role access rights]]

An ACL role is a set of rules which describe access rights to the
CIB. Rules consist of an access right (`read`, `write`, or `deny`)
and a specification denoting the part of the configuration to which
the access right applies. The specification can be an XPath or a
combination of tag and id references. If an attribute is
appended, then the specification applies only to that attribute
of the matching element.

There are a number of shortcuts for XPath specifications. The
`meta`, `params`, and `utilization` shortcuts reference resource
meta attributes, parameters, and utilization respectively. The
`location` shortcut may be used to specify location constraints,
most of the time to allow the resource `move` and `unmove` commands.
The `property` shortcut references cluster properties. The `node`
shortcut allows reading node attributes. `nodeattr` and `nodeutil`
reference node attributes and node capacity (utilization). The
`status` shortcut references the whole status section of the CIB.
Read access to status is necessary for various monitoring tools
such as `crm_mon(8)` (aka `crm status`).

role <role-id> rule [rule ...]

rule :: acl-right cib-spec [attribute:<attribute>]

acl-right :: read | write | deny

cib-spec :: xpath-spec | tag-ref-spec
xpath-spec :: xpath:<xpath> | shortcut
tag-ref-spec :: tag:<tag> | ref:<id> | tag:<tag> ref:<id>

shortcut :: meta:<rsc>[:<attr>]
    params:<rsc>[:<attr>]

    write meta:app1:target-role \
    write meta:app1:is-managed \
    write location:app1 \
[[cmdhelp_configure_user,define user access rights]]

Users which normally cannot view or manage the cluster configuration
can be allowed access to parts of the CIB. The access is defined
by a set of `read`, `write`, and `deny` rules as in role
definitions, or by referencing roles. The latter is considered
best practice.

user <uid> {roles|rules}

roles :: role:<role-ref> [role:<role-ref> ...]
rules :: rule [rule ...]
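As a sketch, assuming a role named `app1_admin` was previously
defined with the `role` command (the uid `joe` is illustrative), a
user is granted that role like this:

user joe role:app1_admin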
[[cmdhelp_configure_op_defaults,set resource operations defaults]]

Set defaults for the operations meta attributes.

op_defaults [$id=<set_id>] <option>=<value> [<option>=<value> ...]

op_defaults record-pending=true

[[cmdhelp_configure_schema,set or display current CIB RNG schema]]

The CIB's content is validated by an RNG schema. Pacemaker supports
several, depending on the version. Currently supported schemas are
`pacemaker-1.0`, `pacemaker-1.1`, and `pacemaker-1.2`.

Use this command to display or switch to another RNG schema.

schema pacemaker-1.1

[[cmdhelp_configure_show,display CIB objects]]

The `show` command displays objects. It may display all objects
or a set of objects. The user may also choose to see only objects
which were changed.

Optionally, the XML code may be displayed instead of the CLI
representation.

show [xml] [<id> ...]

[[cmdhelp_configure_edit,edit CIB objects]]

This command invokes the editor with the object description. As
with the `show` command, the user may choose to edit all objects
or a set of objects.

If the user insists, he or she may edit the XML edition of the
object. If you do that, don't modify any id attributes.

edit [xml] [<id> ...]

.Note on renaming element ids
****************************
The edit command sometimes cannot properly handle modifying
element ids, in particular for elements which belong to group or
ms resources. Group and ms resources themselves also cannot be
renamed. Please use the `rename` command instead.
****************************
[[cmdhelp_configure_filter,filter CIB objects]]

This command filters the given CIB elements through an external
program. The program should accept input on `stdin` and send
output to `stdout` (the standard UNIX filter conventions). As
with the `show` command, the user may choose to filter all or
just a subset of elements.

It is possible to filter the XML representation of objects, but
it is probably not as useful as the configuration language. The
presentation is somewhat different from what would be displayed
by the `show` command---each element is shown on a single line,
i.e. there are no backslashes and no other embellishments.

Don't forget to put quotes around the filter if it contains
whitespace.

filter <prog> [xml] [<id> ...]
filter <prog> [xml] changed

filter "sed '/^primitive/s/target-role=[^ ]*//'"
# crm configure filter "sed '/^primitive/s/target-role=[^ ]*//'"

[[cmdhelp_configure_delete,delete CIB objects]]

Delete one or more objects. If an object to be deleted belongs to
a container object, such as a group, and it is the only resource
in that container, then the container is deleted as well. Any
related constraints are also removed.

delete <id> [<id>...]
[[cmdhelp_configure_default-timeouts,set timeouts for operations to minimums from the meta-data]]

==== `default-timeouts`

This command takes the timeouts from the actions section of the
resource agent meta-data and sets them for the operations of the
given resource.

default-timeouts <id> [<id>...]

.Note on `default-timeouts`
****************************
You may be happy using this, but your applications may not. And
they will tell you so at the worst possible moment. You have been
warned.
****************************
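For instance, to apply the agent's advertised minimum timeouts to
the `apcfence` resource from the `primitive` examples above (a
sketch):

default-timeouts apcfence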
[[cmdhelp_configure_rename,rename a CIB object]]

Rename an object. It is recommended to use this command to rename
a resource, because it will take care of updating all related
constraints and a parent resource. Changing ids with the edit
command won't have the same effect.

If you want to rename a resource, it must be in the stopped state.

rename <old_id> <new_id>

[[cmdhelp_configure_modgroup,modify group]]

Add or remove primitives in a group. The `add` subcommand appends
the new group member by default. Should it go elsewhere, there
are `after` and `before` clauses.

modgroup <id> add <id> [after <id>|before <id>]
modgroup <id> remove <id>

modgroup share1 add storage2 before share1-fs

[[cmdhelp_configure_refresh,refresh from CIB]]

Refresh the internal structures from the CIB. All changes made
during this session are lost.

[[cmdhelp_configure_erase,erase the CIB]]

The `erase` command clears all of the configuration, apart from
nodes. To remove nodes, you have to specify an additional
keyword, `nodes`.

Note that removing nodes from the live cluster may have some
strange/interesting/unwelcome effects.
[[cmdhelp_configure_ptest,show cluster actions if changes were committed]]
2338
Show PE (Policy Engine) motions using `ptest(8)`.
2340
A CIB is constructed using the current user edited configuration
2341
and the status from the running CIB. The resulting CIB is run
2342
through `ptest` to show changes which would happen if the
2343
configuration is committed.
2345
The status section may be loaded from another source and modified
2346
using the <<cmdhelp_cibstatus,`cibstatus`>> level commands. In that case, the
2347
`ptest` command will issue a message informing the user that the
2348
Policy Engine graph is not calculated based on the current status
2349
section and therefore won't show what would happen to the
2350
running but some imaginary cluster.
2352
If you have graphviz installed and X11 session, `dotty(1)` is run
2353
to display the changes graphically.
2355
Add a string of `v` characters to increase verbosity. `ptest`
2356
can also show allocation scores. `utilization` turns on
2357
information about the remaining capacity of nodes. With the
2358
`actions` option, `ptest` will print all resource actions.
2362
ptest [nograph] [v...] [scores] [actions] [utilization]
[[cmdhelp_configure_rsctest,test resources as currently configured]]
Test resources with the current resource configuration. If no nodes
are specified, tests are run on all known nodes.

The order of resources is significant: it is assumed that later
resources depend on earlier ones.

If a resource is multi-state, it is assumed that the role on
which later resources depend is master.

Tests are run sequentially to prevent running the same resource
on two or more nodes. Tests are carried out only if none of the
specified nodes currently run any of the specified resources.
However, it won't verify whether resources run on the other
nodes.

Superuser privileges are obviously required: either run this as
root or set up the `sudoers` file appropriately.

Note that resource testing may take some time.

Usage:
...............
rsctest <rsc_id> [<rsc_id> ...] [<node_id> ...]
...............
Example:
...............
rsctest my_ip websvc
rsctest websvc nodeB
...............
[[cmdhelp_configure_cib,CIB shadow management]]
=== `cib` (shadow CIBs)

This level is for management of shadow CIBs. It is available at
the `configure` level to enable saving intermediate changes to a
shadow CIB instead of to the live cluster. This short excerpt
shows how:

...............
crm(live)configure# cib new test-2
INFO: test-2 shadow CIB created
crm(test-2)configure# commit
...............

Note how the current CIB in the prompt changed from `live` to
`test-2` after issuing the `cib new` command. See also the
<<cmdhelp_cib,CIB shadow management>> level for more information.

[[cmdhelp_configure_cibstatus,CIB status management and editing]]
Enter the level for editing and managing the CIB status section. See the
<<cmdhelp_cibstatus,CIB status management section>>.
[[cmdhelp_configure_template,edit and import a configuration from a template]]
The specified template is loaded into the editor. It's up to the
user to make a good CRM configuration out of it. See also the
<<cmdhelp_template,template section>>.

Example:
...............
template two-apaches.txt
...............
[[cmdhelp_configure_commit,commit the changes to the CIB]]
Commit the current configuration to the CIB in use. As noted
elsewhere, commands in a configure session don't have an immediate
effect on the CIB. All changes are applied at one point in time,
either using `commit` or when the user leaves the configure
level. In case the CIB in use has changed in the meantime, presumably
modified by somebody else, the crm shell will refuse to apply the changes.
If you know that it is still fine to apply them, add `force`.
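Since changes take effect only on `commit` (or on leaving the configure level), a scripted session fed through standard input needs an explicit `commit`. A minimal sketch, assuming a running cluster; the resource name and IP address are hypothetical:

```shell
# illustrative batch session: nothing touches the CIB until 'commit'
crm -f - <<'EOF'
configure
primitive virtual-ip ocf:heartbeat:IPaddr2 params ip=10.0.0.10
commit
EOF
```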
[[cmdhelp_configure_verify,verify the CIB with crm_verify]]
Verify the contents of the CIB which would be committed.
[[cmdhelp_configure_upgrade,upgrade the CIB to version 1.0]]
If you get the `CIB not supported` error, which typically means
that the current CIB version comes from an older release,
you may try to upgrade it to the latest revision. The command
to perform the upgrade is:

...............
# cibadmin --upgrade --force
...............

If crm doesn't recognize the current CIB as an old one, but you're
sure that it is, you may force the command.
[[cmdhelp_configure_save,save the CIB to a file]]
Save the current configuration to a file. Optionally, save it as XML. Use
`-` instead of a file name to write the output to `stdout`.
[[cmdhelp_configure_load,import the CIB from a file]]
Load a part of the configuration (or all of it) from a local file or
a network URL. The `replace` method replaces the current
configuration with the one from the source. The `update` method tries to
import the contents into the current configuration.
The file may be a CLI file or an XML file.

Usage:
...............
load [xml] <method> URL

method :: replace | update
...............
Example:
...............
load xml update myfirstcib.xml
load xml replace http://storage.big.com/cibs/bigcib.xml
...............
[[cmdhelp_configure_graph,generate a directed graph]]
Create a graphviz graphical layout from the current cluster
configuration.

Currently, only `dot` (directed graph) is supported. It is
essentially a visualization of resource ordering.

The graph may be saved to a file which can be used as source for
various graphviz tools (by default it is displayed in the user's
X11 session). Optionally, by specifying the format, one can also
produce an image instead.

For more or different graphviz attributes, it is possible to save
the default set of attributes to an ini file. If this file exists,
it will always override the builtin settings. The `exportsettings`
subcommand also prints the location of the ini file.

Usage:
...............
graph [<gtype> [<file> [<img_format>]]]
graph exportsettings

img_format :: `dot` output format (see the `-T` option)
...............
Example:
...............
graph dot clu1.conf.dot
graph dot clu1.conf.svg svg
...............
[[cmdhelp_configure_xml,raw xml]]
Even though we promised no XML, it may happen, though hopefully
very seldom, that an element from the CIB cannot be rendered
in the configuration language. In that case, the element will be
shown as raw XML, prefixed by this command. That element can then
be edited like any other. If the shell finds that after the
change it can digest the element, it will be converted into
the normal configuration language. Otherwise, there is no need to
use `xml` for configuration.
[[cmdhelp_template,edit and import a configuration from a template]]
The user may be assisted in the cluster configuration by templates
prepared in advance. Templates consist of a typical ready-made
configuration which may be edited to suit particular user needs.

This command enters a template level where additional commands
for configuration/template management are available.
[[cmdhelp_template_new,create a new configuration from templates]]
Create a new configuration from one or more templates. Note that
configurations and templates are kept in different places, so it
is possible to have a configuration name equal to a template name.

If you already know which parameters are required, you can set
them directly on the command line.

The parameter name `id` is set by default to the name of the
configuration.

Usage:
...............
new <config> <template> [<template> ...] [params name=value ...]
...............
Example:
...............
new bigfs ocfs2 params device=/dev/sdx8 directory=/bigfs
...............
[[cmdhelp_template_load,load a configuration]]
Load an existing configuration. Further `edit`, `show`, and
`apply` commands will refer to this configuration.
[[cmdhelp_template_edit,edit a configuration]]
Edit the current or given configuration using your favourite editor.
[[cmdhelp_template_delete,delete a configuration]]
Remove a configuration. The loaded (active) configuration may be
removed only if `force` is added.

Usage:
...............
delete <config> [force]
...............
[[cmdhelp_template_list,list configurations/templates]]
List existing configurations or templates.
[[cmdhelp_template_apply,process and apply the current configuration to the current CIB]]
Copy the current or given configuration to the current CIB. By
default, the CIB is replaced, unless the method is set to
`update`.

Usage:
...............
apply [<method>] [<config>]

method :: replace | update
...............
[[cmdhelp_template_show,show the processed configuration]]
Process the current or given configuration and display the result.
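Putting the template commands together, here is a sketch of a complete workflow, using the `ocfs2` template and parameters from the `new` example above (`bigfs` is just a configuration name; the exact argument forms are assumptions):

```shell
# illustrative: build a configuration from a template, inspect it,
# then merge it into the current CIB
crm template new bigfs ocfs2 params device=/dev/sdx8 directory=/bigfs
crm template show bigfs
crm template apply update bigfs
```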
[[cmdhelp_cibstatus,CIB status management and editing]]
The `status` section of the CIB keeps the current status of nodes
and resources. It is modified _only_ on events, i.e. when some
resource operation is run or the node status changes. For obvious
reasons, the CRM has no user interface with which it is possible
to affect the status section. From the user's point of view, the
status section is essentially a read-only part of the CIB. The
current status is never even written to disk, though it is
available in the PE (Policy Engine) input files which represent
the history of cluster motions. The current status may be read
using the `cibadmin -Q` command.

It may sometimes be of interest to see how status changes would
affect the Policy Engine. The set of `cibstatus` level commands
allows the user to load status sections from various sources and
then insert or modify resource operations or change the nodes' state.

The effect of those changes may then be observed by running the
<<cmdhelp_configure_ptest,`ptest`>> command at the `configure` level
or the `simulate` and `run` commands at this level. `ptest`
runs with the user-edited CIB, whereas the latter two commands
run with the CIB which was loaded along with the status section.

The `simulate` and `run` commands, as well as all status
modification commands, are implemented using `crm_simulate(8)`.
[[cmdhelp_cibstatus_load,load the CIB status section]]
Load a status section from a file, a shadow CIB, or the running
cluster. By default, the current (`live`) status section is
modified. Note that if the `live` status section is modified, it
is not going to be updated if the cluster status changes, because
that would overwrite the user changes. To make `crm` drop the changes
and resume use of the running cluster status, run `load live`.

All CIB shadow configurations contain a status section which is
a snapshot of the status section taken at the time the shadow was
created. Obviously, this status section doesn't have much to do
with the running cluster status, unless the shadow CIB has just
been created. Therefore, the `ptest` command by default uses the
running cluster status section.

Usage:
...............
load {<file>|shadow:<cib>|live}
...............
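As a sketch, the following loads the status section from a shadow CIB, reviews the differences, and then returns to the live status; commands are fed via standard input (see the `-f` option), and the shadow name `test-2` follows the earlier shadow CIB example:

```shell
# illustrative cibstatus session fed via stdin
crm -f - <<'EOF'
cibstatus
load shadow:test-2
show changed
load live
EOF
```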
[[cmdhelp_cibstatus_save,save the CIB status section]]
The current internal status section, with whatever modifications
were performed, can be saved to a file or shadow CIB.

If the file exists and contains a complete CIB, only the status
section is going to be replaced and the rest of the CIB will
remain intact. Otherwise, the current user-edited configuration
is saved along with the status section.

Note that all modifications are saved in the source file as soon
as they are run.

Usage:
...............
save [<file>|shadow:<cib>]
...............
[[cmdhelp_cibstatus_origin,display origin of the CIB status section]]
Show the origin of the status section currently in use. This
essentially shows the latest `load` argument.
[[cmdhelp_cibstatus_show,show CIB status section]]
Show the current status section in XML format. Brace yourself
for some unreadable output. Add the `changed` option to get
human-readable output of all changes.
[[cmdhelp_cibstatus_node,change node status]]
Change the node status. It is possible to throw a node out of
the cluster, make it a member, or set its state to unclean.

`online`:: Set the `node_state` `crmd` attribute to `online`
and the `expected` and `join` attributes to `member`. The effect
is that the node becomes a cluster member.

`offline`:: Set the `node_state` `crmd` attribute to `offline`
and the `expected` attribute to empty. The node is thus
cleanly removed from the cluster.

`unclean`:: Set the `node_state` `crmd` attribute to `offline`
and the `expected` attribute to `member`. In this case the node
has unexpectedly disappeared.

Usage:
...............
node <node> {online|offline|unclean}
...............
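For example, to see how the Policy Engine would react if a node vanished, one could mark it unclean and then run the PE against the edited status (the node name `node1` is hypothetical):

```shell
# illustrative: simulate node1 disappearing unexpectedly
crm -f - <<'EOF'
cibstatus
node node1 unclean
run
EOF
```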
[[cmdhelp_cibstatus_op,edit outcome of a resource operation]]
Edit the outcome of a resource operation. This way you can
tell the CRM that it ran an operation and that the resource agent
returned a certain exit code. It is also possible to change the
operation's status. In case the operation status is set to
something other than `done`, the exit code is effectively
ignored.

Usage:
...............
op <operation> <resource> <exit_code> [<op_status>] [<node>]

operation :: probe | monitor[:<n>] | start | stop |
    promote | demote | notify | migrate_to | migrate_from
exit_code :: <rc> | success | generic | args |
    unimplemented | perm | installed | configured | not_running |
    master | failed_master
op_status :: pending | done | cancelled | timeout | notsupported | error

n :: the monitor interval in seconds; if omitted, the first
    recurring operation is referenced
rc :: numeric exit code in range 0..9
...............
Example:
...............
op start d1 xen-b generic
op monitor d1 xen-b not_running
op stop d1 xen-b 0 timeout
...............
[[cmdhelp_cibstatus_quorum,set the quorum]]
Set the quorum value.
[[cmdhelp_cibstatus_ticket,manage tickets]]
Modify the ticket status. Tickets can be granted and revoked.
Granted tickets may be activated or put in standby.

Usage:
...............
ticket <ticket> {grant|revoke|activate|standby}
...............
Example:
...............
ticket ticketA grant
...............
[[cmdhelp_cibstatus_run,run policy engine]]
Run the Policy Engine with the edited status section.

Add a string of `v` characters to increase verbosity. Specify
`scores` to also see allocation scores. `utilization` turns on
information about the remaining capacity of nodes.

If you have graphviz installed and an X11 session, `dotty(1)` is run
to display the changes graphically.

Usage:
...............
run [nograph] [v...] [scores] [utilization]
...............
[[cmdhelp_cibstatus_simulate,simulate cluster transition]]
Run the Policy Engine with the edited status section and simulate
the transition.

Add a string of `v` characters to increase verbosity. Specify
`scores` to also see allocation scores. `utilization` turns on
information about the remaining capacity of nodes.

If you have graphviz installed and an X11 session, `dotty(1)` is run
to display the changes graphically.

Usage:
...............
simulate [nograph] [v...] [scores] [utilization]
...............
[[cmdhelp_history,cluster history]]
Examining Pacemaker's history is a particularly involved task.
The number of subsystems to be considered, the complexity of the
configuration, and the set of various information sources, most
of which are not exactly human readable, keep analysis of resource
or node problems accessible only to the most knowledgeable. Or,
depending on the point of view, to the most persistent. The
following set of commands has been devised in the hope of making
cluster history more accessible.

Of course, looking at _all_ history could be time consuming
regardless of how good the tools at hand are. Therefore, one should
first specify the period to be analyzed. If not
otherwise specified, the last hour is considered. Logs and other
relevant information are collected using `hb_report`. Since this
process takes some time and we always need fresh logs,
information is refreshed in a much faster way using `pssh(1)`. If
`python-pssh` is not found on the system, examining a live cluster
is still possible, though not as comfortable.

Apart from examining a live cluster, events may be retrieved from a
report generated by `hb_report` (see also the `-H` option). In
that case we assume that the period stretching over the whole report
needs to be investigated. Of course, it is still possible to
further reduce the time range.

If you think you may have found a bug or just need clarification
from developers or your support, the `session pack` command can
help create a report. This is an example:

...............
crm(live)history# timeframe "Jul 18 12:00" "Jul 18 12:30"
crm(live)history# session save strange_restart
crm(live)history# session pack
Report saved in .../strange_restart.tar.bz2
...............

In order to reduce the report size and allow developers to
concentrate on the issue, you should limit the time
frame beforehand. Giving a meaningful session name helps too.
[[cmdhelp_history_info,cluster information summary]]
The `info` command shows the most important information about the
cluster.

[[cmdhelp_history_latest,show latest news from the cluster]]
The `latest` command shows a bit of recent history, more
precisely whatever happened since the last cluster change (the
latest transition). If the transition is running, the shell will
first wait until it finishes.
[[cmdhelp_history_limit,limit timeframe to be examined]]
==== `limit` (`timeframe`)

All history commands look at events within a certain period. It
defaults to the last hour for the live cluster source. There is
no limit for the `hb_report` source. Use this command to set the
time period.

The time period is parsed by the dateutil python module. It
covers a wide range of date formats. For instance:

- 3:00 (today at 3am)
- 15:00 (today at 3pm)
- 2010/9/1 2pm (September 1st 2010 at 2pm)

We won't bother to give a definition of the time specification in
the usage below. Either use common sense or read the
http://labix.org/python-dateutil[dateutil] documentation.

If dateutil is not available, then the time is parsed using
strptime and only the kind as printed by `date(1)` is allowed:

- Tue Sep 15 20:46:27 CEST 2010

Usage:
...............
limit [<from_time> [<to_time>]]
...............
Example:
...............
limit "Sun 5 20:46" "Sun 5 22:00"
...............
[[cmdhelp_history_source,set source to be examined]]
Events to be examined can come from the current cluster or from an
`hb_report` report. This command sets the source. `source live`
sets the source to the running cluster and system logs. If no source
is specified, the current source information is printed.

In case a report source is specified as a file reference, the file
is unpacked in the directory where it resides. This directory
is not removed on exit.

Usage:
...............
source [<dir>|<file>|live]
...............
Example:
...............
source /tmp/customer_case_22.tar.bz2
source /tmp/customer_case_22
...............
[[cmdhelp_history_refresh,refresh live report]]
This command makes sense only for the `live` source and makes
`crm` collect the latest logs and other relevant information from
the logs. If you want to make a completely new report, specify
`force`.
[[cmdhelp_history_detail,set the level of detail shown]]
How much detail to show from the logs.

Usage:
...............
detail <detail_level>

detail_level :: small integer (defaults to 0)
...............
[[cmdhelp_history_setnodes,set the list of cluster nodes]]
In case the host this program runs on is not part of the cluster,
it is necessary to set the list of nodes.

Usage:
...............
setnodes node <node> [<node> ...]
...............
Example:
...............
setnodes node_a node_b
...............
[[cmdhelp_history_resource,resource events]]
Show actions and any failures that happened on all specified
resources on all nodes. Normally, one gives resource names as
arguments, but it is also possible to use extended regular
expressions. Note that neither group, clone, nor master/slave
names are ever logged. The resource command is going to expand
all of these appropriately, so that clone instances or resources
which are part of a group are shown.

Usage:
...............
resource <rsc> [<rsc> ...]
...............
Example:
...............
resource bigdb public_ip
...............
[[cmdhelp_history_node,node events]]
Show important events that happened on a node. Important events
are node lost and join, standby and online, and fence. Use either
node names or extended regular expressions.

Usage:
...............
node <node> [<node> ...]
...............
[[cmdhelp_history_log,log content]]
Show messages logged on one or more nodes. Leaving out a node
name produces combined logs of all nodes. Messages are sorted by
time and, if the terminal emulation supports it, displayed in
different colours depending on the node, to allow for easier
reading.

The sorting key is the timestamp as written by syslog, which
normally has a maximum resolution of one second. Obviously,
messages generated by events which share the same timestamp may
not be sorted in the same order as they happened. Such close events
may actually happen fairly often.
[[cmdhelp_history_exclude,exclude log messages]]
If a log is infested with irrelevant messages, those messages may
be excluded by specifying a regular expression. The regular
expressions used are Python extended. This command is additive.
To drop all regular expressions, use `exclude clear`. Run
`exclude` alone to see the current list of regular expressions.
Excludes are saved along with the history sessions.

Usage:
...............
exclude [<regex>|clear]
...............
Example:
...............
exclude kernel.*ocfs2
...............
[[cmdhelp_history_peinputs,list or get PE input files]]
Every event in the cluster results in generating one or more
Policy Engine (PE) files. These files describe future motions of
resources. The files are listed as full paths in the current
report directory. Add `v` to also see the creation time stamps.

Usage:
...............
peinputs [{<range>|<number>} ...] [v]
...............
Example:
...............
peinputs 440:444 446
...............
[[cmdhelp_history_transition,show transition]]
This command will print actions planned by the PE and run
graphviz (`dotty`) to display a graphical representation of the
transition. Of course, for the latter an X11 session is required.
This command invokes `ptest(8)` in the background.

The `showdot` subcommand runs graphviz (`dotty`) to display a
graphical representation of the `.dot` file which has been
included in the report. Essentially, it shows the calculation
produced by `pengine` which is installed on the node where the
report was produced. In the optimal case this output should not
differ from the one produced by the locally installed `pengine`.

The `log` subcommand shows the full log for the duration of the
transition.

A transition can also be saved to a CIB shadow for further
analysis or use with `cib` or `configure` commands (use the
`save` subcommand). The shadow file name defaults to the name of
the PE input file.

If the PE input file number is not provided, it defaults to the
last one, i.e. the last transition. The last transition can also
be referenced with number 0. If the number is negative, then the
corresponding transition relative to the last one is chosen.

If there are warning and error PE input files, or different nodes
were the DC in the observed timeframe, it may happen that PE
input file numbers collide. In that case provide some unique part
of the path to the file.

After the `ptest` output, logs about events that happened during
the transition are printed.

Usage:
...............
transition [<number>|<index>|<file>] [nograph] [v...] [scores] [actions] [utilization]
transition showdot [<number>|<index>|<file>]
transition log [<number>|<index>|<file>]
transition save [<number>|<index>|<file> [name]]
...............
Example:
...............
transition pe-error-3.bz2
transition node-a/pengine/pe-input-2.bz2
transition showdot 444
transition save 0 enigma-22
...............
[[cmdhelp_history_show,show status or configuration of the PE input file]]
Every transition is saved as a PE file. Use this command to
render that PE file either as configuration or status. The
configuration output is the same as `crm configure show`.

Usage:
...............
show <pe> [status]

pe :: <number>|<index>|<file>|live
...............
Example:
...............
show pe-input-2080.bz2 status
...............
[[cmdhelp_history_graph,generate a directed graph from the PE file]]
Create a graphviz graphical layout from the PE file (the
transition). Every transition contains the cluster configuration
which was active at the time. See also <<cmdhelp_configure_graph,generate a directed graph
from configuration>>.

Usage:
...............
graph <pe> [<gtype> [<file> [<img_format>]]]

img_format :: `dot` output format (see the `-T` option)
...............
Example:
...............
graph 322 dot clu1.conf.dot
graph 322 dot clu1.conf.svg svg
...............
[[cmdhelp_history_diff,cluster states/transitions difference]]
A transition represents a change in cluster configuration or
state. Use `diff` to see what has changed between two
transitions.

If you want to specify the current cluster configuration and
status, use the string `live`.

Normally, the first transition specified should be the
older one, but we are not going to enforce that.

Note that a single configuration update may result in more than
one transition.

Usage:
...............
diff <pe> <pe> [status] [html]

pe :: <number>|<index>|<file>|live
...............
Example:
...............
diff pe-input-2080.bz2 live status
...............
[[cmdhelp_history_session,manage history sessions]]
Sometimes you may want to get back to examining a particular
history period or bug report. In order to make that easier, the
current settings can be saved and later retrieved.

If the current history being examined comes from a live
cluster, the logs, PE inputs, and other files are saved too,
because they may disappear from the nodes. For existing reports
coming from `hb_report`, only the directory location is saved
(so as not to waste space).

A history session may also be packed into a tarball which can
then be sent to support.

Leave out the subcommand to see the current session.

Usage:
...............
session [{save|load|delete} <name> | pack [<name>] | update | list]
...............
Example:
...............
session save bnc966622
session load rsclost-2
...............
=== `end` (`cd`, `up`)

The `end` command ends the current level and the user moves to
the parent level. This command is available everywhere.

=== `help`

The `help` command prints help for the current level or for the
specified topic (command). This command is available everywhere.
=== `quit` (`exit`, `bye`)

Leave the program.

== BUGS

Even though all sensible configurations (and most of those that
are not) are going to be supported by the crm shell, I suspect
that it may still happen that certain XML constructs confuse
the tool. When that happens, please file a bug report.

The crm shell will not try to update the objects it does not
understand. Of course, it is always possible to edit such objects
in the XML format.
== AUTHOR

Dejan Muhamedagic, <dejan@suse.de>

== SEE ALSO

crm_resource(8), crm_attribute(8), crm_mon(8), cib_shadow(8),
ptest(8), dotty(1), crm_simulate(8), cibadmin(8)

== COPYING

Copyright \(C) 2008-2011 Dejan Muhamedagic. Free use of this
software is granted under the terms of the GNU General Public License (GPL).

//////////////////////
vim:ts=4:sw=4:expandtab:
//////////////////////