.TH DLM.CONF 5 2012-04-09 dlm dlm

.SH NAME
dlm.conf \- dlm_controld configuration file
.SH DESCRIPTION

The configuration options in dlm.conf mirror the dlm_controld
command line options.  The config file additionally allows
advanced fencing and lockspace configuration that is not
supported on the command line.
.SH Command line equivalents

If an option is specified on the command line and in the config file, the
command line setting overrides the config file setting.  See
.BR dlm_controld (8)
for descriptions and dlm_controld -h for defaults.
enable_concurrent_fencing
enable_startup_fencing
enable_quorum_lockspace
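For example, assuming the usual name=value form used for these settings
in dlm.conf (the values below are illustrative only), a config file
fragment might look like:

```
enable_concurrent_fencing=0
enable_startup_fencing=0
enable_quorum_lockspace=1
```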
.SH Fencing

A fence device definition begins with a
.B device
line, followed by a number of
.B connect
lines, one for each node connected to the device.

A blank line separates device definitions.

Devices are used in the order they are listed.
The
.B device
key word is followed by a unique
.IR dev_name ,
the
.I agent
program to be used, and
.IR args ,
which are agent arguments specific to the device.

The
.B connect
key word is followed by the
.I dev_name
of the device section, the node ID of the connected node in the format
.BI node= nodeid
and
.IR args ,
which are agent arguments specific to the node for the given device.

The format of
.I args
is key=val on both device and connect lines, each pair separated by a space,
e.g. key1=val1 key2=val2 key3=val3.
device foo fence_foo ipaddr=1.1.1.1 login=x password=y
connect foo node=1 port=1
connect foo node=2 port=2
connect foo node=3 port=3

device bar fence_bar ipaddr=2.2.2.2 login=x password=y
connect bar node=1 port=1
connect bar node=2 port=2
connect bar node=3 port=3
Some devices, like dual power or dual path, must all be turned off in
parallel for fencing to succeed.  To define multiple devices as being
parallel to each other, use the same base dev_name with different
suffixes and a colon separator between base name and suffix.
device foo:1 fence_foo ipaddr=1.1.1.1 login=x password=y
connect foo:1 node=1 port=1
connect foo:1 node=2 port=2
connect foo:1 node=3 port=3

device foo:2 fence_foo ipaddr=5.5.5.5 login=x password=y
connect foo:2 node=1 port=1
connect foo:2 node=2 port=2
connect foo:2 node=3 port=3
A node may sometimes need to "unfence" itself when starting.  The
unfencing command reverses the effect of a previous fencing operation
against it.  An example would be fencing that disables a port on a SAN
switch.  A node could use unfencing to re-enable its switch port when
starting up after rebooting.  (Care must be taken to ensure it's safe for
a node to unfence itself.  A node often needs to be cleanly rebooted
before unfencing itself.)
To specify that a node should unfence itself for a given
.BR device ,
an
.BI "unfence " dev_name
line is added after the
.B connect
lines.
device foo fence_foo ipaddr=1.1.1.1 login=x password=y
connect foo node=1 port=1
connect foo node=2 port=2
connect foo node=3 port=3
unfence foo
In some cases, a single fence device is used for all nodes, and it
requires no node-specific args.  This would typically be a "bridge" fence
device in which an agent is passing a fence request to another subsystem
to handle.  (Note that a "node=nodeid" arg is always automatically
included in agent args, so a node-specific nodeid is always present to
minimally identify the victim.)
In such a case, a simplified, single-line fence configuration is possible:

fence_all dlm_stonith
A fence_all configuration is not compatible with a fence device
configuration (above).
Unfencing can optionally be applied with:
.SH Lockspace configuration

A lockspace definition begins with a
.B lockspace
line, followed by a number of
.B master
lines.  A blank line separates lockspace definitions.
.SS Disabling resource directory

Lockspaces usually use a resource directory to keep track of which node is
the master of each resource.  The dlm can operate without the resource
directory, though, by statically assigning the master of a resource using
a hash of the resource name.  To enable, set the per-lockspace
.B nodir
option to 1:

lockspace foo nodir=1
.SS Lock-server configuration

The nodir setting can be combined with node weights to create a
configuration where select node(s) are the master of all resources/locks.
These master nodes can be viewed as "lock servers" for the other nodes.

Example of nodeid 1 as master of all resources:

lockspace foo nodir=1
master node=1
Example of nodeids 1 and 2 as masters of all resources:

lockspace foo nodir=1
master node=1
master node=2
Lock management will be partitioned among the available masters.  There
can be any number of masters defined.  The designated master nodes will
master all resources/locks (according to the resource name hash).  When no
masters are members of the lockspace, then the nodes revert to the common
fully-distributed configuration.  Recovery is faster, with little
disruption, when a non-master node joins/leaves.
There is no special mode in the dlm for this lock server configuration;
it's just a natural consequence of combining the "nodir" option with node
weights.  When a lockspace has master nodes defined, the master has a
default weight of 1 and all non-master nodes have weight of 0.  An explicit
.I weight
can also be assigned to master nodes, e.g.

lockspace foo nodir=1
master node=1 weight=2
master node=2 weight=1
In which case node 1 will master 2/3 of the total resources and node 2
will master the other 1/3.
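The proportional split can be illustrated with a short sketch (a
hypothetical Python illustration, not the dlm's actual hash or placement
code): expand the masters into one slot per unit of weight, then pick a
slot by hashing the resource name.

```python
import hashlib

# Hypothetical illustration only: the dlm's real resource-name hash and
# placement logic differ in detail.
masters = {1: 2, 2: 1}  # nodeid -> weight, matching the example above

# One slot per unit of weight: weights {1: 2, 2: 1} give slots [1, 1, 2],
# so node 1 owns 2/3 of the hash space and node 2 owns 1/3.
slots = [node for node, weight in sorted(masters.items())
         for _ in range(weight)]

def master_of(resource_name: str) -> int:
    """Pick a master deterministically from the resource name."""
    h = int.from_bytes(hashlib.md5(resource_name.encode()).digest()[:4], "big")
    return slots[h % len(slots)]

# Count how many of 9000 resource names land on each master; the totals
# come out near the 2/3 : 1/3 ratio described above.
counts = {n: 0 for n in masters}
for i in range(9000):
    counts[master_of("resource%d" % i)] += 1
```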
.SH SEE ALSO
.BR dlm_controld (8),