Tools that manage md devices can be found at
   http://www.kernel.org/pub/linux/utils/raid/

Boot time assembly of RAID arrays
---------------------------------

You can boot with your md device with the following kernel command
lines:

for old raid arrays without persistent superblocks:
   md=<md device no.>,<raid level>,<chunk size factor>,<fault level>,dev0,dev1,...,devn

for raid arrays with persistent superblocks:
   md=<md device no.>,dev0,dev1,...,devn
or, to assemble a partitionable array:
   md=d<md device no.>,dev0,dev1,...,devn

md device no. = the number of the md device
              (0 means md0, 1 means md1, and so on)

raid level = -1 linear mode
              0 striped mode
              other modes are only supported with persistent super blocks

chunk size factor = (raid-0 and raid-1 only)
              Set the chunk size as 4k << n.

fault level = totally ignored

dev0-devn: e.g. /dev/hda1,/dev/hdc1,/dev/sda1,/dev/sdb1

A possible loadlin line (Harald Hoyer <HarryH@Royal.Net>) looks like this:

e:\loadlin\loadlin e:\zimage root=/dev/md0 md=0,0,4,0,/dev/hdb2,/dev/hdc3 ro
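
For an array with persistent superblocks the geometry is read from the
superblocks, so a boot entry only needs the device list. A minimal
sketch (device names are illustrative):

   root=/dev/md0 md=0,/dev/sda1,/dev/sdb1 ro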

Boot time autodetection of RAID arrays
--------------------------------------

When md is compiled into the kernel (not as a module), partitions of
type 0xfd are scanned and automatically assembled into RAID arrays.
This autodetection may be suppressed with the kernel parameter
"raid=noautodetect".  As of kernel 2.6.9, only drives with a type 0
superblock can be autodetected and run at boot time.

The kernel parameter "raid=partitionable" (or "raid=part") means
that all auto-detected arrays are assembled as partitionable.

Boot time assembly of degraded/dirty arrays
-------------------------------------------

If a raid5 or raid6 array is both dirty and degraded, it could have
undetectable data corruption. This is because the fact that it is
'dirty' means that the parity cannot be trusted, and the fact that it
is degraded means that some datablocks are missing and cannot reliably
be reconstructed (due to no parity).

For this reason, md will normally refuse to start such an array. This
requires the sysadmin to take action to explicitly start the array
despite possible corruption. This is normally done with

   mdadm --assemble --force ....

This option is not really available if the array has the root
filesystem on it. In order to support booting from such an
array, md supports a module parameter "start_dirty_degraded" which,
when set to 1, bypasses the checks and allows dirty degraded
arrays to be started.

So, to boot with a root filesystem of a dirty degraded raid[56], use

   md-mod.start_dirty_degraded=1

Superblock formats
------------------

The md driver can support a variety of different superblock formats.
Currently, it supports superblock formats "0.90.0" and the "md-1" format
introduced in the 2.5 development series.

The kernel will autodetect which format superblock is being used.

Superblock format '0' is treated differently to others for legacy
reasons - it is the original superblock format.

General Rules - apply for all superblock formats
------------------------------------------------

An array is 'created' by writing appropriate superblocks to all
devices.

It is 'assembled' by associating each of these devices with a
particular md virtual device. Once it is completely assembled, it can
be accessed.

An array should be created by a user-space tool. This will write
superblocks to all devices. It will usually mark the array as
'unclean', or with some devices missing so that the kernel md driver
can create appropriate redundancy (copying in raid1, parity
calculation in raid4/5).

When an array is assembled, it is first initialized with the
SET_ARRAY_INFO ioctl. This contains, in particular, a major and minor
version number. The major version number selects which superblock
format is to be used. The minor number might be used to tune handling
of the format, such as suggesting where on each device to look for the
superblock.

Then each device is added using the ADD_NEW_DISK ioctl. This
provides, in particular, a major and minor number identifying the
device to add.

The array is started with the RUN_ARRAY ioctl.

Once started, new devices can be added. They should have an
appropriate superblock written to them, and then be passed in with
ADD_NEW_DISK.

Devices that have failed or are not yet active can be detached from an
array using HOT_REMOVE_DISK.
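
In practice these ioctls are issued by a user-space tool such as mdadm
rather than by hand. A rough sketch of the corresponding mdadm commands
(device names are illustrative):

   # create: write superblocks to the members, then assemble and start
   # the array (SET_ARRAY_INFO, ADD_NEW_DISK, RUN_ARRAY)
   mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

   # add a device carrying a fresh superblock to the running array (ADD_NEW_DISK)
   mdadm --add /dev/md0 /dev/sdd1

   # detach a failed device (HOT_REMOVE_DISK)
   mdadm --remove /dev/md0 /dev/sdb1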

Specific Rules that apply to format-0 super block arrays, and
arrays with no superblock (non-persistent).
-------------------------------------------------------------

An array can be 'created' by describing the array (level, chunksize
etc) in a SET_ARRAY_INFO ioctl. This must have major_version==0 and
minor_version==90.

Then uninitialized devices can be added with ADD_NEW_DISK. The
structure passed to ADD_NEW_DISK must specify the state of the device
and its role in the array.

Once started with RUN_ARRAY, uninitialized spares can be added with
ADD_NEW_DISK.
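
With mdadm, such superblock-less arrays are normally created through
its --build mode, which drives this sequence directly; for example
(devices illustrative):

   mdadm --build /dev/md0 --level=linear --raid-devices=2 /dev/sda1 /dev/sdb1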

MD devices in sysfs
-------------------

md devices appear in sysfs (/sys) as regular block devices,
e.g.

   /sys/block/md0

Each 'md' device will contain a subdirectory called 'md' which
contains further md-specific information about the device.

All md devices contain:

   level
     a text file indicating the 'raid level'. e.g. raid0, raid1,
     raid5, linear, multipath, faulty.
     If no raid level has been set yet (array is still being
     assembled), the value will reflect whatever has been written
     to it, which may be a name like the above, or may be a number
     such as '0', '5', etc.

   raid_disks
     a text file with a simple number indicating the number of devices
     in a fully functional array. If this is not yet known, the file
     will be empty. If an array is being resized this will contain
     the new number of devices.
     Some raid levels allow this value to be set while the array is
     active. This will reconfigure the array. Otherwise it can only
     be set while assembling an array.
     A change to this attribute will not be permitted if it would
     reduce the size of the array. To reduce the number of drives
     in e.g. a raid5, the array size must first be reduced by
     setting the 'array_size' attribute.
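     For example, shrinking a raid5 from 4 to 3 devices could look
     roughly like this (sizes and paths are illustrative; anything
     stored on the array must already fit in the smaller size):

        echo 390000000 > /sys/block/md0/md/array_size   # clip the array first (kilobytes)
        echo 3 > /sys/block/md0/md/raid_disks            # then drop the device count
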
   chunk_size
     This is the size in bytes for 'chunks' and is only relevant to
     raid levels that involve striping (0,4,5,6,10). The address space
     of the array is conceptually divided into chunks and consecutive
     chunks are striped onto neighbouring devices.
     The size should be at least PAGE_SIZE (4k) and should be a power
     of 2. This can only be set while assembling an array.

   layout
     The "layout" for the array for the particular level. This is
     simply a number that is interpreted differently by different
     levels. It can be written while assembling an array.

   array_size
     This can be used to artificially constrain the available space in
     the array to be less than is actually available on the combined
     devices. Writing a number (in kilobytes) which is less than
     the available size will set the size. Any reconfiguration of the
     array (e.g. adding devices) will not cause the size to change.
     Writing the word 'default' will cause the effective size of the
     array to be whatever size is actually available based on
     'level', 'chunk_size' and 'component_size'.

     This can be used to reduce the size of the array before reducing
     the number of devices in a raid4/5/6, or to support external
     metadata formats which mandate such clipping.

   reshape_position
     This is either "none" or a sector number within the devices of
     the array where "reshape" is up to. If this is set, the three
     attributes mentioned above (raid_disks, chunk_size, layout) can
     potentially have 2 values, an old and a new value. If these
     values differ, reading the attribute returns 'new (old)'
     and writing will set the 'new' value, leaving the 'old'
     unchanged.

   component_size
     For arrays with data redundancy (i.e. not raid0, linear, faulty,
     multipath), all components must be the same size - or at least
     there must be a size that they all provide space for. This is a
     key part of the geometry of the array. It is measured in sectors
     and can be read from here. Writing to this value may resize
     the array if the personality supports it (raid1, raid5, raid6),
     and if the component drives are large enough.

   metadata_version
     This indicates the format that is being used to record metadata
     about the array. It can be 0.90 (traditional format), 1.0, 1.1,
     1.2 (newer format in varying locations) or "none" indicating that
     the kernel isn't managing metadata at all.
     Alternately it can be "external:" followed by a string which
     is set by user-space. This indicates that metadata is managed
     by a user-space program. Any device failure or other event that
     requires a metadata update will cause array activity to be
     suspended until the event is acknowledged.

   resync_start
     The point at which resync should start. If no resync is needed,
     this will be a very large number (or 'none' since 2.6.30-rc1). At
     array creation it will default to 0, though starting the array as
     'clean' will set it much larger.

   new_dev
     This file can be written but not read. The value written should
     be a block device number as major:minor. e.g. 8:0
     This will cause that device to be attached to the array, if it is
     available. It will then appear at md/dev-XXX (depending on the
     name of the device) and further configuration is then possible.
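     For example, to attach the device with major:minor 8:16
     (illustrative) to md0:

        echo 8:16 > /sys/block/md0/md/new_dev
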
   safe_mode_delay
     When an md array has seen no write requests for a certain period
     of time, it will be marked as 'clean'. When another write
     request arrives, the array is marked as 'dirty' before the write
     commences. This is known as 'safe_mode'.
     The 'certain period' is controlled by this file which stores the
     period as a number of seconds. The default is 200msec (0.200).
     Writing a value of 0 disables safemode.
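     For example, to lengthen the delay to half a second, or to turn
     safemode off entirely:

        echo 0.5 > /sys/block/md0/md/safe_mode_delay
        echo 0   > /sys/block/md0/md/safe_mode_delay
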
   array_state
     This file contains a single word which describes the current
     state of the array. In many cases, the state can be set by
     writing the word for the desired state, however some states
     cannot be explicitly set, and some transitions are not allowed.

     Select/poll works on this file. All changes except between
     active_idle and active (which can be frequent and are not
     very interesting) are notified. active->active_idle is
     reported if the metadata is externally managed.

     clear
         No devices, no size, no level
         Writing is equivalent to STOP_ARRAY ioctl
     inactive
         May have some settings, but array is not active
         all IO results in error
         When written, doesn't tear down array, but just stops it
     suspended (not supported yet)
         All IO requests will block. The array can be reconfigured.
         Writing this, if accepted, will block until array is quiescent
     readonly
         no resync can happen.  no superblocks get written.
         write requests fail
     read-auto
         like readonly, but behaves like 'clean' on a write request.

     clean - no pending writes, but otherwise active.
         When written to inactive array, starts without resync
         If a write request arrives then
           if metadata is known, mark 'dirty' and switch to 'active'.
           if not known, block and switch to write-pending
         If written to an active array that has pending writes, then fails.
     active
         fully active: IO and resync can be happening.
         When written to inactive array, starts with resync

     write-pending
         clean, but writes are blocked waiting for 'active' to be written.

     active-idle
         like active, but no writes have been seen for a while (safe_mode_delay).
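
     For example, the state can be inspected and, where the transition
     is allowed, changed by writing one of the words above:

        cat /sys/block/md0/md/array_state
        echo readonly > /sys/block/md0/md/array_state
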
   bitmap/location
     This indicates where the write-intent bitmap for the array is
     stored.
     It can be one of "none", "file" or "[+-]N".
     "file" may later be extended to "file:/file/name"
     "[+-]N" means that many sectors from the start of the metadata.
     This is replicated on all devices. For arrays with externally
     managed metadata, the offset is from the beginning of the
     device.

   bitmap/chunksize
     The size, in bytes, of the chunk which will be represented by a
     single bit. For RAID456, it is a portion of an individual
     device. For RAID10, it is a portion of the array. For RAID1, it
     is both (they come to the same thing).

   bitmap/time_base
     The time, in seconds, between looking for bits in the bitmap to
     be cleared. In the current implementation, a bit will be cleared
     between 2 and 3 times "time_base" after all the covered blocks
     are known to be in-sync.

   bitmap/backlog
     When write-mostly devices are active in a RAID1, write requests
     to those devices proceed in the background - the filesystem (or
     other user of the device) does not have to wait for them.
     'backlog' sets a limit on the number of concurrent background
     writes. If there are more than this, new writes will be
     synchronous.

   bitmap/metadata
     This can be either 'internal' or 'external'.
     'internal' is the default and means the metadata for the bitmap
     is stored in the first 256 bytes of the allocated space and is
     managed by the md module.
     'external' means that bitmap metadata is managed externally to
     the kernel (i.e. by some userspace program).

   bitmap/can_clear
     This is either 'true' or 'false'. If 'true', then bits in the
     bitmap will be cleared when the corresponding blocks are thought
     to be in-sync. If 'false', bits will never be cleared.
     This is automatically set to 'false' if a write happens on a
     degraded array, or if the array becomes degraded during a write.
     When metadata is managed externally, it should be set to true
     once the array becomes non-degraded, and this fact has been
     recorded in the metadata.

As component devices are added to an md array, they appear in the 'md'
directory as new directories named

   dev-XXX

where XXX is a name that the kernel knows for the device, e.g. hdb1.
Each directory contains:

   block
     a symlink to the block device in /sys/block, e.g.
     /sys/block/md0/md/dev-hdb1/block -> ../../../../block/hdb/hdb1

   super
     A file containing an image of the superblock read from, or
     written to, that device.

   state
     A file recording the current state of the device in the array
     which can be a comma separated list of
       faulty   - device has been kicked from active use due to
                  a detected fault, or it has unacknowledged bad
                  blocks
       in_sync  - device is a fully in-sync member of the array
       writemostly - device will only be subject to read
                  requests if there are no other options.
                  This applies only to raid1 arrays.
       blocked  - device has failed, and the failure hasn't been
                  acknowledged yet by the metadata handler.
                  Writes that would write to this device if
                  it were not faulty are blocked.
       spare    - device is working, but not a full member.
                  This includes spares that are in the process
                  of being recovered to full membership.
       write_error - device has ever seen a write error.
     This list may grow in future.
     This can be written to.
     Writing "faulty" simulates a failure on the device.
     Writing "remove" removes the device from the array.
     Writing "writemostly" sets the writemostly flag.
     Writing "-writemostly" clears the writemostly flag.
     Writing "blocked" sets the "blocked" flag.
     Writing "-blocked" clears the "blocked" flag and allows writes
     to complete and possibly simulates an error.
     Writing "in_sync" sets the in_sync flag.
     Writing "write_error" sets the writeerrorseen flag.
     Writing "-write_error" clears the writeerrorseen flag.

     This file responds to select/poll. Any change to 'faulty'
     or 'blocked' causes an event.
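
     For example, to simulate a failure of one component and then
     remove it from the array (device name illustrative):

        echo faulty > /sys/block/md0/md/dev-hdb1/state
        echo remove > /sys/block/md0/md/dev-hdb1/state
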
   errors
     An approximate count of read errors that have been detected on
     this device but have not caused the device to be evicted from
     the array (either because they were corrected or because they
     happened while the array was read-only). When using version-1
     metadata, this value persists across restarts of the array.

     This value can be written while assembling an array thus
     providing an ongoing count for arrays with metadata managed by
     userspace.

   slot
     This gives the role that the device has in the array. It will
     either be 'none' if the device is not active in the array
     (i.e. is a spare or has failed) or an integer less than the
     'raid_disks' number for the array indicating which position
     it currently fills. This can only be set while assembling an
     array. A device for which this is set is assumed to be working.

   offset
     This gives the location in the device (in sectors from the
     start) where data from the array will be stored. Any part of
     the device before this offset is not touched, unless it is
     used for storing metadata (Formats 1.1 and 1.2).

   size
     The amount of the device, after the offset, that can be used
     for storage of data. This will normally be the same as the
     component_size. This can be written while assembling an
     array. If a value less than the current component_size is
     written, it will be rejected.

   recovery_start
     When the device is not 'in_sync', this records the number of
     sectors from the start of the device which are known to be
     correct. This is normally zero, but during a recovery
     operation it will steadily increase, and if the recovery is
     interrupted, restoring this value can cause recovery to
     avoid repeating the earlier blocks. With v1.x metadata, this
     value is saved and restored automatically.

     This can be set whenever the device is not an active member of
     the array, either before the array is activated, or before
     the slot is set.

     Setting this to 'none' is equivalent to setting 'in_sync'.
     Setting to any other value also clears the 'in_sync' flag.

   bad_blocks
     This gives the list of all known bad blocks in the form of
     start address and length (in sectors respectively). If output
     is too big to fit in a page, it will be truncated. Writing
     "sector length" to this file adds new acknowledged (i.e.
     recorded to disk safely) bad blocks.
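     For example, to list the known bad blocks of a component and to
     record an already-acknowledged bad range of 8 sectors starting
     at sector 2048 (values illustrative):

        cat /sys/block/md0/md/dev-hdb1/bad_blocks
        echo "2048 8" > /sys/block/md0/md/dev-hdb1/bad_blocks
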
   unacknowledged_bad_blocks
     This gives the list of known-but-not-yet-saved-to-disk bad
     blocks in the same form as 'bad_blocks'. If output is too big
     to fit in a page, it will be truncated. Writing to this file
     adds bad blocks without acknowledging them. This is largely
     for testing.

An active md device will also contain an entry for each active device
in the array. These are named

   rdNN

where 'NN' is the position in the array, starting from 0.
So for a 3 drive array there will be rd0, rd1, rd2.
These are symbolic links to the appropriate 'dev-XXX' entry.
Thus, e.g.

   cat /sys/block/md*/md/rd*/state

will show 'in_sync' on every line.

Active md devices for levels that support data redundancy (1,4,5,6)
also have

   sync_action
     a text file that can be used to monitor and control the rebuild
     process. It contains one word which can be one of:
       resync  - redundancy is being recalculated after unclean
                 shutdown or creation
       recover - a hot spare is being built to replace a
                 failed/missing device
       idle    - nothing is happening
       check   - A full check of redundancy was requested and is
                 happening. This reads all blocks and checks
                 them. A repair may also happen for some raid
                 levels.
       repair  - A full check and repair is happening. This is
                 similar to 'resync', but was requested by the
                 user, and the write-intent bitmap is NOT used to
                 optimise the process.

     This file is writable, and each of the strings that could be
     read is meaningful for writing.

     'idle' will stop an active resync/recovery etc. There is no
     guarantee that another resync/recovery may not be automatically
     started again, though some event will be needed to trigger
     this.
     'resync' or 'recovery' can be used to restart the
     corresponding operation if it was stopped with 'idle'.
     'check' and 'repair' will start the appropriate process
     providing the current state is 'idle'.

     This file responds to select/poll. Any important change in the value
     triggers a poll event. Sometimes the value will briefly be
     "recover" if a recovery seems to be needed, but cannot be
     achieved. In that case, the transition to "recover" isn't
     notified, but the transition away is.
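
     For example, a periodic scrub can be driven entirely through
     this file (assuming /dev/md0):

        echo check > /sys/block/md0/md/sync_action    # read-only redundancy check
        cat /sys/block/md0/md/sync_action             # "check" while running, then "idle"
        echo repair > /sys/block/md0/md/sync_action   # check and rewrite mismatches
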
   degraded
     This contains a count of the number of devices by which the
     array is degraded. So an optimal array will show '0'. A
     single failed/missing drive will show '1', etc.
     This file responds to select/poll, any increase or decrease
     in the count of missing devices will trigger an event.

   mismatch_cnt
     When performing 'check' and 'repair', and possibly when
     performing 'resync', md will count the number of errors that are
     found. The count in 'mismatch_cnt' is the number of sectors
     that were re-written, or (for 'check') would have been
     re-written. As most raid levels work in units of pages rather
     than sectors, this may be larger than the number of actual errors
     by a factor of the number of sectors in a page.

   bitmap_set_bits
     If the array has a write-intent bitmap, then writing to this
     attribute can set bits in the bitmap, indicating that a resync
     would need to check the corresponding blocks. Either individual
     numbers or start-end pairs can be written. Multiple numbers
     can be separated by a space.
     Note that the numbers are 'bit' numbers, not 'block' numbers.
     They should be scaled by the bitmap_chunksize.

   sync_speed_min
   sync_speed_max
     These are similar to /proc/sys/dev/raid/speed_limit_{min,max},
     however they only apply to the particular array.
     If no value has been written to these, or if the word 'system'
     is written, then the system-wide value is used. If a value,
     in kibibytes-per-second, is written, then it is used.
     When the files are read, they show the currently active value
     followed by "(local)" or "(system)" depending on whether it is
     a locally set or system-wide value.
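     For example, to cap resync speed for this one array and later
     return to the system-wide limit:

        echo 50000 > /sys/block/md0/md/sync_speed_max   # local limit, KiB/sec
        cat /sys/block/md0/md/sync_speed_max            # e.g. "50000 (local)"
        echo system > /sys/block/md0/md/sync_speed_max
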
   sync_completed
     This shows the number of sectors that have been completed of
     whatever the current sync_action is, followed by the number of
     sectors in total that could need to be processed. The two
     numbers are separated by a '/' thus effectively showing one
     value, a fraction of the process that is complete.
     A 'select' on this attribute will return when resync completes,
     when it reaches the current sync_max (below) and possibly at
     other times.

   sync_max
     This is a number of sectors at which point a resync/recovery
     process will pause. When a resync is active, the value can
     only ever be increased, never decreased. The value of 'max'
     effectively disables the limit.

   sync_speed
     This shows the current actual speed, in K/sec, of the current
     sync_action. It is averaged over the last 30 seconds.

   suspend_lo
   suspend_hi
     The two values, given as numbers of sectors, indicate a range
     within the array where IO will be blocked. This is currently
     only supported for raid4/5/6.

   sync_min
   sync_max
     The two values, given as numbers of sectors, indicate a range
     within the array where 'check'/'repair' will operate. Must be
     a multiple of chunk_size. When it reaches "sync_max" it will
     pause, rather than complete.
     You can use 'select' or 'poll' on "sync_completed" to wait for
     that number to reach sync_max. Then you can either increase
     "sync_max", or write 'idle' to "sync_action".
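     A minimal sketch of checking an array one window at a time using
     these attributes, as described above (window size illustrative,
     chosen as a multiple of the chunk size):

        cd /sys/block/md0/md
        echo 0       > sync_min
        echo 1048576 > sync_max        # check the first 512MB (in sectors)
        echo check   > sync_action
        # ... wait (e.g. poll sync_completed) until it reaches sync_max ...
        echo 2097152 > sync_max        # extend the window, or:
        echo idle    > sync_action     # stop instead
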
Each active md device may also have attributes specific to the
personality module that manages it.
These are specific to the implementation of the module and could
change substantially if the implementation changes.

These currently include:

   stripe_cache_size  (currently raid5 only)
     number of entries in the stripe cache. This is writable, but
     there are upper and lower limits (32768, 16). Default is 128.
     See the example below.
   stripe_cache_active  (currently raid5 only)
     number of active entries in the stripe cache
   preread_bypass_threshold  (currently raid5 only)
     number of times a stripe requiring preread will be bypassed by
     a stripe that does not require preread. For fairness, defaults
     to 1. Setting this to 0 disables bypass accounting and
     requires preread stripes to wait until all full-width stripe-
     writes are complete. Valid values are 0 to stripe_cache_size.
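
For example, enlarging the raid5 stripe cache on md0 and checking how
much of it is in use (the value is illustrative; a larger cache uses
more memory):

   echo 4096 > /sys/block/md0/md/stripe_cache_size
   cat /sys/block/md0/md/stripe_cache_active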