Grow (or shrink) an array, or otherwise reshape it in some way.
Currently supported growth options include changing the active size
of component devices and changing the number of active devices in
Linear and RAID levels 0/1/4/5/6,
changing the RAID level between 0, 1, 5, and 6, and between 0 and 10,
changing the chunk size and layout for RAID 0/4/5/6, as well as adding or
removing a write-intent bitmap.
Use the original 0.90 format superblock. This format limits arrays to
28 component devices and limits component devices of levels 1 and
greater to 2 terabytes. It is also possible for there to be confusion
about whether the superblock applies to a whole device or just the
last partition, if that partition starts on a 64K boundary.
.ie '{DEFAULT_METADATA}'0.90'
.IP "1, 1.0, 1.1, 1.2"
.el
.IP "1, 1.0, 1.1, 1.2 default"
Use the new version-1 format superblock. This has fewer restrictions.
It can easily be moved between hosts with different endian-ness, and a
recovery operation can be checkpointed and restarted. The different
sub-versions store the superblock at different locations on the
device, either at the end (for 1.0), at the start (for 1.1) or 4K from
the start (for 1.2). "1" is equivalent to "1.0".
'if '{DEFAULT_METADATA}'1.2' "default" is equivalent to "1.2".
Use the "Industry Standard" DDF (Disk Data Format) format defined by
SNIA.
which means to choose the largest size that fits on all current drives.
Before reducing the size of the array (with
.BR "\-\-grow \-\-size=" )
you should make sure that space isn't needed. If the device holds a
filesystem, you would need to resize the filesystem to use less space.

After reducing the array size you should check that the data stored in
the device is still available. If the device holds a filesystem, then
an 'fsck' of the filesystem is a minimum requirement. If there are
problems the array can be made bigger again with no loss with another
.B "\-\-grow \-\-size="
command.

This value can not be used with
.B CONTAINER
metadata such as DDF and IMSM.
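.PP
For example, a shrink of the per\-device size on a RAID5 at /dev/md0
holding an ext4 filesystem might look like this (device name and sizes
are illustrative):
.PP
.B "  e2fsck \-f /dev/md0"
.br
.B "  resize2fs /dev/md0 180G"
.br
.B "  mdadm \-\-grow /dev/md0 \-\-size=100000000"
.br
.B "  e2fsck \-f /dev/md0"
.PP
Here the filesystem is first shrunk to fit comfortably within the
reduced array and checked again afterwards;
.B \-\-size
counts kibibytes per component device.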
.BR \-Z ", " \-\-array\-size=
This is only meaningful with
.B \-\-grow
and its effect is not persistent: when the array is stopped and
restarted the default array size will be restored.

Setting the array-size causes the array to appear smaller to programs
that access the data. This is particularly needed before reshaping an
array so that it will be smaller. As the reshape is not reversible,
but setting the size with
.B \-\-array\-size
is, it is required that the array size is reduced as appropriate
before the number of devices in the array is reduced.

Before reducing the size of the array you should make sure that space
isn't needed. If the device holds a filesystem, you would need to
resize the filesystem to use less space.

After reducing the array size you should check that the data stored in
the device is still available. If the device holds a filesystem, then
an 'fsck' of the filesystem is a minimum requirement. If there are
problems the array can be made bigger again with no loss with another
.B "\-\-grow \-\-array\-size="
command.

A suffix of 'M' or 'G' can be given to indicate Megabytes or
Gigabytes respectively.

A value of
.B max
restores the apparent size of the array to be whatever the real
amount of available space is.
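.PP
For example, before reducing a RAID6 of six 500GB devices at /dev/md0
to five devices (names and sizes are illustrative):
.PP
.B "  mdadm \-\-grow /dev/md0 \-\-array\-size=1500G"
.PP
After checking that the data is intact, the apparent size can be
restored with
.B \-\-array\-size=max
if the device reduction is abandoned.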
.BR \-c ", " \-\-chunk=
Specify chunk size in kibibytes. The default when creating an
array is 512KB. To ensure compatibility with earlier versions, the
default when building an array with no persistent metadata is 64KB.
This is only meaningful for RAID0, RAID4, RAID5, RAID6, and RAID10.

RAID4, RAID5, RAID6, and RAID10 require the chunk size to be a power
of 2. In any case it must be a multiple of 4KB.

A suffix of 'M' or 'G' can be given to indicate Megabytes or
Gigabytes respectively.
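.PP
For example, to create a RAID5 with a 256KB chunk size (device names
are illustrative):
.PP
.B "  mdadm \-\-create /dev/md0 \-\-level=5 \-\-raid\-devices=3 \-\-chunk=256 /dev/sda1 /dev/sdb1 /dev/sdc1"
.PP
256 is a power of 2 and a multiple of 4KB, as RAID5 requires.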
.BR \-\-rounding=
Specify rounding factor for a Linear array. The size of each
actually clean. If that is the case, such as after running
badblocks, this argument can be used to tell mdadm the
facts the operator knows.

When an array is resized to a larger size with
.B "\-\-grow \-\-size="
the new space is normally resynced in the same way that the whole
array is resynced at creation. From Linux version 2.6.40,
.B \-\-assume\-clean
can be used with that command to avoid the automatic resync.
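.PP
For example, to use the enlarged component devices without the
automatic resync of the new space (device name is illustrative):
.PP
.B "  mdadm \-\-grow /dev/md0 \-\-size=max \-\-assume\-clean"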
.BR \-\-backup\-file=
This is needed when
.B \-\-grow
is used to increase the number of raid-devices in a RAID5 or RAID6 if
there are no spare devices available, or to shrink, change RAID level
or layout. See the GROW MODE section below on RAID\-DEVICES CHANGES.
The file must be stored on a separate device, not on the RAID array
being reshaped.
.BR \-N ", " \-\-name=

See this option under Create and Build options.
.BR \-a ", " "\-\-add"
This option can be used in Grow mode in two cases.

If the target array is a Linear array, then
.B \-\-add
can be used to add one or more devices to the array. They
are simply catenated on to the end of the array. Once added, the
devices cannot be removed.

If the
.B \-\-raid\-disks
option is being used to increase the number of devices in an array,
then
.B \-\-add
can be used to add some extra devices to be included in the array.
In most cases this is not needed as the extra devices can be added as
spares first, and then the number of raid-disks can be changed.
However for RAID0, it is not possible to add spares. So to increase
the number of devices in a RAID0, it is necessary to set the new
number of devices, and to add the new devices, in the same command.
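.PP
For example, to grow a two\-device RAID0 to three devices in a single
command (device names are illustrative):
.PP
.B "  mdadm \-\-grow /dev/md0 \-\-raid\-devices=3 \-\-add /dev/sdc1"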
.BR \-b ", " \-\-bitmap=
Specify the bitmap file that was given when the array was created. If
an array has an
.B internal
bitmap, there is no need to specify this when assembling the array.

.BR \-\-backup\-file=
If
.B \-\-backup\-file
was used while reshaping an array (e.g. changing number of devices or
chunk size) and the system crashed during the critical section, then the same
.B \-\-backup\-file
must be presented to
.B mdadm
to allow possibly corrupted data to be restored, and the reshape
to be completed.
.BR \-\-invalid\-backup
If the file needed for the above option is not available for any
reason an empty file can be given together with this option to
indicate that the backup file is invalid. In this case the data that
was being rearranged at the time of the crash could be irrecoverably
lost, but the rest of the array may still be recoverable. This option
should only be used as a last resort if there is no way to recover the
.BR \-U ", " \-\-update=
to determine the maximum usable amount of space on each device and
update the relevant field in the metadata.

The
.B no\-bitmap
option can be used when an array has an internal bitmap which is
corrupt in some way so that assembling the array normally fails. It
will cause any internal bitmap to be ignored.

.B \-\-auto\-update\-homehost
This flag is only meaningful with auto-assembly (see discussion below).
In that situation, if no suitable arrays are found for this homehost,
.I mdadm
will rescan for any arrays at all and will assemble them and update the
homehost to match the current host.

.SH For Manage mode:
.IR mdadm.conf (5)
for further details.

cannot find any array for the given host at all, and if
.B \-\-auto\-update\-homehost
is given, then
.I mdadm
will search again for any array (not just an array created for this
host) and will assemble each assuming
.BR \-\-update=homehost .
This will change the host tag in the superblock so that on the next run,
these arrays will be found without the second pass. The intention of
this feature is to support transitioning a set of md arrays to using
homehost tagging.

The reason for requiring arrays to be tagged with the homehost for
auto assembly is to guard against problems that can arise when moving
devices from one host to another.

Note: Auto assembly cannot be used for assembling and activating some
arrays which are undergoing reshape. In particular as the
.B \-\-backup\-file
cannot be given, any reshape which requires a backup-file to continue
cannot be started by auto assembly. An array which is growing to more
devices and has passed the critical section can be assembled using
If the removal succeeds but the adding fails, then it is added back to
the original array.

If the spare group for a degraded array is not defined,
.I mdadm
will look at the rules of spare migration specified by POLICY lines in
.B mdadm.conf
and then follow similar steps as above if a matching spare is found.
The GROW mode is used for changing the size or shape of an active
array.
For this to work, the kernel must support the necessary change.
Various types of growth are being added during 2.6 development.

Currently the supported changes include
.IP \(bu 4
change the "size" attribute for RAID1, RAID4, RAID5 and RAID6.
.IP \(bu 4
increase or decrease the "raid\-devices" attribute of RAID0, RAID1, RAID4,
RAID5, and RAID6.
.IP \(bu 4
change the chunk-size and layout of RAID0, RAID4, RAID5 and RAID6.
.IP \(bu 4
convert between RAID1 and RAID5, between RAID5 and RAID6, between
RAID0, RAID4, and RAID5, and between RAID0 and RAID10 (in the near-2 mode).
.IP \(bu 4
add a write-intent bitmap to any array which supports these bitmaps, or
remove a write-intent bitmap from such an array.
.PP
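.PP
For example, to add an internal write\-intent bitmap to an array, and
later remove it (device name is illustrative):
.PP
.B "  mdadm \-\-grow /dev/md0 \-\-bitmap=internal"
.br
.B "  mdadm \-\-grow /dev/md0 \-\-bitmap=none"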
Using GROW on containers is currently only supported for Intel's IMSM
container format. The number of devices in a container can be
increased - which affects all arrays in the container - or an array
in a container can be converted between levels where those levels are
supported by the container, and the conversion is one of those listed
above.

Grow functionality (e.g. expanding the number of raid devices) for
Intel's IMSM container format has an experimental status. It is
guarded by the
.B MDADM_EXPERIMENTAL
environment variable which must be set to '1' for a GROW command to
succeed.
This is for the following reasons:

Intel's native IMSM check-pointing is not fully tested yet.
This can cause IMSM incompatibility during the grow process: an array
which is growing cannot roam between Microsoft Windows(R) and Linux
systems.

Interrupting a grow operation is not recommended, because it
has not been fully tested for Intel's IMSM container format yet.

Note: Intel's native checkpointing doesn't use the
.B \-\-backup\-file
option and is transparent to the assembly feature.
.SS SIZE CHANGES
Normally when an array is built the "size" is taken from the smallest
of the drives. If all the small drives in an array are, one at a
time, removed and replaced with larger drives, then you could have an
array of large drives with only a small amount used. In this
case, changing the "size" with GROW mode will allow the extra space to
start to be used. If the size is increased in this way, a "resync"
process will start to make sure the new parts of the array
are synchronised.

Note that when an array changes size, any filesystem that may be
stored in the array will not automatically grow or shrink to use or
vacate the space. The
filesystem will need to be explicitly told to use the extra space
after growing, or to reduce its size
.B prior
to shrinking the array.

Also the size of an array cannot be changed while it has an active
bitmap. If an array has a bitmap, it must be removed before the size
can be changed. Once the change is complete a new bitmap can be created.
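.PP
For example, after all drives in /dev/md0 have been replaced with
larger ones, the bitmap must be removed before the size change (device
name is illustrative):
.PP
.B "  mdadm \-\-grow /dev/md0 \-\-bitmap=none"
.br
.B "  mdadm \-\-grow /dev/md0 \-\-size=max"
.br
.B "  mdadm \-\-grow /dev/md0 \-\-bitmap=internal"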
an interrupted "reshape". From 2.6.31, the Linux Kernel is able to
increase or decrease the number of devices in a RAID5 or RAID6.

From 2.6.35, the Linux Kernel is able to convert a RAID0 into a RAID4
or RAID5.
.I mdadm
uses this functionality and the ability to add
devices to a RAID4 to allow devices to be added to a RAID0. When
requested to do this,
.I mdadm
will convert the RAID0 to a RAID4, add the necessary disks and make
the reshape happen, and then convert the RAID4 back to RAID0.

When decreasing the number of devices, the size of the array will also
decrease. If there was data in the array, it could get destroyed and
this is not reversible, so you should firstly shrink the filesystem on
the array to fit within the new size. To help prevent accidents,
.I mdadm
requires that the size of the array be decreased first with
.BR "mdadm \-\-grow \-\-array\-size" .
This is a reversible change which simply makes the end of the array
inaccessible. The integrity of any data can then be checked before
the non-reversible reduction in the number of devices is requested.
When relocating the first few stripes on a RAID5 or RAID6, it is not
possible to keep the data on disk completely consistent and
crash-proof. To provide the required safety, mdadm disables writes to
the array while this "critical section" is reshaped, and takes a
backup of the data that is in that section. For grows, this backup may be
stored in any spare devices that the array has, however it can also be
stored in a separate file specified with the
.B \-\-backup\-file
option, and is required to be specified for shrinks, RAID level
changes and layout changes. If this option is used, and the system
does crash during the critical period, the same file must be passed to
.B \-\-assemble
to restore the backup and reassemble the array. When shrinking rather
than growing the array, the reshape is done from the end towards the
beginning, so the "critical section" is at the end of the reshape.
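.PP
For example, to grow a three\-device RAID5 to four devices after
adding a fourth device as a spare, and to reassemble if the system
crashed in the critical section (names are illustrative):
.PP
.B "  mdadm /dev/md0 \-\-add /dev/sde1"
.br
.B "  mdadm \-\-grow /dev/md0 \-\-raid\-devices=4 \-\-backup\-file=/root/md0.backup"
.br
.B "  mdadm \-\-assemble /dev/md0 \-\-backup\-file=/root/md0.backup /dev/sd[bcde]1"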
.SS LEVEL CHANGES

Changing the RAID level of any array happens instantaneously. However
in the RAID5 to RAID6 case this requires a non-standard layout of the
RAID6 data, and in the RAID6 to RAID5 case that non-standard layout is
required before the change can be accomplished. So while the level
change is instant, the accompanying layout change can take quite a
long time. A
.B \-\-backup\-file
is required. If the array is not simultaneously being grown or
shrunk, so that the array size will remain the same - for example,
reshaping a 3-drive RAID5 into a 4-drive RAID6 - the backup file will
be used not just for a "critical section" but throughout the reshape
operation, as described below under LAYOUT CHANGES.
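.PP
For example, to reshape a three\-drive RAID5 into a four\-drive RAID6
(names are illustrative):
.PP
.B "  mdadm /dev/md0 \-\-add /dev/sde1"
.br
.B "  mdadm \-\-grow /dev/md0 \-\-level=6 \-\-raid\-devices=4 \-\-backup\-file=/root/md0.backup"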
.SS CHUNK-SIZE AND LAYOUT CHANGES

To ensure against data loss in the case of a crash, a
.B \-\-backup\-file
must be provided for these changes. Small sections of the array will
be copied to the backup file while they are being rearranged. This
means that all the data is copied twice, once to the backup and once
to the new layout on the array, so this type of reshape will go very
slowly.

If the reshape is interrupted for any reason, this backup file must be
made available to
.B "mdadm \-\-assemble"
so the array can be reassembled. Consequently the file cannot be
stored on the device being reshaped.
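.PP
For example, to change the chunk size of an existing RAID5 to 128KB
(names are illustrative; the backup file is on a different device):
.PP
.B "  mdadm \-\-grow /dev/md0 \-\-chunk=128 \-\-backup\-file=/root/md0.backup"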