<div id="content">
<h1>Storage Management</h1>
This page describes the backends for the storage management capabilities in
libvirt. Libvirt provides storage management on the physical host through
storage pools and volumes.
A storage pool is a quantity of storage set aside by an
administrator, often a dedicated storage administrator, for use
by virtual machines. Storage pools are divided into storage
volumes either by the storage administrator or the system
administrator, and the volumes are assigned to VMs as block
devices.
For example, the storage administrator responsible for an NFS
server creates a share to store virtual machines' data. The
system administrator defines a pool on the virtualization host
with the details of the share
(e.g. nfs.example.com:/path/to/share should be mounted on
/vm_data). When the pool is started, libvirt mounts the share
on the specified directory, just as if the system administrator
logged in and executed 'mount nfs.example.com:/path/to/share
/vm_data'. If the pool is configured to autostart, libvirt
ensures that the NFS share is mounted on the directory specified
when libvirt is started.
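The NFS pool described above could be defined with XML along
these lines. This is a sketch: the pool name vm_data is
illustrative, and 'netfs' is libvirt's pool type for
network-filesystem storage:

<pre>
&lt;pool type='netfs'&gt;
  &lt;name&gt;vm_data&lt;/name&gt;
  &lt;source&gt;
    &lt;host name='nfs.example.com'/&gt;
    &lt;dir path='/path/to/share'/&gt;
    &lt;format type='nfs'/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/vm_data&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;
</pre>

Saving this XML to a file and passing it to the pool-define
command makes the pool known to libvirt; starting the pool then
performs the mount described above.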
Once the pool is started, the files in the NFS share are
reported as volumes, and the storage volumes' paths may be
queried using the libvirt APIs. The volumes' paths can then be
copied into the section of a VM's XML definition describing the
source storage for the VM's block devices. In the case of NFS,
an application using the libvirt APIs can create and delete
volumes in the pool (files in the NFS share) up to the limit of
the size of the pool (the storage capacity of the share). Not
all pool types support creating and deleting volumes. Stopping
the pool (somewhat unfortunately referred to by virsh and the
API as "pool-destroy") undoes the start operation, in this case
unmounting the NFS share. The data on the share is not modified
by the destroy operation, despite the name. See 'man virsh' for
more details.
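As a sketch of the step above, a volume path reported by the
pool (here the hypothetical file /vm_data/guest.img) would
appear as the source of a disk element in the VM's XML
definition:

<pre>
&lt;disk type='file' device='disk'&gt;
  &lt;driver name='qemu' type='raw'/&gt;
  &lt;source file='/vm_data/guest.img'/&gt;
  &lt;target dev='vda' bus='virtio'/&gt;
&lt;/disk&gt;
</pre>

The driver and target attributes shown are illustrative choices
for a QEMU guest with a virtio disk; the essential point is that
the volume's path fills in the source element.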
A second example is an iSCSI pool. A storage administrator
provisions an iSCSI target to present a set of LUNs to the host
running the VMs. When libvirt is configured to manage that
iSCSI target as a pool, libvirt will ensure that the host logs
into the iSCSI target and libvirt can then report the available
LUNs as storage volumes. The volumes' paths can be queried and
used in VMs' XML definitions as in the NFS example. In this
case, the LUNs are defined on the iSCSI server, and libvirt
cannot create and delete volumes.
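Such an iSCSI pool might be defined along these lines; the host
name, target IQN, and pool name are placeholders:

<pre>
&lt;pool type='iscsi'&gt;
  &lt;name&gt;iscsi_pool&lt;/name&gt;
  &lt;source&gt;
    &lt;host name='iscsi.example.com'/&gt;
    &lt;device path='iqn.2013-06.com.example:iscsi-pool'/&gt;
  &lt;/source&gt;
  &lt;target&gt;
    &lt;path&gt;/dev/disk/by-path&lt;/path&gt;
  &lt;/target&gt;
&lt;/pool&gt;
</pre>

The target path names the directory under which the LUNs'
device nodes appear on the host once the host has logged into
the target.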
Storage pools and volumes are not required for the proper
operation of VMs. Pools and volumes provide a way for libvirt
to ensure that a particular piece of storage will be available
for a VM, but some administrators will prefer to manage their
own storage, and VMs will operate properly without any pools or
volumes defined. On systems that do not use pools, system
administrators must ensure the availability of the VMs' storage
using whatever tools they prefer, for example, adding the NFS
share to the host's fstab so that the share is mounted at boot
time.
If at this point the value of pools and volumes over traditional
system administration tools is unclear, note that one of the
features of libvirt is its remote protocol, so it's possible to
manage all aspects of a virtual machine's lifecycle as well as
the configuration of the resources required by the VM. These
operations can be performed on a remote host entirely within the
libvirt API. In other words, a management application using
libvirt can enable a user to perform all the required tasks for
configuring the host for a VM: allocating resources, running the
VM, shutting it down and deallocating the resources, without
requiring shell access or any other control channel.
Libvirt supports the following storage pool types:
<a href="#StorageBackendDir">Directory backend</a>