QUICK START

You are in a hurry and you don't want to read this man page. OK: without warranty, here is a command to launch a shell inside a container with a predefined configuration template. It may work.

   lxc-execute -n foo -f /etc/lxc/lxc-macvlan.conf /bin/bash
OVERVIEW

Container technology is actively being pushed into the mainstream Linux kernel. It provides resource management through control groups (aka process containers) and resource isolation through namespaces.

Linux containers, lxc, aims to use these new functionalities to provide a userspace container object which provides full resource isolation and resource control for an application or a system.

The first objective of this project is to make life easier for the kernel developers involved in the containers project, and especially to continue working on the new Checkpoint/Restart features. lxc is small enough to easily manage a container with simple command lines and complete enough to be used for other purposes.
REQUIREMENTS

lxc relies on a set of functionalities provided by the kernel which need to be active. Depending on which functionalities are missing, lxc will either work with a restricted set of features or simply fail.

The following list gives the kernel features to be enabled in the kernel to have a fully featured container:

   * Control Group support
     -> namespace cgroup subsystem
     -> Group CPU scheduler
     -> control group freezer subsystem
     -> Basis for grouping tasks (Control Groups)
     -> Simple CPU accounting
     -> Memory resource controllers for Control Groups
   * Network namespace support
For the moment, the easiest way to have all the features in the kernel is to use the git tree at:

   git://git.kernel.org/pub/scm/linux/kernel/git/daveh/linux-2.6-lxc.git

But the kernel versions >= 2.6.27 shipped with the distros may work with lxc; these will have less functionality, but enough to be interesting. The planned kernel version with which lxc should be fully functional is 2.6.29.
Before using lxc, your system should be configured with file capabilities; otherwise you will need to run the lxc commands as root. The control group file system can be mounted anywhere, eg:

   mount -t cgroup cgroup /cgroup
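If you want the control group file system mounted automatically at boot, an entry along these lines in /etc/fstab should work (the /cgroup mount point is only the one used in the example above; any directory will do):

   # mount the cgroup pseudo file system on /cgroup at boot
   cgroup  /cgroup  cgroup  defaults  0 0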
FUNCTIONAL SPECIFICATION

A container is an object whose configuration is persistent. The application will be launched inside this container and will use the configuration which was previously created.
How to run an application in a container?

Before running an application, you should decide which resources you want to isolate. The default configuration is isolation of the pids, the sysv ipc and the mount points. If you want to run a simple shell inside a container, a basic configuration is enough, especially if you want to share the rootfs. If you want to run an application like sshd, you should provide a new network stack and a new hostname. If you want to avoid conflicts with some files, eg. /var/run/httpd.pid, you should remount /var/run with an empty directory. If you want to avoid such conflicts in all cases, you can specify a rootfs for the container. The rootfs can be a directory tree, previously bind mounted from the initial rootfs, so you can still use your distro but with your own /etc and /home.
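As an illustration, a minimal configuration for such an sshd container might look like the sketch below. The exact keys and syntax are described in lxc.conf(5); the interface name eth0 and the ipv4 address are placeholders, and the paths match the sshd example which follows:

   # hostname seen inside the container
   lxc.utsname = sshd
   # give the container its own network stack: a macvlan
   # interface attached to the host's eth0 (assumed to exist)
   lxc.network.type = macvlan
   lxc.network.flags = up
   lxc.network.link = eth0
   lxc.network.ipv4 = 1.2.3.4/24
   # mount points file and root file system for the container
   lxc.mount = /home/root/sshd/fstab
   lxc.rootfs = /home/root/sshd/rootfs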
Here is an example of a directory tree for sshd:

   [root@lxc sshd]$ tree -d rootfs
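An illustrative layout is sketched below; the exact tree depends on what the application needs. This one provides the directories bind mounted by the fstab that follows, plus the /var/empty and /var/run directories sshd typically expects:

   rootfs
   |-- bin
   |-- dev
   |   `-- pts
   |-- etc
   |   `-- ssh
   |-- lib
   |-- proc
   |-- root
   |-- sbin
   |-- usr
   `-- var
       |-- empty
       |   `-- sshd
       `-- run
           `-- sshd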
and the mount points file associated with it:

   [root@lxc sshd]$ cat fstab

   /lib /home/root/sshd/rootfs/lib none ro,bind 0 0
   /bin /home/root/sshd/rootfs/bin none ro,bind 0 0
   /usr /home/root/sshd/rootfs/usr none ro,bind 0 0
   /sbin /home/root/sshd/rootfs/sbin none ro,bind 0 0
How to run a system in a container?

Running a system inside a container is paradoxically easier than running an application. Why? Because you don't have to care about which resources to isolate: everything needs to be isolated, except /dev which needs to be remounted in the container rootfs. The other resources are specified as isolated but without configuration, because the container will set them up itself, eg. the ipv4 address will be set up by the system container's init scripts. Here is an example of the mount points file:
   [root@lxc debian]$ cat fstab

   /dev /home/root/debian/rootfs/dev none bind 0 0
   /dev/pts /home/root/debian/rootfs/dev/pts none bind 0 0
A good way to get a console is to bind mount it to our tty, so we can see the output of the system container booting and log into it:

   /proc/self/fd/0 /home/root/debian/rootfs/dev/console none bind 0 0
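Putting the pieces together, booting such a system container might look like the sketch below, where debian.conf is a hypothetical lxc.conf(5) file pointing lxc.rootfs at /home/root/debian/rootfs and lxc.mount at the fstab above:

   # create the container object from the configuration,
   # then boot it; with no command, lxc-start runs /sbin/init
   lxc-create -n debian -f debian.conf
   lxc-start -n debian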
These examples are available in the contrib package located at:

   https://sourceforge.net/projects/lxc/
When the container is created, it contains the configuration information. When a process is launched, the container will be starting and then running. When the last process running inside the container exits, the container is stopped.

In case of failure when the container is initialized, it will pass through the aborting state.
    ---------
   | STOPPED |<---------------
    ---------                 |
        |                     |
      start                   |
        |                     |
        V                     |
    ----------                |
   | STARTING |--error-       |
    ----------         |      |
        |              |      |
        V              V      |
    ---------    ----------   |
   | RUNNING |  | ABORTING |  |
    ---------    ----------   |
        |              |      |
   no process          |      |
        |              |      |
        V              |      |
    ----------         |      |
   | STOPPING |<-------       |
    ----------                |
        |                     |
         ---------------------
The container is configured through a configuration file; the format of the configuration file is described in lxc.conf(5).
CREATING / DESTROYING THE CONTAINERS

The container is created via the lxc-create command. It takes a container name as parameter and an optional configuration file. The name is used by the different commands to refer to this container. The lxc-destroy command will destroy the container object.
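For example, reusing the configuration template from the quick start:

   # create the container 'foo' from a configuration file,
   # and later destroy the container object
   lxc-create -n foo -f /etc/lxc/lxc-macvlan.conf
   lxc-destroy -n foo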
STARTING / STOPPING A CONTAINER

When the container has been created, it is ready to run an application / system. When the application has to be destroyed, the container can be stopped; that will kill all the processes of the container.

Running an application inside a container is not exactly the same thing as running a system. For this reason, there are two commands to run an application in a container:

   lxc-execute -n foo [-f config] /bin/bash
   lxc-start -n foo [/bin/bash]
The lxc-execute command will run the specified command in a container, but it will mount /proc and autocreate/autodestroy the container if it does not exist. Furthermore, it will create an intermediate process, lxc-init, which is in charge of launching the specified command; this allows daemons to be supported in the container. In other words, in the container lxc-init has pid 1 and the first process of the application has pid 2.

The lxc-start command will run the specified command in the container, doing nothing else than using the configuration specified by lxc-create. The pid of the first process is 1. If no command is specified, lxc-start will run /sbin/init.
To summarize, lxc-execute is for running an application and lxc-start is for running a system.

If the application is no longer responding, is inaccessible and is not able to finish by itself, a wild lxc-stop command will kill all the processes in the container without pity.
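For example, to kill everything running in the container 'foo':

   lxc-stop -n foo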
FREEZE / UNFREEZE A CONTAINER

Sometimes it is useful to stop all the processes belonging to a container, eg. for job scheduling. The command:

   lxc-freeze -n foo

will put all the processes in an uninterruptible state, and:

   lxc-unfreeze -n foo

will resume all the tasks.

This feature is available only if the cgroup freezer is enabled in the kernel.
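Under the hood this relies on the kernel's freezer control group. As a sketch, assuming the control group file system is mounted on /cgroup as in the example above and that lxc names the container's group after the container, the current state can be read directly:

   # prints FROZEN while the container is frozen, THAWED otherwise
   cat /cgroup/foo/freezer.state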
GETTING INFORMATION ABOUT THE CONTAINER

When there are a lot of containers, it is hard to follow what has been created or destroyed, what is running, or what pids are running in a specific container. For this reason, the following commands give this information:

lxc-ls lists the containers of the system. The command is a script built on top of ls, so it accepts the options of the ls command, eg:

   lxc-ls -1

will display the container list in one column, and:

   lxc-ls -l

will display the container list and their permissions.
lxc-ps will display the pids for a specific container. Like lxc-ls, lxc-ps is built on top of ps and accepts the same options, eg:

   lxc-ps -n foo --forest

will display the process hierarchy for the container 'foo'.

lxc-info gives information about a specific container; at present, only the state of the container is displayed.
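For example:

   lxc-info -n foo

will print the state of the container 'foo', one of the states of the diagram above (RUNNING, STOPPED, ...).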
Here is an example of how the combination of these commands allows listing all the containers and retrieving their state:

   for i in $(lxc-ls -1); do
      lxc-info -n $i
   done

And displaying all the pids of all the containers:

   for i in $(lxc-ls -1); do
      lxc-ps -n $i --forest
   done
MONITORING THE CONTAINERS

It is sometimes useful to track the states of a container, for example to monitor it or just to wait for a specific state in a script.

The lxc-monitor command will monitor one or several containers. The parameter of this command accepts a regular expression, for example:

   lxc-monitor -n "foo|bar"

will monitor the states of the containers named 'foo' and 'bar', and:

   lxc-monitor -n ".*"

will monitor all the containers.
SETTING THE CONTROL GROUP FOR A CONTAINER

The container is tied to the control groups. A control group value can be set or retrieved while the container is running.

The lxc-cgroup command is used to set or get a control group subsystem value associated with a container. The subsystem name is handled by the user; the command does not do any syntax checking on the name, and if the name does not exist, the command will fail.

   lxc-cgroup -n foo cpuset.cpus

will display the content of this subsystem, and:

   lxc-cgroup -n foo cpu.shares 512

will set the subsystem to the specified value.
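The same mechanism works for any control group subsystem value. For instance, a sketch restricting the container 'foo' to the first two cpus, assuming the cpuset subsystem is enabled in your kernel:

   # allow the container's tasks to run only on cpus 0 and 1
   lxc-cgroup -n foo cpuset.cpus 0,1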
lxc is still in development, so the command syntax and the API can change. Version 1.0.0 will be the frozen version.
SEE ALSO

lxc-create(1), lxc-destroy(1), lxc-start(1), lxc-execute(1), lxc-stop(1), lxc-monitor(1), lxc-wait(1), lxc-cgroup(1), lxc-ls(1), lxc-ps(1), lxc-info(1), lxc-freeze(1), lxc-unfreeze(1), lxc.conf(5)
Please see the COPYING file for details on copying and usage.
Please refer to the INSTALL file for instructions on how to build.
Refer to the lxc* man pages (generated from the doc/* files) for documentation.
Downloading the current source code:

Source for the latest released version can always be downloaded from:

   http://lxc.sourceforge.net/download/lxc

You can browse the up-to-the-minute source code and change history online:

   http://lxc.git.sourceforge.net
For detailed build instructions, refer to the INSTALL file and the lxc man page, but a short command line should work:

   ./configure && make && sudo make install && sudo lxc-setcap

preceded by ./autogen.sh if configure does not exist yet.
When you find you need help, you can check out one of the two lxc mailing list archives and register if interested:

   https://lists.sourceforge.net/lists/listinfo/lxc-devel
   https://lists.sourceforge.net/lists/listinfo/lxc-users
lxc is developed and tested on Linux, since mainline kernel version 2.6.27 (without network isolation) and 2.6.29 with network isolation. It is compiled with gcc, and supports the i686, x86_64, ppc, ppc64 and S390 architectures.

AUTHOR
Daniel Lezcano <daniel.lezcano@free.fr>