<h1>Quick Start Administrator Guide</h1>
Please see the <a href="quickstart.html">Quick Start User Guide</a> for a
general overview.
<h2>Super Quick Start</h2>
<li>Make sure that you have synchronized clocks plus consistent users and groups
(UIDs and GIDs) across the cluster.</li>
<li>Install <a href="http://home.gna.org/munge/">MUNGE</a> for
authentication. Make sure that all nodes in your cluster have the
same <i>munge.key</i>. Make sure the MUNGE daemon, <i>munged</i>,
is started before you start the SLURM daemons.</li>
<li>bunzip2 the distributed tar-ball and untar the files:<br>
<i>tar --bzip -x -f slurm*tar.bz2</i></li>
<li><i>cd</i> to the directory containing the SLURM source and type
<i>./configure</i> with appropriate options, typically <i>--prefix=</i>
and <i>--sysconfdir=</i></li>
<li>Type <i>make</i> to compile SLURM.</li>
<li>Type <i>make install</i> to install the programs, documentation, libraries,
header files, etc.</li>
<li>Build a configuration file using your favorite web browser and
<i>doc/html/configurator.html</i>.<br>
NOTE: The <i>SlurmUser</i> must exist prior to
starting SLURM daemons.</li>
<li>Install the configuration file in <i>&lt;sysconfdir&gt;/slurm.conf</i>.<br>
NOTE: You will need to install this configuration file on all nodes of the cluster.</li>
<li>Start the <i>slurmctld</i> and <i>slurmd</i> daemons.</li>
<p>NOTE: Items 3 through 6 can be replaced with</p>
<li><i>rpmbuild -ta slurm*.tar.bz2</i></li>
<li><i>rpm --install &lt;the rpm files&gt;</i></li>
<h2>Building and Installing SLURM</h2>
<p>Instructions to build and install SLURM manually are shown below.
See the README and INSTALL files in the source distribution for more details.</p>
<li>bunzip2 the distributed tar-ball and untar the files:<br>
<i>tar --bzip -x -f slurm*tar.bz2</i></li>
<li><i>cd</i> to the directory containing the SLURM source and type
<i>./configure</i> with appropriate options (see below).</li>
<li>Type <i>make</i> to compile SLURM.</li>
<li>Type <i>make install</i> to install the programs, documentation, libraries,
header files, etc.</li>
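The <i>tar</i> invocation above can be tried end-to-end without the real distribution; a minimal sketch using a throw-away archive (file and directory names invented for illustration):

```shell
# Create a tiny bzip2-compressed tar-ball, then unpack it with the
# same flags the guide uses: --bzip (bzip2 filter), -x (extract), -f (file).
mkdir -p slurm-demo && echo "hello" > slurm-demo/README
tar --bzip -c -f slurm-demo.tar.bz2 slurm-demo
rm -r slurm-demo
tar --bzip -x -f slurm-demo.tar.bz2
cat slurm-demo/README    # prints "hello"
```

After unpacking the real tar-ball, the <i>configure</i>, <i>make</i>, and <i>make install</i> steps proceed inside the extracted directory.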
<p>A full list of <i>configure</i> options will be returned by the command
<i>configure --help</i>. The most commonly used arguments to the
<i>configure</i> command include:</p>
<p style="margin-left:.2in"><span class="commandline">--enable-debug</span><br>
Enable additional debugging logic within SLURM.</p>
<p style="margin-left:.2in"><span class="commandline">--prefix=<i>PREFIX</i></span><br>
Install architecture-independent files in <i>PREFIX</i>; default value is /usr/local.</p>
<p>If required libraries or header files are in non-standard locations,
set CFLAGS and LDFLAGS environment variables accordingly.
Optional SLURM plugins will be built automatically when the
<span class="commandline">configure</span> script detects that the
required dependencies are present. Build dependencies for various plugins
and commands are denoted below.</p>
<li> <b>MUNGE</b> The auth/munge plugin will be built if the MUNGE authentication
library is installed. MUNGE is used as the default
authentication mechanism.</li>
<li> <b>Authd</b> The auth/authd plugin will be built and installed if
the libauth library and its dependency libe are installed.</li>
<li> <b>Federation</b> The switch/federation plugin will be built and installed
if the IBM Federation switch library is installed.</li>
<li> <b>QsNet</b> support in the form of the switch/elan plugin requires
that the qsnetlibs package (from Quadrics) be installed along
with its development counterpart (i.e. the qsnetheaders
package).</li>
files must be readable; the log file directory and state save directory
must be writable).</p>
<p>The <b>slurmd</b> daemon executes on every compute node. It resembles a
remote shell daemon to export control to SLURM. Because slurmd initiates and
manages user jobs, it must execute as the user root.</p>

<p>If you want to archive job accounting records to a database, the
<b>slurmdbd</b> (SLURM DataBase Daemon) should be used. We recommend that
you defer adding accounting support until after basic SLURM functionality is
established on your system. The <a href="accounting.html">Accounting</a> web
page contains more information.</p>

<p><b>slurmctld</b> and/or <b>slurmd</b> should be initiated at node startup
time per the SLURM configuration.
A file <b>etc/init.d/slurm</b> is provided for this purpose.
This script accepts commands <b>start</b>, <b>startclean</b> (ignores
all saved state), <b>restart</b>, and <b>stop</b>.</p>
<h2>Infrastructure</h2>
<h3>User and Group Identification</h3>
<p>There must be a uniform user and group name space (including
UIDs and GIDs) across the cluster.
It is not necessary to permit user logins to the control hosts
(<b>ControlMachine</b> or <b>BackupController</b>), but the
users and groups must be configured on those hosts.</p>
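A quick way to check that an account resolves to the same IDs everywhere is to print its numeric UID and GID on each node and compare the results; a sketch (run the same command on every host):

```shell
# Print "user uid gid" for one account; getent also covers accounts
# supplied by NIS/LDAP rather than only local /etc/passwd.
getent passwd root | awk -F: '{print $1, $3, $4}'    # prints "root 0 0"
```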
configuration file. Currently available authentication types include
<a href="http://www.theether.org/authd/">authd</a>,
<a href="http://home.gna.org/munge/">munge</a>, and none.
The default authentication infrastructure is "munge", but this does
require the installation of the MUNGE package.
An authentication type of "none" requires no infrastructure, but permits
any user to execute any job as another user with limited programming effort.
This may be fine for testing purposes, but certainly not for production use.
<b>Configure some AuthType value other than "none" if you want any security.</b>
We recommend the use of MUNGE unless you are experienced with authd.
If using MUNGE, all nodes in the cluster must be configured with the
same <i>munge.key</i> file.
The MUNGE daemon, <i>munged</i>, must also be started before SLURM daemons.</p>
<p>While SLURM itself does not rely upon synchronized clocks on all nodes
of a cluster for proper operation, its underlying authentication mechanism
does have this requirement.</p>
<h3>MPI support</h3>
<p>SLURM supports many different MPI implementations.
For more information, see <a href="quickstart.html#mpi">MPI</a>.</p>
<h3>Scheduler support</h3>
<p>SLURM can be configured with rather simple or quite sophisticated
scheduling algorithms depending upon your needs and willingness to
manage the configuration (much of which requires a database).
The first configuration parameter of interest is <b>PriorityType</b>
with two options available: <i>basic</i> (first-in-first-out) and
<i>multifactor</i>.
The <i>multifactor</i> plugin will assign a priority to jobs based upon
a multitude of configuration parameters (age, size, fair-share allocation,
etc.) and its details are beyond the scope of this document.
See the <a href="priority_multifactor.html">Multifactor Job Priority Plugin</a>
document for details.</p>
<p>The <b>SchedType</b> configuration parameter controls how queued
jobs are scheduled and several options are available.</p>
<ul>
<li><i>builtin</i> will initiate jobs strictly in their priority order,
typically first-in-first-out</li>
<li><i>backfill</i> will initiate a lower-priority job if doing so does
not delay the expected initiation time of higher priority jobs; essentially
using smaller jobs to fill holes in the resource allocation plan. Effective
backfill scheduling does require users to specify job time limits.</li>
<li><i>gang</i> time-slices jobs in the same partition/queue and can be
used to preempt jobs from lower-priority queues in order to execute
jobs in higher priority queues.</li>
<li><i>wiki</i> is an interface for use with
<a href="http://www.clusterresources.com/pages/products/maui-cluster-scheduler.php">
The Maui Scheduler</a></li>
<li><i>wiki2</i> is an interface for use with the
<a href="http://www.clusterresources.com/pages/products/moab-cluster-suite.php">
Moab Cluster Suite</a></li>
</ul>
<p>For more information about scheduling options see
<a href="gang_scheduling.html">Gang Scheduling</a>,
<a href="preempt.html">Preemption</a>,
<a href="reservations.html">Resource Reservation Guide</a>,
<a href="resource_limits.html">Resource Limits</a> and
<a href="cons_res_share.html">Sharing Consumable Resources</a>.</p>
<h3>Resource selection</h3>
<p>The resource selection mechanism used by SLURM is controlled by the
<b>SelectType</b> configuration parameter.
If you want to execute multiple jobs per node, but apportion the processors,
memory and other resources, the <i>cons_res</i> (consumable resources)
plugin is recommended.</p>
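These choices end up as plain keyword=value lines in <i>slurm.conf</i>. A hypothetical fragment for illustration only (host, node, and partition names are invented; generate a real file with <i>doc/html/configurator.html</i> and check values against the <i>slurm.conf</i> man page for your version):

```
# Hypothetical slurm.conf excerpt
ControlMachine=head1
AuthType=auth/munge
SelectType=select/cons_res
NodeName=node[01-04] Procs=8
PartitionName=debug Nodes=node[01-04] Default=YES
```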
<h2>Security</h2>
<p>Besides authentication of SLURM communications based upon the value
of the <b>AuthType</b>, digital signatures are used in job step
credential messages.
This signature is used by <i>slurmctld</i> to construct a job step
credential, which is sent to <i>srun</i> and then forwarded to
<i>slurmd</i> to initiate job steps.
This design offers improved performance by removing much of the
job step initiation overhead from the <i>slurmctld</i> daemon.
The digital signature mechanism is specified by the <b>CryptoType</b>
configuration parameter and the default mechanism is MUNGE.</p>
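In <i>slurm.conf</i> this is a single line; the plugin value names below follow the usual type/name convention and should be verified against your version's <i>slurm.conf</i> man page:

```
CryptoType=crypto/munge    # default; use crypto/openssl for OpenSSL signatures
```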
<p>If using <a href="http://www.openssl.org/">OpenSSL</a> digital signatures,
unique job credential keys must be created for your site using the program
<a href="http://www.openssl.org/">openssl</a>.
<b>You must use openssl and not ssh-genkey to construct these keys.</b>
An example of how to do this is shown below. Specify file names that
<i>openssl rsa -in &lt;sysconfdir&gt;/slurm.key -pubout -out &lt;sysconfdir&gt;/slurm.cert</i>
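The key-generation commands combine into a short session. A sketch using local file names in place of the <i>&lt;sysconfdir&gt;</i> paths (run once, on the node where <i>slurmctld</i> executes):

```shell
# Generate the 1024-bit RSA private key used to sign job step credentials,
# then derive the public certificate that is distributed to all nodes.
openssl genrsa -out slurm.key 1024
openssl rsa -in slurm.key -pubout -out slurm.cert
chmod 600 slurm.key    # the private key must be readable only by SlurmUser
```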
<p>If using MUNGE digital signatures, no SLURM keys are required.
This will be addressed in the installation and configuration of MUNGE.</p>
<h3>Authentication</h3>
<p>Authentication of communications (identifying who generated a particular
message) between SLURM components can use a different security mechanism
that is configurable.
You must specify one "auth" plugin for this purpose using the
<b>AuthType</b> configuration parameter.
Currently, only three authentication plugins are supported:
<b>auth/none</b>, <b>auth/authd</b>, and <b>auth/munge</b>.
The auth/none plugin is built by default, but either
Brent Chun's <a href="http://www.theether.org/authd/">authd</a>,
or LLNL's <a href="http://home.gna.org/munge/">MUNGE</a>
should be installed in order to get properly authenticated communications.
Unless you are experienced with authd, we recommend the use of MUNGE.
The configure script in the top-level directory of this distribution will
determine which authentication plugins may be built.
The configuration file specifies which of the available plugins will be utilized. </p>