This file describes changes in recent versions of SLURM. It primarily
documents those changes that are of interest to users and admins.

* Changes in SLURM 2.2.4
========================
 -- For batch jobs for which the Prolog fails, substitute the job ID for any
    "%j" in the job's output or error file specification.
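The "%j" substitution above is a simple filename pattern expansion. A minimal
sketch in Python (the helper name is illustrative, not SLURM's actual
implementation, and it handles only the %j and %% specifiers):

```python
def expand_filename_pattern(pattern: str, job_id: int) -> str:
    """Replace "%j" with the job ID, as in SLURM -o/-e file specs.

    "%%" escapes a literal percent sign. This mirrors the documented
    behavior only for %j; real SLURM supports more specifiers.
    """
    out = []
    i = 0
    while i < len(pattern):
        if pattern[i] == "%" and i + 1 < len(pattern):
            spec = pattern[i + 1]
            if spec == "j":
                out.append(str(job_id))
                i += 2
                continue
            if spec == "%":
                out.append("%")
                i += 2
                continue
        out.append(pattern[i])
        i += 1
    return "".join(out)

print(expand_filename_pattern("slurm-%j.out", 12345))  # slurm-12345.out
```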
 -- Add licenses field to the sview reservation information.
 -- BLUEGENE - Fix for handling extremely overloaded system on Dynamic system
    dealing with starting jobs on overlapping blocks. Previous fallout
    was that the job would be requeued. (happens very rarely)
 -- In accounting_storage/filetxt plugin, substitute spaces within job names,
    step names, and account names with an underscore to ensure proper parsing.
 -- When building contribs/perlapi ignore both INSTALL_BASE and PERL_MM_OPT.
    Use PREFIX instead to avoid build errors from multiple installation
 -- Add job_submit/cnode plugin to support resource reservations of less than
    a full midplane on BlueGene computers. Treat cnodes as licenses which can
    be reserved and are consumed by jobs. This reservation mechanism for less
    than an entire midplane is still under development.
 -- Clear a job's "reason" field when a held job is released.
 -- When releasing a held job, calculate a new priority for it rather than
    just setting the priority to 1.
 -- Fix for sview started on a non-bluegene system to pick colors correctly
    when talking to a real bluegene system.
 -- Improve sched/backfill's expected start time calculation.
 -- Prevent abort of sacctmgr for dump command with invalid (or no) filename.
 -- Improve handling of job updates when using limits in accounting, and
    updating jobs as a non-admin user.
 -- Fix for "squeue --states=all" option. Bug would show no jobs.
 -- Schedule jobs with reservations before those without reservations.
 -- Fix squeue/scancel to query correctly against accounts of different case.
 -- Abort an srun command when its associated job gets aborted due to a
    dependency that cannot be satisfied.
 -- In jobcomp plugins, report a start time of zero if a pending job is
    cancelled. Previously the expected start time may have been reported.
 -- Fixed sacctmgr man page to state correct variables.
 -- Select nodes based upon their Weight when job allocation requests include
    a constraint field with a count (e.g. "srun --constraint=gpu*2 -N4 a.out").
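The Weight-based selection above can be modeled as: satisfy the feature count
from the lowest-Weight matching nodes, then fill the rest of the allocation
from the cheapest remaining nodes. A simplified sketch of that preference
order (names and data layout are mine, not SLURM's internals):

```python
def select_nodes(nodes, total_needed, feature, feature_count):
    """Pick `total_needed` nodes, preferring low Weight, such that at
    least `feature_count` of them carry `feature`.

    `nodes` is a list of (name, weight, features) tuples. A toy model
    of the scheduler's preference order, not SLURM code.
    """
    by_weight = sorted(nodes, key=lambda n: n[1])
    # First satisfy the feature count from the cheapest matching nodes.
    chosen = [n for n in by_weight if feature in n[2]][:feature_count]
    # Fill the remainder with the cheapest nodes not already chosen.
    for n in by_weight:
        if len(chosen) >= total_needed:
            break
        if n not in chosen:
            chosen.append(n)
    return [n[0] for n in chosen]

nodes = [("n1", 10, {"gpu"}), ("n2", 1, set()), ("n3", 2, {"gpu"}),
         ("n4", 1, set()), ("n5", 5, {"gpu"})]
# Need 4 nodes, 2 with "gpu": cheapest gpu nodes n3, n5 come first.
print(select_nodes(nodes, 4, "gpu", 2))
```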
 -- Add support for user names that are entirely numeric and do not treat them
    as UID values. Patch from Dennis Leepow.
 -- Patch to un/pack double values properly if negative value. Patch from
 -- Do not reset a job's priority when requeued or suspended.
 -- Fix problem that could let new jobs start on a node in DRAINED state.
 -- Fix cosmetic sacctmgr issue where if the user you are trying to add
    doesn't exist in the /etc/passwd file and the account you are trying
    to add them to doesn't exist it would print (null) instead of the bad
 -- Fix associations/qos so that when adding back a previously deleted object
    the object will be cleared of all old limits.
 -- BLUEGENE - Added back a lock when creating dynamic blocks to be more thread
    safe on larger systems with heavy load.

* Changes in SLURM 2.2.3
========================
 -- Update srun, salloc, and sbatch man page description of --distribution
    option. Patches from Rod Schulz, Bull.
 -- Applied patch from Martin Perry to fix "Incorrect results for task/affinity
    block second distribution and cpus-per-task > 1" bug.
 -- Avoid setting a job's eligible time while held (priority == 0).
 -- Substantial performance improvement to backfill scheduling. Patch from
    Bjorn-Helge Mevik, University of Oslo.
 -- Make timeout for communications to the slurmctld be based upon the
    MessageTimeout configuration parameter rather than always 3 seconds.
    Patch from Matthieu Hautreux, CEA.
 -- Add new scontrol option of "show aliases" to report every NodeName that is
    associated with a given NodeHostName when running multiple slurmd daemons
    per compute node (typically used for testing purposes). Patch from
    Matthieu Hautreux, CEA.
 -- Fix for handling job names with a "'" in the name within MySQL accounting.
    Patch from Gerrit Renker, CSCS.
 -- Modify conditions under which salloc execution is delayed until moved to
    the foreground. Patch from Gerrit Renker, CSCS.
    Job control for interactive salloc sessions works only if:
    a) input is from a terminal (stdin has valid termios attributes),
    b) a controlling terminal exists (non-negative tpgid),
    c) salloc is not run in allocation-only (--no-shell) mode,
    d) salloc runs in its own process group (true in interactive
       shells that support job control),
    e) salloc has been configured at compile-time to support background
       execution and is not currently in the background process group.
 -- Abort salloc if there is no controlling terminal and the --no-shell option
    is not used ("setsid salloc ..." is disabled). Patch from Gerrit Renker,
    CSCS.
 -- Fix to gang scheduling logic which could cause jobs to not be suspended
    or resumed when appropriate.
 -- Applied patch from Martin Perry to fix "Slurmd abort when using task
    affinity with plane distribution" bug.
 -- Applied patch from Yiannis Georgiou to fix "Problem with cpu binding to
    sockets option" behaviour. This change causes "--cpu_bind=sockets" to bind
    tasks only to the CPUs on each socket allocated to the job rather than all
 -- Advance daily or weekly reservations immediately after termination to avoid
    having a job start that runs into the reservation when later advanced.
 -- Fix for enabling users to change their own default account, wckey, or QOS.
 -- BLUEGENE - If using OVERLAP mode, fixed issue with multiple overlapping
 -- Fix for sacctmgr to correctly display default accounts.
 -- scancel -s SIGKILL will always send the RPC to the slurmctld rather than
    the slurmd daemon(s). This ensures that tasks in the process of getting
 -- BLUEGENE - If using OVERLAP mode, fixed issue with jobs getting denied
    at submit if the only option for their job was overlapping a block in

* Changes in SLURM 2.2.2
========================
 -- Correct logic to set the correct job hold state (admin or user) when
    setting the job's priority using scontrol's "update jobid=..." rather than
    its "hold" or "holdu" commands.
 -- Modify squeue to report unset --mincores, --minthreads or --extra-node-info
    values as "*" rather than 65534. Patch from Rod Schulz, BULL.
 -- Report the StartTime of a job as "Unknown" rather than the year 2106 if its
    expected start time was too far in the future for the backfill scheduler
 -- Prevent a pending job's reason field from inappropriately being set to
 -- In sched/backfill with jobs having QOS_FLAG_NO_RESERVE set, do not
    consider the job's time limit when attempting to backfill schedule. The job
    will just be preempted as needed at any time.
 -- Eliminated a bug in sbatch when no valid target clusters are specified.
 -- When explicitly sending a signal to a job with the scancel command and that
    job is in a pending state, send the request directly to the slurmctld
    daemon and do not attempt to send the request to slurmd daemons, which are
    not running the job anyway.
 -- In slurmctld, properly set the up_node_bitmap when setting a node's state
    to IDLE (in case the previous node state was DOWN).
 -- Fix smap to process block midplane names correctly when on a bluegene
 -- Fix smap to once again print out the letter 'ID' for each line of a block/
 -- Corrected the NOTES section of the scancel man page.
 -- Fix for accounting_storage/mysql plugin to correctly query cluster based
 -- Fix issue when updating database for clusters that were previously deleted
    before upgrade to the 2.2 database.
 -- BLUEGENE - Handle mesh torus check better in dynamic mode.
 -- BLUEGENE - Fixed race condition when freeing a block; most likely only
    would
 -- Fix for calculating used QOS limits correctly on a slurmctld reconfig.
 -- BLUEGENE - Fix for bad conn-type set when running small blocks in HTC mode.
 -- If salloc's --no-shell option is used, then do not attempt to preserve the
 -- Add new SLURM configure time parameter of --disable-salloc-background. If
    set, then salloc can only execute in the foreground. If started in the
    background, then a message will be printed and the job allocation halted
    until brought into the foreground.
    NOTE: THIS IS A CHANGE IN DEFAULT SALLOC BEHAVIOR FROM V2.2.1, BUT IS
    CONSISTENT WITH V2.1 AND EARLIER.
 -- Added the Multi-Cluster Operation web page.
 -- Removed remnant code for enforcing max sockets/cores/threads in the
    cons_res plugin (see last item in 2.1.0-pre5). This was responsible
    for a bug reported by Rod Schultz.
 -- BLUEGENE - Set correct env vars for HTC mode on a P system to get correct
 -- Correct RunTime reported by "scontrol show job" for pending jobs.

* Changes in SLURM 2.2.1
========================
 -- Fix setting derived exit code correctly for jobs that happen to have the
 -- Better checking for time overflow when rolling up in accounting.
 -- Add scancel --reservation option to cancel all jobs associated with a
    specific reservation.
 -- Treat a reservation with no nodes like one that starts later (let jobs of
    any size get queued and do not block any pending jobs).
 -- Fix bug in gang scheduling logic that would temporarily resume too many
    jobs after a job completed.
 -- Change srun message about job step being deferred due to SlurmctldProlog
    running to be more clear and only print when the --verbose option is used.
 -- Made it so you could remove the hold on jobs with sview by setting the
    priority to infinite.
 -- BLUEGENE - better checking of small blocks in dynamic mode as to whether a
    full midplane job could run or not.
 -- Decrease the maximum sleep time between srun job step creation retry
    attempts from 60 seconds to 29 seconds. This should eliminate a possible
    synchronization problem with gang scheduling that could result in job
    step creation requests only occurring when a job is suspended.
 -- Fix to prevent changing a held job's state from HELD to DEPENDENCY
    until the job is released. Patch from Rod Schultz, Bull.
 -- Fixed sprio -M to reflect PriorityWeight values from the remote cluster.
 -- Fix bug in sview when trying to update an arbitrary field on more than one
    job. Formerly it would display information about one job, but update the
    next
 -- Made it so a QOS with UsageFactor set to 0 makes jobs running
    under that QOS not add time to fairshare or association/qos
 -- Fixed issue where QOS priority wasn't re-normalized until a slurmctld
    restart when a QOS priority was changed.
 -- Fix sprio to use calculated numbers from the slurmctld instead of
    calculating its own numbers.
 -- BLUEGENE - fixed race condition with preemption where if the wind blows the
    right way the slurmctld could lock up when preempting jobs to run others.
 -- BLUEGENE - fixed epilog to wait until MMCS job is totally complete before
 -- BLUEGENE - more robust checking of states when freeing blocks.
 -- Added correct files to the slurm.spec file for correct perl api rpm
 -- Added flag "NoReserve" to a QOS to make it so all jobs are created equal
    within a QOS. So if larger, higher priority jobs are unable to run they
    don't prevent smaller jobs from running, even if running the smaller
    jobs delays the start of the larger, higher priority jobs.
 -- BLUEGENE - Check preemptees one by one to preempt lower priority jobs first
    instead of first fit.
 -- In select/cons_res, correct handling of the option
    SelectTypeParameters=CR_ONE_TASK_PER_CORE.
 -- Fix for checking QOS to override partition limits; previously if not using
    QOS some limits would be overlooked.
 -- Fix bug which would terminate a job step if any of the nodes allocated to
    it were removed from the job's allocation. Now only the tasks on those
    nodes are terminated.
 -- Fixed issue when using an accounting_storage plugin directly without the
    SlurmDBD: updates weren't always sent correctly to the slurmctld. Appears
    to be OS dependent; reported by Fredrik Tegenfeldt.

* Changes in SLURM 2.2.0
========================
 -- Change format of Duration field in "scontrol show reservation" output from
    an integer number of minutes to "[days-]hours:minutes:seconds".
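The new Duration format above can be sketched as a small formatting routine;
this is only an illustration of the display format, not SLURM's formatting
code:

```python
def format_duration(minutes: int) -> str:
    """Render a duration in minutes as "[days-]hours:minutes:seconds",
    the format now used for a reservation's Duration field."""
    seconds = minutes * 60
    days, rem = divmod(seconds, 86400)
    hours, rem = divmod(rem, 3600)
    mins, secs = divmod(rem, 60)
    if days:
        return f"{days}-{hours:02d}:{mins:02d}:{secs:02d}"
    return f"{hours:02d}:{mins:02d}:{secs:02d}"

print(format_duration(90))    # 01:30:00
print(format_duration(2880))  # 2-00:00:00
```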
 -- Add support for changing the reservation of pending or running jobs.
 -- On Cray systems only, salloc sends SIGKILL to spawned process group when
    job allocation is revoked. Patch from Gerrit Renker, CSCS.
 -- Fix for sacctmgr to work correctly when modifying user associations where
    all the associations contain a partition.
 -- Minor mods to salloc signal handling logic: forwards more signals and
    releases allocation on real-time signals. Patch from Gerrit Renker, CSCS.
 -- Add salloc logic to preserve tty attributes after abnormal exit. Patch
    from Mark Grondona, LLNL.
 -- BLUEGENE - Fix for issue in dynamic mode when trying to create a block
    overlapping a block with no job running on it but in configuring state.
 -- BLUEGENE - Speedup by skipping blocks that are deallocating for other jobs
    when starting overlapping jobs in dynamic mode.
 -- Fix for sacct --state to work correctly when not specifying a start time.
 -- Fix upgrade process in accounting from 2.1 for clusters named "cluster".
 -- Export more jobacct_common symbols needed for the slurm api on some
    systems.

* Changes in SLURM 2.2.0.rc4
============================
 -- Correction in logic to spread out over time highly parallel messages to
    minimize lost messages. Affects slurmd epilog complete messages and PMI
    key-pair transmissions. Patch from Gerrit Renker, CSCS.
 -- Fixed issue where a system has unsent messages to the DBD in 2.1 and
    upgrades to 2.2. Messages are now processed correctly.
 -- Fixed issue where the assoc_mgr cache wasn't always loaded correctly if the
    slurmdbd wasn't running when the slurmctld was started.
 -- Make sure on a pthread create in step launch that the error code is looked
    at. Improves fault-tolerance of slurmd.
 -- Fix setting up default acct/wckey when upgrading from 2.1 to 2.2.
 -- Fix issue with associations attached to a specific partition with no other
    association, and requesting a different partition.
 -- Added perlapi for the slurmdb to the slurm.spec.
 -- In sched/backfill, correct handling of the CompleteWait parameter to avoid
    backfill scheduling while a job is completing. Patch from Gerrit Renker,
 -- Send message back to the user when trying to launch a job on a computer
    lacking that user ID. Patch from Hongjia Cao, NUDT.
 -- BLUEGENE - Fix it so 1 midplane clusters will run small block jobs.
 -- Add Command and WorkDir to the output of "scontrol show job" for job
    allocations created using srun (not just sbatch).
 -- Fixed sacctmgr to not add blank defaultqos' when doing a cluster dump.
 -- Correct processing of memory and disk space specifications in the salloc,
    sbatch, and srun commands to work properly with a suffix of "MB", "GB",
    etc. and not only with a single letter (e.g. "M", "G", etc.).
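The suffix handling described above accepts both one- and two-letter forms.
A sketch of that parsing rule (my helper, not SLURM's parser; unsuffixed
values are taken as megabytes here for illustration):

```python
def parse_mem_mb(spec: str) -> int:
    """Parse a memory size like "512", "512M", or "2GB" into megabytes.

    Accepts both single-letter ("M", "G") and two-letter ("MB", "GB")
    suffixes, matching the fixed behavior described above.
    """
    multipliers = {"K": 1 / 1024, "M": 1, "G": 1024, "T": 1024 * 1024}
    s = spec.strip().upper()
    if s.endswith("B"):          # strip trailing "B" of "MB", "GB", ...
        s = s[:-1]
    if s and s[-1] in multipliers:
        return int(int(s[:-1]) * multipliers[s[-1]])
    return int(s)

print(parse_mem_mb("2GB"))   # 2048
print(parse_mem_mb("512M"))  # 512
```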
 -- Prevent nodes with suspended jobs from being powered down by SLURM.
 -- Normalized the way pidfiles are created by the slurm daemons.
 -- Fixed modifying the root association to not read in its last value
    when clearing a limit being set.
 -- Revert some recent signal handling logic from salloc so that SIGHUP sent
    after the job allocation will properly release the allocation and cause
 -- BLUEGENE - Fix for recreating a block in a ready state.
 -- Fix debug flags for incorrect logic when dealing with DEBUG_FLAG_WIKI.
 -- Report a reservation's Nodes as a hostlist expression of all nodes rather
 -- Fix reporting of nodes in a BlueGene reservation (was reporting CPU count
    rather than cnode count in scontrol output for the NodeCnt field).

* Changes in SLURM 2.2.0.rc3
============================
 -- Modify sacctmgr command to accept plural versions of options (e.g. "Users"
    in addition to "User"). Patch from Don Albert, BULL.
 -- BLUEGENE - make it so reset of the boot counter happens only on state
    change and not when a new job comes along.
 -- Modify srun and salloc signal handling so they can be interrupted while
    waiting for an allocation. This was broken in version 2.2.0.rc2.
 -- Fix NULL pointer reference in sview. Patch from Gerrit Renker, CSCS.
 -- Fix file descriptor leak in slurmstepd on spank_task_post_fork() failure.
    Patch from Gerrit Renker, CSCS.
 -- Fix bug in preserving job state information when upgrading from SLURM
    version 2.1. Bug introduced in version 2.2.0-pre10. Patch from Par
 -- Fix bug where, if using the slurmdbd, if a job wasn't able to start right
    away some accounting information may be lost.
 -- BLUEGENE - when a prolog failure happens the offending block is put in
 -- Changed the last column heading of the sshare output from "FS Usage" to
    "FairShare" and added more detail to the sshare man page.
 -- Fix bug in enforcement of reservation by account name. Used wrong index
    into an array. Patch from Gerrit Renker, CSCS.
 -- Modify job_submit/lua plugin to treat any non-zero return code from the
    job_submit and job_modify functions as an error and the user request should
 -- Fix bug which would permit a pending job to be started on a completing node
    when job preemption is configured.

* Changes in SLURM 2.2.0.rc2
============================
 -- Fix memory leak in job step allocation logic. Patch from Hongjia Cao, NUDT.
 -- If a preempted job was submitted with the --no-requeue option then cancel
    rather than requeue it.
 -- Fix for problems when adding a user for the first time to a new cluster
    with a 2.1 sacctmgr without specifying a default account.
 -- Resend TERMINATE_JOB message only to nodes that the job still has not
    terminated on. Patch from Hongjia Cao, NUDT.
 -- Treat a time limit specification of "0:300" as a request for 300 seconds
    (5 minutes) instead of one minute.
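The "0:300" fix above means the seconds field is allowed to overflow into
minutes rather than being truncated. A sketch of that interpretation (my
helper; SLURM's real parser also accepts hours and days forms):

```python
def parse_time_limit_minutes(spec: str) -> int:
    """Parse a "minutes" or "minutes:seconds" time limit into whole
    minutes, rounding any leftover seconds up.

    With this rule "0:300" means 300 seconds, i.e. 5 minutes, instead
    of being truncated to one minute.
    """
    parts = spec.split(":")
    if len(parts) == 1:
        return int(parts[0])
    minutes, seconds = int(parts[0]), int(parts[1])
    total_seconds = minutes * 60 + seconds
    return (total_seconds + 59) // 60  # round up to whole minutes

print(parse_time_limit_minutes("0:300"))  # 5
print(parse_time_limit_minutes("10"))     # 10
```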
 -- Modify sched/backfill plugin logic to continue working its way down the
    queue of jobs rather than restarting at the top if there are no changes in
    job, node, or partition state between runs. Patch from Hongjia Cao, NUDT.
 -- Improve scalability of select/cons_res logic. Patch from Matthieu Hautreux,
 -- Fix for possible deadlock in the slurmstepd when cancelling a job that is
    also writing a large amount of data to stderr.
 -- Fix in select/cons_res to eliminate "mem underflow" error when the
    slurmctld is reconfigured while a job is in completing state.
 -- Send a message to a user's job when its real or virtual memory limit
 -- Apply rlimits right before execing the user's task so as to lower the risk
    of the task exiting because the slurmstepd ran over a limit (log file size,
 -- Add scontrol command of "uhold <job_id>" so that an administrator can hold
    a job and let the job's owner release it. The scontrol command of
    "hold <job_id>" when executed by a SLURM administrator can only be released
    by a SLURM administrator and not the job owner.
 -- Change atoi to slurm_atoul in the mysql plugin, needed for running on
    32-bit systems in some cases.
 -- If a batch job is found to be missing from a node, make its termination
    state be NODE_FAIL rather than CANCELLED.
 -- Fatal error put back if running a bluegene or cray plugin from a controller
 -- Make sure the jobacct_gather plugin is not shutdown before messing with the
 -- Modify signal handling in the srun and salloc commands to avoid deadlock if
    the malloc function is interrupted and called again. The malloc function is
    thread safe, but not reentrant, which is a problem for signal handling if
    the malloc function itself holds a lock. Problem fixed by moving signal
    handling in those commands to a new pthread.
 -- In srun set the job abort flag on completion to handle the case when a user
    cancels a job while the node is not responding but slurmctld has not yet
    set the node down. Patch from Hongjia Cao, NUDT.
 -- Streamline the PMI logic if no duplicate keys are included in the key-pairs
    managed. Substantially improves performance for large numbers of tasks.
    Adds support for the SLURM_PMI_KVS_NO_DUP_KEYS environment variable. Patch
    from Hongjia Cao, NUDT.
 -- Fix issues with sview dealing with older versions of sview and saving
 -- Remove references to --mincores, --minsockets, and --minthreads from the
    salloc, sbatch and srun man pages. These options are defunct. Patch from
 -- Made openssl not be required to build RPMs; it is not required anymore
    since munge is the default crypto plugin.
 -- sacctmgr now has smarts to figure out if a qos is a default qos when
    modifying a user/acct or removing a qos.
 -- For reservations on BlueGene systems, set and report c-node counts rather
    than midplane counts.

* Changes in SLURM 2.2.0.rc1
============================
 -- Add show_flags parameter to the slurm_load_block_info() function.
 -- perlapi has been brought up to speed courtesy of Hongjia Cao. (make sure to
    run 'make clean' if building in a different dir than source)
 -- Fixed regression in pre12 in crypto/munge when running with
    --enable-multiple-slurmd which would cause the slurmd's to core.
 -- Fixed regression where the cpu count wasn't figured out correctly for
    steps.
 -- Fixed issue when using old mysql that can't handle a '.' in the table
 -- Mysql plugin works correctly without the SlurmDBD.
 -- Added ability to query the batch step with sstat. Currently no accounting
    data is stored for the batch step, but the internals are in place if we
    decide to do that in the future.
 -- Fixed some backwards compatibility issues with 2.2 talking to 2.1.
 -- Fixed regression where modifying associations didn't get sent to the
 -- Made sshare sort things the same way sacctmgr list assoc does.
 -- Fixed issue with default accounts being set up correctly.
 -- Changed sorting in the slurmctld so sshare output is similar to that of
 -- Modify reservation logic so that daily and weekly reservations maintain
    the same time when daylight savings time starts or ends in the interim.
 -- Edit to make reservations handle updates to associations.
 -- Added the derived exit code to the slurmctld job record and the derived
    exit code and string to the job record in the SLURM db.
 -- Added slurm-sjobexit RPM for SLURM job exit code management tools.
 -- Added ability to use sstat/sacct against the batch step.
 -- Added OnlyDefaults option to sacctmgr list associations.
 -- Modified the fairshare priority formula to F = 2**(-Ue/S).
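The fairshare formula F = 2**(-Ue/S) above (effective usage Ue over the
normalized share S) can be checked with a small worked example; the sample
numbers are illustrative only:

```python
# Fairshare factor F = 2**(-Ue/S): 1.0 for no usage, 0.5 when effective
# usage equals the share, approaching 0 as usage grows past the share.
def fairshare_factor(effective_usage: float, shares_norm: float) -> float:
    return 2 ** (-effective_usage / shares_norm)

print(fairshare_factor(0.0, 0.25))   # 1.0  (unused share: full factor)
print(fairshare_factor(0.25, 0.25))  # 0.5  (usage equals share)
print(fairshare_factor(0.5, 0.25))   # 0.25 (usage at twice the share)
```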
 -- Modify the PMI functions key-pair exchange function to support a 32-bit
    counter for larger job sizes. Patch from Hongjia Cao, NUDT.
 -- In sched/builtin - Make the estimated job start time logic faster (borrowed
    new logic from sched/backfill and added a pthread) and more accurate.
 -- In select/cons_res fix bug that could result in a job being allocated zero
    CPUs on some nodes. Patch from Hongjia Cao, NUDT.
 -- Fix bug in sched/backfill that could set expected start time of a job too
 -- Added ability to enforce new limits given to associations/qos on
 -- Increase max message size for the slurmdbd from 1000000 to 16*1024*1024.
 -- Increase number of active threads in the slurmdbd from 50 to 100.
 -- Fixed small bug in src/common/slurmdb_defs.c reported by Bjorn-Helge Mevik.
 -- Fixed sacctmgr's ability to query associations against qos again.
 -- Fixed sview show config on non-bluegene systems.
 -- Fixed bug in selecting jobs based on the sacct -N option.
 -- Fix bug that prevented the job Epilog from running more than once on a node
    if a job was requeued and started no job steps.
 -- Fixed issue where the node index wasn't stored correctly when using the
    DBD.
 -- Enable srun's use of the --nodes option with --exclusive (previously the
    --nodes option was ignored).
 -- Added UsageThreshold and Flags to the QOS object.
 -- Patch to improve threadsafeness in the mysql plugins.
 -- Add support for fair-share scheduling to be based upon resource use at
    the level of bank accounts and ignore use of individual users. Patch by
    Par Andersson, National Supercomputer Centre, Sweden.

* Changes in SLURM 2.2.0.pre12
==============================
 -- Log if Prolog or Epilog run for longer than MessageTimeout / 2.
 -- Log the RPC number associated with messages from slurmctld that time out.
 -- Fix bug in select/cons_res logic when job allocation includes --overcommit
    and --ntasks-per-node options and the node has fewer CPUs than the count
    specified by --ntasks-per-node.
 -- Fix bug in gang scheduling and job preemption logic so that preempted jobs
    get resumed properly after a slurmctld hot-start.
 -- Fix bug in select/linear handling of gang scheduled jobs that could result
    in a run_job_cnt underflow error message.
 -- Fix bug in gang scheduling logic to properly support partitions added
    using the scontrol command.
 -- Fix a segmentation fault in sview where the 'excluded_partitions' field
    was set to NULL, caused by the absence of ~/.slurm/sviewrc.
 -- Rewrote some calls to is_user_any_coord() in src/plugins/accounting_storage
    modules to make use of is_user_any_coord()'s return value.
 -- Add configure option of --with-dimensions=#.
 -- Modify srun ping logic so that srun is only considered not responsive
    if three ping messages are not responded to. Patch from Hongjia Cao (NUDT).
 -- Preserve a node's ReasonTime field after the scontrol reconfig command.
    Patch from Hongjia Cao (NUDT).
 -- Added the authority for users with AdminLevels defined in the SLURM db
    (Operators and Admins) and account coordinators to invoke commands that
    affect jobs, reservations, nodes, etc.
 -- Fix for slurmd restart on a completing node with no tasks to get the
    correct state, completing. Patch from Hongjia Cao (NUDT).
 -- Prevent scontrol setting a node's Reason="". Patch from Hongjia Cao (NUDT).
 -- Add new functions hostlist_ranged_string_malloc,
    hostlist_ranged_string_xmalloc, hostlist_deranged_string_malloc, and
    hostlist_deranged_string_xmalloc which will allocate memory as needed.
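The ranged-string functions above render a list of hostnames in SLURM's
compact bracket notation. A rough illustration of that notation in Python
(the helper is mine, not the libslurm API, and handles only a single
lowercase prefix with a numeric suffix and no zero-padding):

```python
import re
from itertools import groupby

def ranged_string(hosts):
    """Collapse names like n1,n2,n3,n7 into "n[1-3,7]".

    Only a toy model of what hostlist_ranged_string() produces.
    """
    pairs = sorted(re.match(r"([a-z]+)(\d+)$", h).groups() for h in hosts)
    prefix = pairs[0][0]
    nums = sorted(int(n) for p, n in pairs if p == prefix)
    ranges = []
    # Group consecutive numbers: value minus index is constant in a run.
    for _, run in groupby(enumerate(nums), lambda t: t[1] - t[0]):
        run = [n for _, n in run]
        ranges.append(str(run[0]) if len(run) == 1
                      else f"{run[0]}-{run[-1]}")
    return f"{prefix}[{','.join(ranges)}]"

print(ranged_string(["n1", "n2", "n3", "n7"]))  # n[1-3,7]
```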
 -- Make the slurm commands support both the --cluster and --clusters options.
    Previously, some commands supported one of those options, but not the
    other.
 -- Fix bug when resizing a job that has steps running on some of those nodes.
    Avoid killing the job step on remaining nodes. Patch from Rod Schultz
    (BULL). Also fix bug related to tracking the CPUs allocated to job steps
    on each node after releasing some nodes from the job's allocation.
 -- Applied patch from Rod Schultz / Matthieu Hautreux to keep the Node-to-Host
    cache from becoming corrupted when a hostname cannot be resolved.
 -- Export more symbols in libslurm for job and node state information
    translation (numbers to strings). Patch from Hongjia Cao, NUDT.
 -- Add logic to retry sending RESPONSE_LAUNCH_TASKS messages from slurmd to
    srun. Patch from Hongjia Cao, NUDT.
 -- Modify bit_unfmt_hexmask() and bit_unfmt_binmask() functions to clear the
    bitmap input before setting the bits indicated in the input string.
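The clearing behavior matters because reusing a bitmap without zeroing it
would leave stale bits set. A minimal sketch of the fixed semantics, with
plain Python integers standing in for SLURM's bitstr_t:

```python
def unfmt_hexmask(bitmap: int, hexmask: str) -> int:
    """Return a bitmap holding exactly the bits named in `hexmask`.

    The unfixed behavior effectively OR'ed new bits into the existing
    bitmap; the fix clears it first, discarding previous contents.
    """
    bitmap = 0                      # the fix: clear before setting
    return bitmap | int(hexmask, 16)

stale = 0b1010                       # leftover bits from a prior use
print(bin(unfmt_hexmask(stale, "0x5")))  # 0b101, stale bits gone
```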
 -- Add SchedulerParameters option of bf_window to control how far into the
    future the backfill scheduler will look when considering jobs to start.
    The default value is one day. See "man slurm.conf" for details.
 -- Fix bug that can result in duplicate job termination records in accounting
    for job termination when slurmctld restarts or reconfigures.
 -- Modify plugin and library logic as needed to support use of the function
    slurm_job_step_stat() from user commands.
 -- Fix race condition in which PrologSlurmctld failure could cause slurmctld
 -- Fix bug preventing users in secondary user groups from being granted access
    to partitions configured with AllowGroups.
 -- Added support for a default account and wckey per cluster within
    accounting.
 -- Modified select/cons_res plugin so that if MaxMemPerCPU is configured and a
    job specifies its memory requirement, then more CPUs than requested will
    automatically be allocated to a job to honor the MaxMemPerCPU parameter.
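Under the MaxMemPerCPU rule above, the CPU count is effectively raised until
the per-CPU memory fits. A sketch of that adjustment (function and parameter
names are illustrative, not the plugin's code):

```python
import math

def cpus_for_job(requested_cpus: int, mem_mb: int,
                 max_mem_per_cpu_mb: int) -> int:
    """Raise the allocated CPU count so that mem_mb / cpus stays within
    MaxMemPerCPU, as select/cons_res now does automatically."""
    needed = math.ceil(mem_mb / max_mem_per_cpu_mb)
    return max(requested_cpus, needed)

# Job asks for 1 CPU and 8000 MB with MaxMemPerCPU=2000: gets 4 CPUs.
print(cpus_for_job(1, 8000, 2000))  # 4
```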
485
-- Added the derived_ec (exit_code) member to job_info_t. exit_code captures
486
the exit code of the job script (or salloc) while derived_ec contains the
487
highest exit code of all the job steps.
488
-- Added SLURM_JOB_EXIT_CODE and SLURM_JOB_DERIVED_EC variables to the
489
EpilogSlurmctld environment
490
-- More work done on the accounting_storage/pgsql plugin, still beta.
491
Patch from Hongjia Cao (NUDT).
492
-- Major updates to sview from Dan Rusak (Bull), including:
493
- Persistent option selections for each tab page
494
- Clean up topology in grids
495
- Leverage AllowGroups and Hidden options
496
- Cascade full-info popups for ease of selection
497
-- Add locks around the MySQL calls for proper operation if the non-thread
498
safe version of the MySQL library is used.
499
-- Remove libslurm.a, libpmi.a and libslurmdb.a from SLURM RPM. These static
500
libraries are not generally usable.
501
-- Fixed bug in sacctmgr when zeroing raw usage reported by Gerrit Renker.
503

* Changes in SLURM 2.2.0.pre11
==============================
 -- Permit a regular user to change the partition of a pending job.
 -- Major re-write of the job_submit/lua plugin to pass pointers to available
    partitions and use lua metatables to reference the job and partition
    fields.
 -- Add support for several new trigger types: SlurmDBD failure/restart,
    Database failure/restart, Slurmctld failure/restart.
 -- Add support for SLURM_CLUSTERS environment variable in the sbatch, sinfo,
 -- Modify the sinfo and squeue commands to report the state of multiple
    clusters if the --clusters option is used.
 -- Added printf __attribute__ qualifiers to info, debug, ... to help prevent
    bad/incorrect parameters being sent to them. Original patch from
    Eygene Ryabinkin (Russian Research Centre).
 -- Fix bug in slurmctld job completion logic when nodes allocated to a
    completing job are re-booted. Patch from Hongjia Cao (NUDT).
 -- In slurmctld's node record data structure, rename "hilbert_integer" to
 -- Add topology/node_rank plugin to sort nodes based upon rank loaded from
    BASIL on Cray computers.
 -- Fix memory leak in the auth/munge and crypto/munge plugins in the case of
* Changes in SLURM 2.2.0.pre10
527
==============================
528
-- Fix issue when EnforcePartLimits=yes in slurm.conf all jobs where no nodecnt
529
was specified the job would be seen to have maxnodes=0 which would not
531
-- Fix issue where if not suspending a job the gang scheduler does the correct
533
-- Fixed some issues when dealing with jobs from a 2.1 system so they live
535
-- In srun, log if --cpu_bind options are specified, but not supported by the
536
current system configuration.
537
-- Various Patchs from Hongjia Cao dealing with bugs found in sacctmgr and
539
-- Fix bug in changing the nodes allocated to a running job and some node
540
names specified are invalid, avoid invalid memory reference.
541
-- Fixed filename substitution of %h and %n based on patch from Ralph Bean
542
-- Added better job sorting logic when preempting jobs with qos.
543
-- Log the IP address and port number for some communication errors.
544
-- Fix bug in select/cons_res when --cpus_per_task option is used, could
545
oversubscribe resources.
546
-- In srun, do not implicitly set the job's maximum node count based upon a
548
-- Avoid running the HealthCheckProgram on non-responding nodes rather than
550
-- Fix bug in handling of poll() functions on OS X (SLURM was ignoring POLLIN
551
if POLLHUP flag was set at the same time).
552
-- Pulled Cray logic out of common/node_select.c into it's own
553
select/cray plugin cons_res is the default. To use linear add 'Linear' to
554
SelectTypeParameters.
555
-- Fixed bug where resizing jobs didn't correctly set used limits correctly.
556
-- Change sched/backfill default time interval to 30 seconds and defer attempt
557
to backfill schedule if slurmctld has more than 5 active RPCs. General
558
improvements in logic scalability.
559
-- Add SchedulerParameters option of default_sched_depth=# to control how
560
many jobs on queue should be tested for attempted scheduling when a job
561
completes or other routine events. Default value is 100 jobs. The full job
562
queue is tested on a less frequent basis. This option can dramatically
563
improve performance on systems with thousands of queued jobs.
564
-- Gres/gpu now sets the CUDA_VISIBLE_DEVICES environment to control which
565
GPU devices should be used for each job or job step and CUDA version 3.1+
566
is used. NOTE: SLURM's generic resource support is still under development.
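As a rough illustration of the entry above, the plugin restricts a CUDA 3.1+ run time to selected devices simply by exporting a comma-separated index list; the sketch below simulates that (the "0,2" value is illustrative, not derived from a real allocation):

```python
import os

# Simulate what the gres/gpu plugin might export for a job granted GPUs 0 and 2
# (illustrative values; the plugin derives them from the job's allocation).
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2"

def visible_gpus():
    """Return the GPU device indices a CUDA 3.1+ runtime would expose."""
    value = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(i) for i in value.split(",") if i != ""]

print(visible_gpus())  # [0, 2]
```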
-- Modify select/cons_res to pack jobs onto allocated nodes differently and
   minimize system fragmentation. For example, on nodes with 8 CPUs each, a
   job needing 10 CPUs will now ideally be allocated 8 CPUs on one node and
   2 CPUs on another node. Previously the job would have ideally been
   allocated 5 CPUs on each node, fragmenting the unused resources more.
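The packing idea above can be sketched as a greedy fill of the freest nodes first, so leftover fragments are concentrated on as few nodes as possible (a sketch of the idea only, not SLURM's actual select/cons_res algorithm):

```python
def pack_job(cpus_needed, free_cpus_per_node):
    """Greedily fill the freest nodes first so unused CPU fragments are
    concentrated on as few nodes as possible (illustrative sketch)."""
    allocation = []
    for free in sorted(free_cpus_per_node, reverse=True):
        if cpus_needed <= 0:
            break
        take = min(free, cpus_needed)
        allocation.append(take)
        cpus_needed -= take
    if cpus_needed > 0:
        raise ValueError("not enough free CPUs")
    return allocation

# The changelog's example: a 10-CPU job on nodes with 8 free CPUs each
# now gets 8 CPUs on one node and 2 on another.
print(pack_job(10, [8, 8]))  # [8, 2]
```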
-- Modified the behavior of update_job() in job_mgr.c to return when the first
   error is encountered instead of continuing with more job updates.
-- Removed all references to the following slurm.conf parameters, all of which
   have been removed or replaced since version 2.0 or earlier: HashBase,
   HeartbeatInterval, JobAcctFrequency, JobAcctLogFile (instead use
   AccountingStorageLoc), JobAcctType, KillTree, MaxMemPerTask, and
   MpichGmDirectSupport.
-- Fix bug in slurmctld restart logic that improperly reported jobs had
   invalid features: "Job 65537 has invalid feature list: fat".
-- BLUEGENE - Removed thread pool for destroying blocks. It turns out the
   memory leak we were concerned about for creating and destroying threads
   in a plugin doesn't exist anymore. This increases throughput dramatically,
   allowing multiple jobs to start at the same time.
-- BLUEGENE - Removed thread pool for starting and stopping jobs, for similar
   reasons as noted above.
-- BLUEGENE - Handle blocks that never deallocate.

* Changes in SLURM 2.2.0.pre9
=============================
-- sbatch can now submit jobs to multiple clusters and run on the earliest
-- Fix bug introduced in pre8 that prevented job dependencies and job
   triggers from working without the --enable-debug configure option.
-- Replaced slurm_addr with slurm_addr_t.
-- Replaced slurm_fd with slurm_fd_t.
-- Skeleton code added for BlueGeneQ.
-- Jobs can now be submitted to multiple partitions (job queues) and use the
   one permitting the earliest start time.
-- Change slurmdb_coord_table back to acct_coord_table to keep consistent
-- Introduced locking system similar to that in the slurmctld for the
-- Added ability to change a user's name in accounting.
-- Restore squeue support for "%G" format (group id) accidentally removed in
-- Added preempt_mode option to QOS.
-- Added a grouping=individual option for sreport size reports.
-- Added remove_qos logic for jobs running under a QOS that was removed.
-- scancel now exits with a 1 if any job is non-existent when canceling.
-- Better handling of select plugins that don't exist on various systems for
   cross-cluster communication. Slurmctld, slurmd, and slurmstepd now only
   load the default select plugin as well.
-- Better error handling when loading plugins.
-- Prevent scontrol from aborting if getlogin() returns NULL.
-- Prevent scontrol segfault when there are hidden nodes.
-- Prevent srun segfault after task launch failure.
-- Added job_submit/lua plugin.
-- Fixed sinfo on a bluegene system to correctly print the output for:
   sinfo -e -o "%9P %6m %.4c %.22F %f"
-- Add scontrol commands "hold" and "release" to simplify setting a job's
   priority to 0 or 1. Also tests that the job is in a pending state.
-- Increase maximum node list size (for incoming RPC) from 1024 bytes to 64k.
-- In the backup slurmctld, purge triggers before recovering trigger state to
   avoid duplicate entries.
-- Fix bug in sacct processing of the --fields= option.
-- Fix bug in checkpoint/blcr for jobs spanning multiple nodes, introduced when
   changing some variable names in version 2.2.0.pre5.
-- Removed the vestigial set_max_cluster_usage() function from the Priority
-- Modify the output of "scontrol show job" for the field ReqS:C:T=. Fields
   not specified by the user will be reported as "*" instead of 65534.
-- Added DefaultQOS option for an association.
-- BLUEGENE - Added -B option to the slurmctld to clear created blocks from
-- BLUEGENE - Added option to scontrol & sview to recreate existing blocks.
-- Fixed flags for returning messages to use the correct munge key when going
-- BLUEGENE - Added option to scontrol & sview to resume blocks in an error
   state instead of just freeing them.
-- sview patched to allow multiple row selection of jobs; patch from Dan Rusak.
-- Lower default slurmctld server thread count from 1024 to 256. Some systems
   process threads on a last-in first-out basis and the high thread count was
   causing unexpectedly high delays for some RPCs.
-- Added to sacctmgr the ability for admins to reset the raw usage of a user
-- Improved the efficiency of a few lines in sacctmgr.

* Changes in SLURM 2.2.0.pre8
=============================
-- Add DebugFlags parameter of "Backfill" for sched/backfill detailed logging.
-- Add DebugFlags parameter of "Gang" for detailed logging of gang scheduling
-- Add DebugFlags parameter of "Priority" for detailed logging of priority
   multifactor activities.
-- Add DebugFlags parameter of "Reservation" for detailed logging of advanced
-- Add run time to the mail message sent upon job termination and queue time
   to the mail message sent upon job begin.
-- Add email notification option for job requeue.
-- Generate a fatal error if the srun --relative option is used when not
   within an existing job allocation.
-- Modify the meaning of InactiveLimit slightly. It will now cancel the job
   allocation created using the salloc or srun command if those commands
   cease responding for the InactiveLimit, regardless of any running job
   steps. This parameter will no longer affect jobs spawned using sbatch.
-- Remove AccountingStoragePass and JobCompPass from the configuration RPC and
   "scontrol show config" command output. The use of SlurmDBD is still strongly
   recommended as SLURM will have limited database functionality or protection
-- Add sbatch option --export and environment variable SBATCH_EXPORT to
   control which environment variables (if any) get propagated to the spawned
   job. This is particularly important for jobs that are submitted on one
   cluster and run on a different cluster.
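The propagation control described above amounts to filtering the submission environment; a minimal sketch, assuming "ALL" keeps everything and a comma-separated list keeps only the named variables (illustrative semantics, not sbatch's exact implementation):

```python
def filter_export(environ, export_spec):
    """Sketch of propagating only selected environment variables to a
    spawned job. 'ALL' keeps everything; otherwise only the named
    variables are forwarded."""
    if export_spec == "ALL":
        return dict(environ)
    wanted = [name.strip() for name in export_spec.split(",") if name.strip()]
    return {name: environ[name] for name in wanted if name in environ}

env = {"PATH": "/usr/bin", "HOME": "/home/alice", "SECRET": "x"}
print(filter_export(env, "PATH,HOME"))
```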
-- Fix bug in select/linear when used with gang scheduling and there are
   preempted jobs at the time slurmctld restarts, which could result in
   oversubscribing resources.
-- Keep track in accounting of the QOS a job is running with.
-- Fix to correctly handle jobs that resize, and to report correct
   stats on a job after it finishes.
-- Modify gang scheduler so that when SelectTypeParameters=CR_CPUS and task
   affinity is enabled, it keeps track of the individual CPUs allocated to
   jobs rather than just the count of CPUs allocated (which could overcommit
   specific CPUs for running jobs).
-- Modify select/linear plugin data structures to eliminate underflow errors
   for the exclusive_cnt and tot_job_cnt variables (previously happened when
   slurmctld reconfigured while a job was in completing state).
-- Change slurmd's working directory (and location of core files) to match
   that of the slurmctld daemon: the same directory used for log files,
   SlurmdLogFile (if specified with an absolute pathname), otherwise the
   directory used to save state, SlurmdSpoolDir.
-- Add sattach support for the --pty option.
-- Modify slurmctld communications logic to accept incoming messages on more
   than one port for improved scalability.
-- Add SchedulerParameters option of "defer" to avoid trying to schedule a
   job at submission time, but to attempt scheduling many jobs at once for
   improved performance under heavy load.
-- Correct logic controlling the slurmctld thread limit, eliminating check of
-- Make slurmctld's trigger logic more robust in the event that job records
   get purged before their trigger can be processed (e.g. MinJobAge=1).
-- Add support for users to hold/release their own jobs (submit the job with
   the srun/sbatch --hold/-H option, or use "scontrol update jobid=#
   priority=0" to hold and "scontrol update jobid=# priority=1" to release).
-- Added ability for sacct to query jobs by QOS and a range of time limits.
-- Added ability for sstat to query the pids of running steps.
-- Support time specification in UTS format with a prefix of "uts" (e.g.
   "sbatch --begin=uts458389988 my.script").
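Building such a UTS begin time is just prefixing a Unix epoch value with "uts"; a small sketch (the helper name is made up for illustration):

```python
def uts_begin(epoch_seconds):
    """Build a --begin value in the UTS form described above,
    e.g. uts458389988 (seconds since the Unix epoch)."""
    return "uts%d" % int(epoch_seconds)

print(uts_begin(458389988))  # uts458389988
# usage: sbatch --begin=uts458389988 my.script
```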

* Changes in SLURM 2.2.0.pre7
=============================
-- Fixed issue with sacctmgr so that querying against a non-existent cluster
   works the same way as in 2.1.
-- Added infrastructure to support allocation of generic node resources (gres).
   -Modified select/linear and select/cons_res plugins to allocate resources
    at the level of a job without oversubscription.
   -Get sched/backfill operating with gres allocations.
   -Get gres configuration changes (reconfiguration) working.
   -Have job steps allocate resources.
   -Modified job step credential to include the job's and step's gres
   -Integrate with HWLOC library to identify GPUs and NICs configured on each
-- SLURM commands (squeue, sinfo, etc...) can now go cross-cluster on like
   Linux systems. Cross-cluster from bluegene to Linux and such should
   work fine, even in sview.
-- Added the ability to configure PreemptMode on a per-partition basis.
-- Change slurmctld's default thread limit count to 1024, but adjust that down
   as needed based upon the process's resource limits.
-- Removed the non-functional "SystemCPU" and "TotalCPU" reporting fields from
   sstat and updated the man page.
-- Correct location of the apbasil command on Cray XT systems.
-- Fixed bug in MinCPU and AveCPU calculations in the sstat command.
-- Send a message to srun when the Prolog takes too long (MessageTimeout) to
-- Change timeout for socket connect() to be half of the configured
   MessageTimeout.
-- Added high-throughput computing web page with configuration guidance.
-- Use more srun sockets to process incoming PMI (MPICH2) connections for
-- Added DebugFlags for the select/bluegene plugin: DEBUG_FLAG_BG_PICK,
   DEBUG_FLAG_BG_WIRES, DEBUG_FLAG_BG_ALGO, and DEBUG_FLAG_BG_ALGO_DEEP.
-- Remove vestigial job record field "kill_on_step_done" (internal to the
   slurmctld daemon only).
-- For MPICH2 jobs: clear PMI state between job steps.

* Changes in SLURM 2.2.0.pre6
=============================
-- sview - added ability to see database configuration.
-- sview - added ability to add/remove visible tabs.
-- sview - change the way grid highlighting takes place on selected objects.
-- Added infrastructure to support allocation of generic node resources.
   -Added node configuration parameter of Gres=.
   -Added ability to view/modify a node's gres using scontrol, sinfo and sview.
   -Added salloc, sbatch and srun --gres option.
   -Added ability to view a job or job step's gres using scontrol, squeue and
   -Added new configuration parameter GresPlugins to define plugins used to
    manage generic resources.
   -Added framework for gres plugins.
   -Added DebugFlags option of "gres" for detailed debugging of gres actions.
-- Slurmd modified to log slow slurmstepd startup and note possible file system
-- sview - There is now a .slurm/sviewrc file created when running sview.
   Defaults are put in there as to how sview looks when first launched.
   You can set these by Ctrl-S or Options->Set Default Settings.
-- Add scontrol "wait_job <job_id>" option to wait for nodes to boot as needed.
   Useful for batch jobs (in Prolog, PrologSlurmctld or the script) if powering
-- Added salloc and sbatch option --wait-all-nodes. If set non-zero, job
   initiation will be delayed until all allocated nodes have booted. Salloc
   will log the delay with the messages "Waiting for nodes to boot" and "Nodes
-- The priority/multifactor plugin now takes into consideration the size of a
   job in CPUs as well as its size in nodes when computing the job size factor.
   Previously only nodes were considered.
-- When using the SlurmDBD, messages waiting to be sent will be combined
   and sent in one message.
-- Remove srun's --core option. Move the logic to an optional SPANK plugin
   (currently in the contribs directory, but planned for distribution through
   http://code.google.com/p/slurm-spank-plugins/).
-- Patch adding CR_CORE_DEFAULT_DIST_BLOCK as a select option to lay out
   jobs using block layout across cores within each node instead of cyclic,
   which was previously the default.
-- Accounting - When removing associations, if jobs are running, those jobs
   must be killed before proceeding. Before, the jobs were killed
   automatically, thus causing user confusion on what is most likely an
-- sview - color column keeps its reference color when highlighting.
-- Configuration parameter MaxJobCount changed from a 16-bit to a 32-bit field.
   The default MaxJobCount was changed from 5,000 to 10,000.
-- SLURM commands (squeue, sinfo, etc...) can now go cross-cluster on like
   Linux systems. Cross-cluster from bluegene to Linux and such does not
   currently work. You can submit jobs with sbatch. Salloc and srun are not
   cross-cluster compatible, and given their nature of talking to actual
   compute nodes they likely never will be.
-- salloc modified to forward SIGTERM to the spawned program.
-- In sched/wiki2 (for Moab support) - add GRES and WCKEY fields to MODIFYJOBS
   and GETJOBS commands. Add GRES field to the GETNODES command.
-- In struct job_descriptor and struct job_info: rename min_sockets to
   sockets_per_node, min_cores to cores_per_socket, and min_threads to
   threads_per_core (the values are not minimums, but represent the target
-- Fixed bug in clearing a partition's DisableRootJobs value reported by
-- Purge (or ignore) terminated jobs in a more timely fashion based upon the
   MinJobAge configuration parameter. Small values for MinJobAge should improve
   responsiveness for high job throughput.

* Changes in SLURM 2.2.0.pre5
=============================
-- Modify commands to accept a time format with a one- or two-digit hour value
   (e.g. 8:00 or 08:00 or 8:00:00 or 08:00:00).
-- Modify time parsing logic to accept "minute", "hour", "day", and "week" in
   addition to the currently accepted "minutes", "hours", etc.
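The singular/plural acceptance above boils down to normalizing the unit word before lookup; a sketch of the idea (the real parser is in C, and this table of unit names is illustrative):

```python
def normalize_unit(word):
    """Accept both singular and plural time-unit words, returning the
    unit's length in seconds (illustrative sketch of the parser change)."""
    units = {"minute": 60, "hour": 3600, "day": 86400, "week": 604800}
    word = word.lower().rstrip("s")  # "minutes" -> "minute"; "minute" unchanged
    if word not in units:
        raise ValueError("unknown time unit: %s" % word)
    return units[word]

print(normalize_unit("hours"), normalize_unit("hour"))  # 3600 3600
```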
-- Add slurmd option of "-C" to print the actual hardware configuration and
   exit.
-- Pass the EnforcePartLimits configuration parameter from slurmctld so user
   commands see the correct value instead of always "NO".
-- Modify partition data structures to replace the default_part,
   disable_root_jobs, hidden and root_only fields with a single field called
   "flags", populated with the flags PART_FLAG_DEFAULT, PART_FLAG_NO_ROOT,
   PART_FLAG_HIDDEN and/or PART_FLAG_ROOT_ONLY. This is a more flexible
   solution and also makes for smaller data structures.
-- Add job state flag of JOB_RESIZING. This will only exist when a job's
   accounting record is being written immediately before or after it changes
   size. This permits job accounting records to be written for a job at each
-- Make calls to the jobcomp and accounting_storage plugins before and after a
   job changes size (with the job state being JOB_RESIZING). All plugins write
   a record for the job at each size, with intermediate job states being
-- When changing a job's size using scontrol, generate a script that can be
   executed by the user to reset SLURM environment variables.
-- Modify select/linear and select/cons_res to use resources released by job
-- Added to contribs the foundation for a Perl extension for the slurmdb
   library.
-- Add new configuration parameter JobSubmitPlugins which provides a mechanism
   to set default job parameters or perform other site-configurable actions at
-- Better postgres support for accounting; still beta.
-- Speed up job start when using the slurmdbd.
-- Forward the step failure reason back to slurmd; previously in some cases
   only SLURM_FAILURE would be returned.
-- Changed squeue to fail when passed invalid -o <output_format> or
   -S <sort_list> specifications.

* Changes in SLURM 2.2.0.pre4
=============================
-- Add support for a PropagatePrioProcess configuration parameter value of 2
   to restrict spawned task nice values to that of the slurmd daemon plus 1.
   This ensures that the slurmd daemon always has a higher scheduling
   priority than its spawned tasks.
-- Add support in slurmctld, slurmd and slurmdbd for an option of "-n <value>"
   to reset the daemon's nice value.
-- Fixed slurm_load_slurmd_status and slurm_pid2jobid to work correctly when
   multiple slurmds are in use.
-- Altered srun to set max_nodes to min_nodes if not set when doing an
   allocation, mimicking the behavior of salloc and sbatch. When running a
   step, if the max isn't set it remains unset.
-- Applied patch from David Egolf (David.Egolf@Bull.com). Added the ability
   to purge/archive accounting data on a day or hour basis; previously
   it was only available on a monthly basis.
-- Add support for a maximum node count in job step requests.
-- Fix bug in CPU count logic for job step allocation (used the count of CPUs
   per node rather than the CPUs allocated to the job).
-- Add new configuration parameters GroupUpdateForce and GroupUpdateTime.
   See "man slurm.conf" for details about how these control when slurmctld
   updates its information of which users are in the groups allowed to use
-- Added "sacctmgr list events", which will list events that have happened on
   clusters in accounting.
-- Permit a running job to shrink in size using a command of
   "scontrol update JobId=# NumNodes=#" or
   "scontrol update JobId=# NodeList=<names>". Subsequent job steps must
   explicitly specify an appropriate node count to work properly.
-- Added resize_time field to the job record, noting the time of the latest
   job size change (to be used for accounting purposes).
-- sview/smap now hide hidden partitions and their jobs by default, with an
   option to display them.

* Changes in SLURM 2.2.0.pre3
=============================
-- Refine support for TotalView partial attach. Add configure program
   parameter "--enable-partial-attach".
-- In select/cons_res, the count of CPUs on required nodes was formerly
   ignored when enforcing the maximum CPU limit. Also enforce the maximum CPU
   limit when the topology/tree plugin is configured (previously ignored).
-- In select/cons_res, allocate cores for a job using a best-fit approach.
-- In select/cons_res, for jobs that can run on a single node, use a best-fit
-- Add support for new partition states of DRAIN and INACTIVE and a new
   partition option of "Alternate" (alternate partition to use for jobs
   submitted to partitions that are currently in a state of DRAIN or
   INACTIVE).
-- Add group membership cache. This can substantially speed up slurmctld
   startup or reconfiguration if many partitions have AllowGroups configured.
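The cache above avoids repeated group lookups by remembering results for a while; a minimal sketch of that pattern, assuming a TTL-based invalidation (the class and names are illustrative, not slurmctld's actual code):

```python
import time

class GroupCache:
    """Sketch of a group-membership cache: look a group up once, then
    reuse the result until a TTL expires (illustrative only)."""
    def __init__(self, lookup, ttl_seconds=600):
        self.lookup = lookup          # e.g. a wrapper around grp.getgrnam
        self.ttl = ttl_seconds
        self.cache = {}               # group -> (expires_at, members)

    def members(self, group):
        now = time.time()
        hit = self.cache.get(group)
        if hit and hit[0] > now:
            return hit[1]             # served from cache
        members = self.lookup(group)
        self.cache[group] = (now + self.ttl, members)
        return members

calls = []
def fake_lookup(group):
    calls.append(group)
    return ["alice", "bob"]

cache = GroupCache(fake_lookup)
cache.members("physics")
cache.members("physics")   # second call is served from the cache
print(len(calls))          # 1
```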
-- Added slurmdb API for accessing SLURM DB information.
-- In select/linear: modify data structures for better performance and to
   avoid underflow error messages when slurmctld restarts while jobs are
-- Added a hash of slurm.conf so that when nodes check in to the controller it
   can verify the slurm.conf is the same as the one it is running. If not, an
   error message is displayed. To silence this message add NO_CONF_HASH
   to DebugFlags in your slurm.conf.
-- Added error code ESLURM_CIRCULAR_DEPENDENCY and prevent circular job
   dependencies (e.g. job 12 dependent upon job 11 AND job 11 is dependent
-- Add BootTime and SlurmdStartTime to available node information.
-- Fixed moab_2_slurmdb to work correctly under the new database schema.
-- Slurmd will drain a compute node when the SlurmdSpoolDir is full.

* Changes in SLURM 2.2.0.pre2
=============================
-- Add support for spank_get_item() to get S_STEP_ALLOC_CORES and
   S_STEP_ALLOC_MEM. Support will remain for S_JOB_ALLOC_CORES and
-- Kill individual job steps that exceed their memory limit rather than
   killing the entire job if one step exceeds its memory limit.
-- Added configuration parameter VSizeFactor to enforce virtual memory limits
   for jobs and job steps as a percentage of their real memory allocation.
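The VSizeFactor arithmetic is simple percentage scaling of the real memory allocation; a sketch (the function name and MB units are assumptions for illustration):

```python
def vsize_limit_mb(real_mem_mb, vsize_factor_percent):
    """Virtual memory limit as a percentage of the job's real memory
    allocation, per the VSizeFactor parameter described above.
    E.g. a factor of 150 allows 50% over the real memory limit."""
    return real_mem_mb * vsize_factor_percent // 100

print(vsize_limit_mb(2048, 150))  # 3072
```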
-- Add scontrol ability to update a job step's time limits.
-- Add scontrol ability to update a job's NumCPUs count.
-- Add --time-min option to salloc, sbatch and srun. The scontrol command
   has been modified to display and modify the new field, and the
   sched/backfill plugin has been changed to alter the time limits of jobs
   with the --time-min option if doing so permits earlier job initiation.
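The --time-min idea above can be sketched as follows: if a backfill window shorter than the requested limit (but at least time_min) is available now, the job's limit may be lowered into that window so it starts earlier (illustrative only, not sched/backfill's actual code):

```python
def backfill_time_limit(time_min, time_limit, window):
    """Return the adjusted time limit for a job with --time-min when a
    backfill window of the given length is available, or None if the
    window is smaller than the job's minimum acceptable time."""
    if window < time_min:
        return None               # cannot start in this window
    return min(time_limit, window)

print(backfill_time_limit(30, 120, 60))  # 60
print(backfill_time_limit(30, 120, 20))  # None
```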
-- Add support for the TotalView symbol MPIR_partial_attach_ok, with srun
   support to release processes which TotalView does not attach to.
-- Add new option for SelectTypeParameters of CR_ONE_TASK_PER_CORE. This
   option will allocate one task per core by default. Without this option,
   by default one task will be allocated per thread on nodes with more than
   one ThreadsPerCore configured.
-- Avoid accounting separately for a current pid that corresponds to a Light
   Weight Process (POSIX thread) appearing in the /proc directory. Only
   account for the original process (pid==tgid) to avoid accounting for
   memory use more
-- Add proctrack/cgroup plugin which uses Linux control groups (aka cgroups)
   to track processes on Linux systems having this feature enabled (kernel
-- Add logging of license transactions including job_id.
-- Add configuration parameters SlurmSchedLogFile and SlurmSchedLogLevel to
   support writing scheduling events to a separate log file.
-- Added contribs/web_apps/chart_stats.cgi, a web app that invokes sreport to
   retrieve from the accounting storage db a user's request for job usage or
   machine utilization statistics and charts the results to a browser.
-- Massive change to the schema in the accounting_storage/mysql plugin. When
   starting the slurmdbd the process of conversion may take a few minutes.
   You might also see some errors such as 'error: mysql_query failed: 1206
   The total number of locks exceeds the lock table size'. If you get this,
   do not worry; it is because your setting of innodb_buffer_pool_size in
   your my.cnf file is not set or is set too low. A decent value there should
   be 64M or higher, depending on the system you are running on. See
   RELEASE_NOTES for more information. Setting this and then
   restarting the mysqld and slurmdbd will put things right. After this
   change we have noticed a 50-75% increase in performance with sreport and
-- Fix for MaxCPUs to honor partitions of 1 node that have more than the
-- Add support for "scontrol notify <message>" to work for batch jobs.

* Changes in SLURM 2.2.0.pre1
=============================
-- Added RunTime field to the "scontrol show job" report.
-- Added SLURM_VERSION_NUMBER and removed SLURM_API_VERSION from
-- Added support to handle communication with SLURM 2.1 clusters. Jobs
   should not be lost in the future when upgrading to higher versions of
-- Added withdeleted options for listing clusters, users, and accounts.
-- Remove PLPA task affinity functions due to that package being deprecated.
-- Preserve current partition state information and node Feature and Weight
   information rather than use the contents of the slurm.conf file after
   slurmctld restart with the -R option or SIGHUP. Replace the information
   with the contents of slurm.conf after slurmctld restart without -R or
   "scontrol reconfigure". See the RELEASE_NOTES file for more details.
-- Modify SLURM's PMI library (for MPICH2) to properly execute an executable
   program stand-alone (single MPI task launched without srun).
-- Made GrpCPUs and MaxCPUs limits work for select/cons_res.
-- Moved all SQL-dependent plugins into a separate rpm, slurm-sql. This
   should be needed only where a connection to a database is needed (i.e.
   where the slurmdbd is running).
-- Add command line option "no_sys_info" to the PAM module to suppress system
   logging of "access granted for user ..."; access denied and other errors
   will still be logged.
-- sinfo -R now has the user and timestamp in separate fields from the reason.
-- Much functionality has been added to accounting_storage/pgsql. The plugin
   is still in a very beta state. It is still highly advised to use the
   mysql plugin, but if you feel like living on the edge or just really
   like postgres over mysql for some reason, here you go. (Work done
   primarily by Hongjia Cao, NUDT.)

* Changes in SLURM 2.1.17
=========================
-- Correct format of --begin reported in salloc, sbatch and srun --help
-- Correct logic for regular users to increase the nice value of their own
   jobs.

* Changes in SLURM 2.1.16
=========================
-- Fixed minor warnings from gcc-4.5.