"Previous Section"_Section_packages.html - "LAMMPS WWW Site"_lws -
"LAMMPS Documentation"_ld - "LAMMPS Commands"_lc :c

:link(lws,http://lammps.sandia.gov)
:link(ld,Manual.html)
:link(lc,Section_commands.html#comm)

:line

"Return to Section accelerate overview"_Section_accelerate.html

5.3.5 USER-OMP package :h4

The USER-OMP package was developed by Axel Kohlmeyer at Temple
University. It provides multi-threaded versions of most pair styles,
nearly all bonded styles (bond, angle, dihedral, improper), several
Kspace styles, and a few fix styles. The package currently uses the
OpenMP interface for multi-threading.

Here is a quick overview of how to use the USER-OMP package:

use the -fopenmp flag for compiling and linking in your Makefile.machine
include the USER-OMP package and build LAMMPS
use the mpirun command to set the number of MPI tasks/node
specify how many threads per MPI task to use
use USER-OMP styles in your input script :ul

The latter two steps can be done using the "-pk omp" and "-sf omp"
"command-line switches"_Section_start.html#start_7 respectively. Or
the effect of the "-pk" or "-sf" switches can be duplicated by adding
the "package omp"_package.html or "suffix omp"_suffix.html commands
respectively to your input script.

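For example, assuming 4 threads per MPI task, these input script
lines have the same effect as the "-pk omp 4 -sf omp" command-line
switches (the package command must appear near the top of the script,
before the simulation box is defined):

package omp 4
suffix omp :pre
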
[Required hardware/software:]

Your compiler must support the OpenMP interface. You should have one
or more multi-core CPUs so that multiple threads can be launched by an
MPI task running on a CPU.

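One quick way to verify OpenMP support, assuming the GNU compiler, is
to check whether the _OPENMP preprocessor macro gets defined:

g++ -fopenmp -dM -E - < /dev/null | grep -i openmp :pre
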
[Building LAMMPS with the USER-OMP package:]

To do this in one line, use the src/Make.py script, described in
"Section 2.4"_Section_start.html#start_4 of the manual. Type "Make.py
-h" for help. If run from the src directory, this command will create
src/lmp_omp using src/MAKE/Makefile.mpi as the starting
Makefile.machine:

Make.py -p omp -o omp file mpi :pre

Or you can follow these steps:

cd lammps/src
make yes-user-omp
make machine :pre

The CCFLAGS setting in Makefile.machine needs "-fopenmp" to add OpenMP
support. This works for both the GNU and Intel compilers. Without
this flag the USER-OMP styles will still be compiled and work, but
will not support multi-threading. For the Intel compilers the CCFLAGS
setting also needs to include "-restrict".

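As a minimal sketch, the relevant lines of a Makefile.machine for the
GNU compiler might then look as follows (the -g and -O3 flags are
illustrative; only -fopenmp is required for multi-threading):

CCFLAGS =	-g -O3 -fopenmp
LINKFLAGS =	-g -O3 -fopenmp :pre
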
[Run with the USER-OMP package from the command line:]

The mpirun or mpiexec command sets the total number of MPI tasks used
by LAMMPS (one or multiple per compute node) and the number of MPI
tasks used per node. E.g. the mpirun command in MPICH does this via
its -np and -ppn switches. Ditto for OpenMPI via -np and -npernode.

You need to choose how many threads per MPI task will be used by the
USER-OMP package. Note that the product of MPI tasks * threads/task
should not exceed the physical number of cores (on a node), otherwise
performance will suffer.

Use the "-sf omp" "command-line switch"_Section_start.html#start_7,
75
which will automatically append "omp" to styles that support it. Use
76
the "-pk omp Nt" "command-line switch"_Section_start.html#start_7, to
77
set Nt = # of OpenMP threads per MPI task to use.
79
lmp_machine -sf omp -pk omp 16 -in in.script # 1 MPI task on a 16-core node
80
mpirun -np 4 lmp_machine -sf omp -pk omp 4 -in in.script # 4 MPI tasks each with 4 threads on a single 16-core node
81
mpirun -np 32 -ppn 4 lmp_machine -sf omp -pk omp 4 -in in.script # ditto on 8 16-core nodes :pre
Note that if the "-sf omp" switch is used, it also issues a default
"package omp 0"_package.html command, which sets the number of threads
per MPI task via the OMP_NUM_THREADS environment variable.

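For example, assuming your MPI launcher exports environment variables
to all tasks, the thread count could be set like this:

env OMP_NUM_THREADS=4 mpirun -np 4 lmp_machine -sf omp -in in.script :pre
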
Using the "-pk" switch explicitly allows for direct setting of the
88
number of threads and additional options. Its syntax is the same as
89
the "package omp" command. See the "package"_package.html command doc
90
page for details, including the default values used for all its
91
options if it is not specified, and how to set the number of threads
92
via the OMP_NUM_THREADS environment variable if desired.
94
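As an illustrative sketch, assuming the "neigh" option of the
"package omp"_package.html command, the thread count and threaded
neighbor list builds could be set together like this:

mpirun -np 4 lmp_machine -sf omp -pk omp 4 neigh yes -in in.script :pre
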
[Or run with the USER-OMP package by editing an input script:]

The discussion above for the mpirun/mpiexec command, MPI tasks/node,
and threads/MPI task is the same.

Use the "suffix omp"_suffix.html command, or you can explicitly add an
100
"omp" suffix to individual styles in your input script, e.g.
102
pair_style lj/cut/omp 2.5 :pre
104
You must also use the "package omp"_package.html command to enable the
USER-OMP package, unless the "-sf omp" or "-pk omp" "command-line
switches"_Section_start.html#start_7 were used. It specifies how many
threads per MPI task to use, as well as other options. Its doc page
explains how to set the number of threads via an environment variable
if desired.

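For example, a minimal input script fragment using 4 threads per MPI
task with an explicitly suffixed pair style (the cutoff value is
illustrative) would be:

package omp 4
pair_style lj/cut/omp 2.5 :pre
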
[Speed-ups to expect:]

Depending on which styles are accelerated, you should look for a
reduction in the "Pair time", "Bond time", "KSpace time", and "Loop
time" values printed at the end of a run.

You may see a small performance advantage (5 to 20%) when running a
USER-OMP style (in serial or parallel) with a single thread per MPI
task, versus running standard LAMMPS with its standard
(un-accelerated) styles (in serial or all-MPI parallelization with 1
task/core). This is because many of the USER-OMP styles contain
similar optimizations to those used in the OPT package, as described
in "Section 5.3.6"_accelerate_opt.html.

With multiple threads/task, the optimal choice of MPI tasks/node and
OpenMP threads/task can vary a lot and should always be tested via
benchmark runs for a specific simulation running on a specific
machine, paying attention to guidelines discussed in the next
sub-section.

A description of the multi-threading strategy used in the USER-OMP
package and some performance examples are "presented
here"_http://sites.google.com/site/akohlmey/software/lammps-icms/lammps-icms-tms2011-talk.pdf?attredirects=0&d=1

[Guidelines for best performance:]

For many problems on current generation CPUs, running the USER-OMP
package with a single thread/task is faster than running with multiple
threads/task. This is because the MPI parallelization in LAMMPS is
often more efficient than multi-threading as implemented in the
USER-OMP package. The parallel efficiency (in a threaded sense) also
varies for different USER-OMP styles.

Using multiple threads/task can be more effective under the following
circumstances:

Individual compute nodes have a significant number of CPU cores but
the CPU itself has limited memory bandwidth, e.g. for Intel Xeon 53xx
(Clovertown) and 54xx (Harpertown) quad-core processors. Running one
MPI task per CPU core will result in significant performance
degradation, so that running with 4 or even only 2 MPI tasks per node
is faster. Running in hybrid MPI+OpenMP mode will reduce the
inter-node communication bandwidth contention in the same way, but
offers an additional speedup by utilizing the otherwise idle CPU
cores. :ulb,l

The interconnect used for MPI communication does not provide
sufficient bandwidth for a large number of MPI tasks per node. For
example, this applies to running over gigabit ethernet or on Cray XT4
or XT5 series supercomputers. As in the aforementioned case, this
effect worsens when using an increasing number of nodes. :l

The system has a spatially inhomogeneous particle density which does
not map well to the "domain decomposition scheme"_processors.html or
"load-balancing"_balance.html options that LAMMPS provides. This is
because multi-threading achieves parallelism over the number of
particles, not via their distribution in space. :l

A machine is being used in "capability mode", i.e. near the point
where MPI parallelism is maxed out. For example, this can happen when
using the "PPPM solver"_kspace_style.html for long-range
electrostatics on large numbers of nodes. The scaling of the KSpace
calculation (see the "kspace_style"_kspace_style.html command) becomes
the performance-limiting factor. Using multi-threading allows fewer
MPI tasks to be invoked and can speed up the long-range solver, while
increasing overall performance by parallelizing the pairwise and
bonded calculations via OpenMP. Likewise, additional speedup can
sometimes be achieved by increasing the length of the Coulombic cutoff
and thus reducing the work done by the long-range solver. Using the
"run_style verlet/split"_run_style.html command, which is compatible
with the USER-OMP package, is an alternative way to reduce the number
of MPI tasks assigned to the KSpace calculation, as sketched after
this list. :l,ule

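As a hedged sketch of the last point, the input script would contain
the "run_style verlet/split"_run_style.html command and LAMMPS would
be launched on two partitions via the "-partition" command-line
switch; the 16/4 split and thread count below are illustrative:

mpirun -np 20 lmp_machine -partition 16 4 -sf omp -pk omp 4 -in in.script :pre
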
Additional performance tips are as follows:

The best parallel efficiency from {omp} styles is typically achieved
when there is at least one MPI task per physical processor,
i.e. socket or die. :ulb,l

It is usually most efficient to restrict threading to a single
socket, i.e. use one or more MPI tasks per socket. :l

Several current MPI implementations by default use a processor
affinity setting that restricts each MPI task to a single CPU core.
Using multi-threading in this mode will force the threads to share
that core and thus is likely to be counterproductive. Instead,
binding MPI tasks to a (multi-core) socket should solve this issue,
as shown below. :l,ule

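For example, with OpenMPI (option names vary between MPI
implementations and versions) tasks can be bound to sockets instead
of cores like this:

mpirun -np 4 --bind-to socket lmp_machine -sf omp -pk omp 4 -in in.script :pre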