<CENTER><A HREF = "Section_packages.html">Previous Section</A> - <A HREF = "http://lammps.sandia.gov">LAMMPS WWW Site</A> -
<A HREF = "Manual.html">LAMMPS Documentation</A> - <A HREF = "Section_commands.html#comm">LAMMPS Commands</A>
</CENTER>

<P><A HREF = "Section_accelerate.html">Return to Section accelerate overview</A>

<H4>5.3.3 USER-INTEL package
</H4>
<P>The USER-INTEL package was developed by Mike Brown at Intel
Corporation. It provides a capability to accelerate simulations by
offloading neighbor list and non-bonded force calculations to Intel(R)
Xeon Phi(TM) coprocessors (not native mode like the KOKKOS package).
Additionally, it supports running simulations in single, mixed, or
double precision with vectorization, even if a coprocessor is not
present, i.e. on an Intel(R) CPU. The same C++ code is used for both
cases. When offloading to a coprocessor, the routine is run twice,
once with an offload flag.
<P>The USER-INTEL package can be used in tandem with the USER-OMP
package. This is useful when offloading pair style computations to
coprocessors, so that other styles not supported by the USER-INTEL
package, e.g. bond, angle, dihedral, improper, and long-range
electrostatics, can run simultaneously in threaded mode on the CPU
cores. Since fewer MPI tasks than CPU cores will typically be invoked
when running with coprocessors, this enables the extra CPU cores to be
used for useful computation.
<P>If LAMMPS is built with both the USER-INTEL and USER-OMP packages
installed, this mode of operation is made easier to use, because the
"-suffix intel" <A HREF = "Section_start.html#start_7">command-line switch</A> or
the <A HREF = "suffix.html">suffix intel</A> command will both set a second-choice
suffix to "omp" so that styles from the USER-OMP package will be used
if available, after first testing if a style from the USER-INTEL
package is available.
<P>When using the USER-INTEL package, you must choose at build time
whether you are building for CPU-only acceleration or for using the
Xeon Phi in offload mode.

<P>Here is a quick overview of how to use the USER-INTEL package
for CPU-only acceleration:
<UL><LI>specify these CCFLAGS in your src/MAKE/Makefile.machine: -openmp, -DLAMMPS_MEMALIGN=64, -restrict, -xHost
<LI>specify -openmp with LINKFLAGS in your Makefile.machine
<LI>include the USER-INTEL package and (optionally) USER-OMP package and build LAMMPS
<LI>specify how many OpenMP threads per MPI task to use
<LI>use USER-INTEL and (optionally) USER-OMP styles in your input script
</UL>
<P>Note that many of these settings can only be used with the Intel
compiler, as discussed below.
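<P>As an illustration, the relevant lines of a CPU-only Makefile.machine
might look as follows. This is only a sketch, not one of the shipped
makefiles; it assumes the Intel compiler is invoked through the Intel
MPI wrapper mpiicpc, and the remaining optimization flags are examples:

<PRE>CC =        mpiicpc
CCFLAGS =   -g -O3 -openmp -DLAMMPS_MEMALIGN=64 -restrict -xHost
LINK =      mpiicpc
LINKFLAGS = -g -O3 -openmp
</PRE>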
<P>Using the USER-INTEL package to offload work to the Intel(R)
Xeon Phi(TM) coprocessor is the same except for these additional
steps:

<UL><LI>add the flag -DLMP_INTEL_OFFLOAD to CCFLAGS in your Makefile.machine
<LI>add the flag -offload to LINKFLAGS in your Makefile.machine
</UL>
<P>The latter two steps in the first case and the last step in the
coprocessor case can be done using the "-pk intel" and "-sf intel"
<A HREF = "Section_start.html#start_7">command-line switches</A> respectively. Or
the effect of the "-pk" or "-sf" switches can be duplicated by adding
the <A HREF = "package.html">package intel</A> or <A HREF = "suffix.html">suffix intel</A>
commands respectively to your input script.
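<P>For example, these two usages have the same effect (lmp_machine and
in.script are placeholders, as in the examples below):

<PRE>lmp_machine -sf intel -pk intel 1 -in in.script
</PRE>

<P>or, in the input script itself:

<PRE>package intel 1
suffix intel
</PRE>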
<P><B>Required hardware/software:</B>

<P>To use the offload option, you must have one or more Intel(R) Xeon
Phi(TM) coprocessors and use an Intel(R) C++ compiler.
<P>Optimizations for vectorization have only been tested with the
Intel(R) compiler. Use of other compilers may not result in
vectorization, or may give poor performance.

<P>Use of an Intel C++ compiler is recommended, but not required (though
g++ will not recognize some of the settings, so they cannot be used).
The compiler must support the OpenMP interface.
<P>The recommended version of the Intel(R) compiler is 14.0.1.106.
Versions 15.0.1.133 and later are also supported. If using Intel(R)
MPI, versions 15.0.2.044 and later are recommended.
<P><B>Building LAMMPS with the USER-INTEL package:</B>

<P>You can choose to build with or without support for offload to an
Intel(R) Xeon Phi(TM) coprocessor. If you build with support for a
coprocessor, the same binary can be used on nodes with and without
coprocessors installed. However, if you do not have coprocessors
on your system, building without offload support will produce a
smaller binary.
<P>You can do either in one line, using the src/Make.py script, described
in <A HREF = "Section_start.html#start_4">Section 2.4</A> of the manual. Type
"Make.py -h" for help. If run from the src directory, these commands
will create src/lmp_intel_cpu and src/lmp_intel_phi using
src/MAKE/Makefile.mpi as the starting Makefile.machine:

<PRE>Make.py -p intel omp -intel cpu -o intel_cpu -cc icc file mpi
Make.py -p intel omp -intel phi -o intel_phi -cc icc file mpi
</PRE>
<P>Note that this assumes that your MPI and its mpicxx wrapper
are using the Intel compiler. If they are not, you should
leave off the "-cc icc" switch.
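<P>For example, without the Intel compiler the CPU-only build line would
simply be:

<PRE>Make.py -p intel omp -intel cpu -o intel_cpu file mpi
</PRE>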
<P>Or you can follow these steps:

<PRE>cd lammps/src
make yes-user-intel
make yes-user-omp (if desired)
make machine
</PRE>
<P>Note that if the USER-OMP package is also installed, you can use
styles from both packages, as described below.
<P>The Makefile.machine needs a "-fopenmp" flag for OpenMP support in
both the CCFLAGS and LINKFLAGS variables. You also need to add
-DLAMMPS_MEMALIGN=64 and -restrict to CCFLAGS.
<P>If you are compiling on the same architecture that will be used for
the runs, adding the flag <I>-xHost</I> to CCFLAGS will enable
vectorization with the Intel(R) compiler. Otherwise, you must
provide the correct compute node architecture to the -x option
(e.g. -xAVX).
<P>In order to build with support for an Intel(R) Xeon Phi(TM)
coprocessor, the flag <I>-offload</I> should be added to the LINKFLAGS line
and the flag -DLMP_INTEL_OFFLOAD should be added to the CCFLAGS line.
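<P>Extending the CPU-only sketch above, an offload-capable
Makefile.machine might therefore contain (again only a sketch, not one
of the shipped makefiles):

<PRE>CCFLAGS =   -g -O3 -openmp -DLAMMPS_MEMALIGN=64 -restrict -xHost -DLMP_INTEL_OFFLOAD
LINKFLAGS = -g -O3 -openmp -offload
</PRE>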
<P>Example makefiles Makefile.intel_cpu and Makefile.intel_phi are
included in the src/MAKE/OPTIONS directory with settings that perform
well with the Intel(R) compiler. The latter file has support for
offload to coprocessors; the former does not.
<P><B>Notes on CPU and core affinity:</B>

<P>Setting core affinity is often used to pin MPI tasks and OpenMP
threads to a core or group of cores so that memory access can be
uniform. Unless disabled at build time, affinity for MPI tasks and
OpenMP threads on the host will be set by default when using offload
to a coprocessor. In this case, it is unnecessary
to use other methods to control affinity (e.g. taskset, numactl,
I_MPI_PIN_DOMAIN, etc.). This can be disabled in an input script
with the <I>no_affinity</I> option to the <A HREF = "package.html">package intel</A>
command or by disabling the option at build time (by adding
-DINTEL_OFFLOAD_NOAFFINITY to the CCFLAGS line of your Makefile).
Disabling this option is not recommended, especially when running
on a machine with hyperthreading disabled.
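<P>If you do need to manage affinity yourself, the option can be turned
off in the input script, for example (a sketch; the leading 1 is the
number of coprocessors per node, as described below):

<PRE>package intel 1 no_affinity
</PRE>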
<P><B>Running with the USER-INTEL package from the command line:</B>

<P>The mpirun or mpiexec command sets the total number of MPI tasks used
by LAMMPS (one or multiple per compute node) and the number of MPI
tasks used per node. E.g. the mpirun command in MPICH does this via
its -np and -ppn switches. Ditto for OpenMPI via -np and -npernode.
<P>If you plan to compute (any portion of) pairwise interactions using
USER-INTEL pair styles on the CPU, or use USER-OMP styles on the CPU,
you need to choose how many OpenMP threads per MPI task to use. Note
that the product of MPI tasks * OpenMP threads/task should not exceed
the physical number of cores (on a node), otherwise performance will
suffer.
<P>If LAMMPS was built with coprocessor support for the USER-INTEL
package, you also need to specify the number of coprocessors/node and
the number of coprocessor threads per MPI task to use. Note that
coprocessor threads (which run on the coprocessor) are totally
independent from OpenMP threads (which run on the CPU). The default
values for the settings that affect coprocessor threads are typically
fine, as discussed below.
<P>Use the "-sf intel" <A HREF = "Section_start.html#start_7">command-line switch</A>,
180
which will automatically append "intel" to styles that support it. If
181
a style does not support it, an "omp" suffix is tried next. OpenMP
182
threads per MPI task can be set via the "-pk intel Nphi omp Nt" or
183
"-pk omp Nt" <A HREF = "Section_start.html#start_7">command-line switches</A>, which
184
set Nt = # of OpenMP threads per MPI task to use. The "-pk omp" form
185
is only allowed if LAMMPS was also built with the USER-OMP package.
187
<P>Use the "-pk intel Nphi" <A HREF = "Section_start.html#start_7">command-line
188
switch</A> to set Nphi = # of Xeon Phi(TM)
189
coprocessors/node, if LAMMPS was built with coprocessor support. All
190
the available coprocessor threads on each Phi will be divided among
191
MPI tasks, unless the <I>tptask</I> option of the "-pk intel" <A HREF = "Section_start.html#start_7">command-line
192
switch</A> is used to limit the coprocessor
193
threads per MPI task. See the <A HREF = "package.html">package intel</A> command
196
<PRE>CPU-only without USER-OMP (but using Intel vectorization on CPU):
lmp_machine -sf intel -in in.script                # 1 MPI task
mpirun -np 32 lmp_machine -sf intel -in in.script  # 32 MPI tasks on as many nodes as needed (e.g. 2 16-core nodes)
</PRE>
<PRE>CPU-only with USER-OMP (and Intel vectorization on CPU):
lmp_machine -sf intel -pk intel 16 0 -in in.script           # 1 MPI task on a 16-core node
mpirun -np 4 lmp_machine -sf intel -pk omp 4 -in in.script   # 4 MPI tasks each with 4 threads on a single 16-core node
mpirun -np 32 lmp_machine -sf intel -pk omp 4 -in in.script  # ditto on 8 16-core nodes
</PRE>
<PRE>CPUs + Xeon Phi(TM) coprocessors with or without USER-OMP:
lmp_machine -sf intel -pk intel 1 omp 16 -in in.script                      # 1 MPI task, 16 OpenMP threads on CPU, 1 coprocessor, all 240 coprocessor threads
lmp_machine -sf intel -pk intel 1 omp 16 tptask 32 -in in.script            # 1 MPI task, 16 OpenMP threads on CPU, 1 coprocessor, only 32 coprocessor threads
mpirun -np 4 lmp_machine -sf intel -pk intel 1 omp 4 -in in.script          # 4 MPI tasks, 4 OpenMP threads/task, 1 coprocessor, 60 coprocessor threads/task
mpirun -np 32 -ppn 4 lmp_machine -sf intel -pk intel 1 omp 4 -in in.script  # ditto on 8 16-core nodes
mpirun -np 8 lmp_machine -sf intel -pk intel 4 omp 2 -in in.script          # 8 MPI tasks, 2 OpenMP threads/task, 4 coprocessors, 120 coprocessor threads/task
</PRE>
<P>Note that if the "-sf intel" switch is used, it also invokes two
213
default commands: <A HREF = "package.html">package intel 1</A>, followed by <A HREF = "package.html">package
214
omp 0</A>. These both set the number of OpenMP threads per
215
MPI task via the OMP_NUM_THREADS environment variable. The first
216
command sets the number of Xeon Phi(TM) coprocessors/node to 1 (and
217
the precision mode to "mixed", as one of its option defaults). The
218
latter command is not invoked if LAMMPS was not built with the
219
USER-OMP package. The Nphi = 1 value for the first command is ignored
220
if LAMMPS was not built with coprocessor support.
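<P>In other words, running with "-sf intel" and no "-pk" switches is
equivalent to putting these lines at the top of your input script
(a sketch restating the defaults described above):

<PRE>package intel 1
package omp 0
suffix intel
</PRE>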
<P>Using the "-pk intel" or "-pk omp" switches explicitly allows for
direct setting of the number of OpenMP threads per MPI task, and
additional options for either of the USER-INTEL or USER-OMP packages.
In particular, the "-pk intel" switch sets the number of
coprocessors/node and can limit the number of coprocessor threads per
MPI task. The syntax for these two switches is the same as the
<A HREF = "package.html">package omp</A> and <A HREF = "package.html">package intel</A> commands.
See the <A HREF = "package.html">package</A> command doc page for details, including
the default values used for all its options if these switches are not
specified, and how to set the number of OpenMP threads via the
OMP_NUM_THREADS environment variable if desired.
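<P>For example, instead of a "-pk" switch you could set the thread count
through the environment (a sketch assuming a bash-like shell and an MPI
launcher that propagates the environment to all tasks):

<PRE>export OMP_NUM_THREADS=4
mpirun -np 8 lmp_machine -sf intel -in in.script
</PRE>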
<P><B>Or run with the USER-INTEL package by editing an input script:</B>

<P>The discussion above for the mpirun/mpiexec command, MPI tasks/node,
OpenMP threads per MPI task, and coprocessor threads per MPI task is
the same.
<P>Use the <A HREF = "suffix.html">suffix intel</A> command, or you can explicitly add an
241
"intel" suffix to individual styles in your input script, e.g.
243
<PRE>pair_style lj/cut/intel 2.5
245
<P>You must also use the <A HREF = "package.html">package intel</A> command, unless the
"-sf intel" or "-pk intel" <A HREF = "Section_start.html#start_7">command-line
switches</A> were used. It specifies how many
coprocessors/node to use, as well as other OpenMP threading and
coprocessor options. Its doc page explains how to set the number of
OpenMP threads via an environment variable if desired.
<P>If LAMMPS was also built with the USER-OMP package, you must also use
the <A HREF = "package.html">package omp</A> command to enable that package, unless
the "-sf intel" or "-pk omp" <A HREF = "Section_start.html#start_7">command-line
switches</A> were used. It specifies how many
OpenMP threads per MPI task to use, as well as other options. Its doc
page explains how to set the number of OpenMP threads via an
environment variable if desired.
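<P>Putting these pieces together, the top of an input script might look
like the following sketch (the coprocessor and thread counts are
examples only; omit the package omp line if USER-OMP is not installed):

<PRE>package intel 1 omp 4     # 1 coprocessor/node, 4 OpenMP threads per MPI task
package omp 4             # enable USER-OMP styles with 4 threads
suffix intel
pair_style lj/cut 2.5     # becomes lj/cut/intel via the suffix
</PRE>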
<P><B>Speed-ups to expect:</B>

<P>If LAMMPS was not built with coprocessor support when including the
USER-INTEL package, then accelerated styles will run on the CPU using
vectorization optimizations and the specified precision. This may
give a substantial speed-up for a pair style, particularly if mixed or
single precision is used.
<P>If LAMMPS was built with coprocessor support, the pair styles will run
on one or more Intel(R) Xeon Phi(TM) coprocessors (per node). The
performance of a Xeon Phi versus a multi-core CPU is a function of
your hardware, which pair style is used, the number of
atoms/coprocessor, and the precision used on the coprocessor (double,
mixed, or single).

<P>See the <A HREF = "http://lammps.sandia.gov/bench.html">Benchmark page</A> of the
LAMMPS web site for performance of the USER-INTEL package on different
hardware.
<P><B>Guidelines for best performance on an Intel(R) Xeon Phi(TM)
coprocessor:</B>
<UL><LI>The default for the <A HREF = "package.html">package intel</A> command is to have
all the MPI tasks on a given compute node use a single Xeon Phi(TM)
coprocessor. In general, running with a large number of MPI tasks on
each node will perform best with offload. Each MPI task will
automatically get affinity to a subset of the hardware threads
available on the coprocessor. For example, if your card has 61 cores,
with 60 cores available for offload and 4 hardware threads per core
(240 total threads), running with 24 MPI tasks per node will cause
each MPI task to use a subset of 10 threads on the coprocessor. Fine
tuning of the number of threads to use per MPI task or the number of
threads to use per core can be accomplished with keyword settings of
the <A HREF = "package.html">package intel</A> command.
<LI>If desired, only a fraction of the pair style computation can be
offloaded to the coprocessors. This is accomplished by using the
<I>balance</I> keyword in the <A HREF = "package.html">package intel</A> command. A
balance of 0 runs all calculations on the CPU. A balance of 1 runs
all calculations on the coprocessor. A balance of 0.5 runs half of
the calculations on the coprocessor. Setting the balance to -1 (the
default) will enable dynamic load balancing that continuously adjusts
the fraction of offloaded work throughout the simulation. This option
typically produces results within 5 to 10 percent of the optimal fixed
balance; see the example after this list.
<LI>When using offload with CPU hyperthreading disabled, it may help
performance to use fewer MPI tasks and OpenMP threads than available
cores. This is due to the fact that additional threads are generated
internally to handle the asynchronous offload tasks.
<LI>If running short benchmark runs with dynamic load balancing, adding a
short warm-up run (10-20 steps) will allow the load-balancer to find a
near-optimal setting that will carry over to additional runs.
<LI>If pair computations are being offloaded to an Intel(R) Xeon Phi(TM)
coprocessor, a diagnostic line is printed to the screen (not to the
log file), during the setup phase of a run, indicating that offload
mode is being used and indicating the number of coprocessor threads
per MPI task. Additionally, an offload timing summary is printed at
the end of each run. When offloading, the frequency for <A HREF = "atom_modify.html">atom
sorting</A> is changed to 1 so that the per-atom data is
effectively sorted at every rebuild of the neighbor lists.
<LI>For simulations with long-range electrostatics or bond, angle,
dihedral, improper calculations, computation and data transfer to the
coprocessor will run concurrently with computations and MPI
communications for these calculations on the host CPU. The USER-INTEL
package has two modes for deciding which atoms will be handled by the
coprocessor. This choice is controlled with the <I>ghost</I> keyword of
the <A HREF = "package.html">package intel</A> command. When set to 0, ghost atoms
(atoms at the borders between MPI tasks) are not offloaded to the
card. This allows for overlap of MPI communication of forces with
computation on the coprocessor when the <A HREF = "newton.html">newton</A> setting
is "on". The default is dependent on the style being used, however,
better performance may be achieved by setting this option
explicitly.
</UL>
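<P>For example, the offload fraction discussed above can be fixed or
left to the load balancer with the <I>balance</I> keyword (a sketch
assuming 1 coprocessor per node):

<PRE>package intel 1 balance 0.5    # offload half of the pair computation
package intel 1 balance -1     # dynamic load balancing (the default)
</PRE>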
<P><B>Restrictions:</B>

<P>When offloading to a coprocessor, <A HREF = "pair_hybrid.html">hybrid</A> styles
that require skip lists for neighbor builds cannot be offloaded.
Using <A HREF = "pair_hybrid.html">hybrid/overlay</A> is allowed. Only one intel
accelerated style may be used with hybrid styles.
<A HREF = "special_bonds.html">Special_bonds</A> exclusion lists are not currently
supported with offload; however, the same effect can often be
accomplished by setting cutoffs for excluded atom types to 0. None of
the pair styles in the USER-INTEL package currently support the
"inner", "middle", "outer" options for rRESPA integration via the
<A HREF = "run_style.html">run_style respa</A> command; only the "pair" option is
supported.