the Benchmark section of the LAMMPS documentation, and on the
Benchmark page of the LAMMPS WWW site (lammps.sandia.gov/bench).

This directory also has several sub-directories:

FERMI       benchmark scripts for a desktop machine with Fermi GPUs (Tesla)
KEPLER      benchmark scripts for a GPU cluster with Kepler GPUs
POTENTIALS  benchmark scripts for various potentials in LAMMPS

The results for all of these benchmarks are displayed and discussed on
the Benchmark page of the LAMMPS WWW site: lammps.sandia.gov/bench.

The remainder of this file refers to the 5 problems in the top-level
of this directory and how to run them on CPUs, either in serial or
parallel. The sub-directories have their own README files which you
should refer to before running those scripts.

----------------------------------------------------------------------

----------------------------------------------------------------------

Here is a src/Make.py command which will perform a parallel build of a
LAMMPS executable "lmp_mpi" with all the packages needed by all the
examples. This assumes you have an MPI installed on your machine so
that "mpicxx" can be used as the wrapper compiler. It also assumes
you have an Intel compiler to use as the base compiler. You can leave
off the "-cc mpi wrap=icc" switch if that is not the case. You can
also leave off the "-fft fftw3" switch if you do not have the FFTW
(v3) installed as an FFT package, in which case the default KISS FFT
library will be used.

cd src
Make.py -j 16 -p none molecule manybody kspace granular orig \
  -cc mpi wrap=icc -fft fftw3 -a file mpi
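
If you do not want to use Make.py, a traditional make-based build
should produce an equivalent lmp_mpi executable. This is only a
sketch; it assumes your LAMMPS version has these package names and the
standard "mpi" machine makefile (run "make package-status" from src to
see what your version provides):

cd src
make yes-molecule yes-manybody yes-kspace yes-granular
make mpi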

----------------------------------------------------------------------

Here is how to run each problem, assuming the LAMMPS executable is
named lmp_mpi, and you are using the mpirun command to launch parallel
runs:

Serial (one processor runs):

lmp_mpi < in.lj
lmp_mpi < in.chain
lmp_mpi < in.eam
lmp_mpi < in.chute
lmp_mpi < in.rhodo

Parallel fixed-size runs (on 8 procs in this case):

mpirun -np 8 lmp_mpi < in.lj
mpirun -np 8 lmp_mpi < in.chain
mpirun -np 8 lmp_mpi < in.eam
mpirun -np 8 lmp_mpi < in.chute
mpirun -np 8 lmp_mpi < in.rhodo
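
If stdin redirection is inconvenient with your MPI launcher, the same
runs can be launched with the -in command-line switch instead, e.g.:

mpirun -np 8 lmp_mpi -in in.lj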

Parallel scaled-size runs (on 16 procs in this case):

mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 < in.lj
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 < in.chain.scaled
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 < in.eam
mpirun -np 16 lmp_mpi -var x 4 -var y 4 < in.chute.scaled
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 < in.rhodo.scaled
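
As background for the -var switches discussed below: the scaled input
scripts define x,y,z as index-style variables with a default of 1, so
a value given via "-var x 2" on the command line takes precedence. A
minimal sketch of the idiom (not the literal contents of the bench
scripts):

variable  x index 1
variable  y index 1
variable  z index 1
variable  xx equal 20*$x
variable  yy equal 20*$y
variable  zz equal 20*$z
region    box block 0 ${xx} 0 ${yy} 0 ${zz}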

For each of the scaled-size runs you must set 3 variables as -var
command line switches. The variables x,y,z are used in the input