Main Page

From GPUMD

!!! Warnings !!!

  • This website documents the developing version of GPUMD.
  • Because both the code and the documentation are under development, they can be inconsistent before a new release.
  • We suggest using the latest released version, which is currently GPUMD-v2.4.1.
  • For each released version, there is a corresponding user manual in a PDF file.
  • We expect to release GPUMD-v2.5 by the end of February 2020.

What is GPUMD?

  • GPUMD stands for Graphics Processing Units Molecular Dynamics. It is a molecular dynamics (MD) code fully implemented on graphics processing units (GPUs). It is written in CUDA C/C++ and requires a CUDA-enabled Nvidia GPU with compute capability no less than 3.5.
  • It is super fast. It is highly efficient for MD simulations with many-body potentials such as the Tersoff potential. Using a single powerful GPU such as a Tesla P100, it can run 100 MD steps within one second for a one-million-atom system. See the tutorials for details.
  • It is good for MD simulations with many-body potentials. We have implemented quite a few many-body potentials as well as a few two-body potentials. One can also define multiple potentials for a complicated system.
Potentials implemented in GPUMD
The Tersoff-1989 potential
The Tersoff-1988 potential
The Tersoff-mini potential
The embedded atom method (EAM) potential
The Stillinger-Weber potential
The Vashishta potential
The REBO-LJ potential for Mo-S systems
The Lennard-Jones potential
The Buckingham-Coulomb potential
The force constant potential
  • GPUMD supports free and periodic boundary conditions in each direction. Fixed boundary conditions can also be realized by fixing some atoms. Both orthogonal and triclinic boxes are supported.
  • GPUMD builds the neighbor list efficiently on the GPU. By default, the neighbor list is not updated during a run. When the neighbor list needs to be updated during a run, one only has to specify a skin distance and the code will automatically determine when to rebuild the list (see the fragment after this list).
  • The velocity-Verlet integration scheme is used for all the ensembles. Within this integration scheme, one can use the NVE, NVT, and NPT ensembles. For the NVT ensemble, the Berendsen thermostat, the Nose-Hoover chain thermostat, the Langevin thermostat, and the Bussi-Donadio-Parrinello thermostat have been implemented. For the NPT ensemble, only the Berendsen barostat is implemented.
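
As an illustration, here is a small run.in fragment using the neighbor, ensemble, and run keywords. The keyword names appear in the keyword table later on this page, but the ensemble option name nvt_bdp and all parameter values (skin distance, temperatures, thermostat coupling) are illustrative assumptions that should be checked against the manual for your version:

    # skin distance in angstrom (value illustrative)
    neighbor 1.0
    # NVT at 300 K with the Bussi-Donadio-Parrinello thermostat (parameters illustrative)
    ensemble nvt_bdp 300 300 100
    run 100000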

Download and installation

Download

  • All the releases of GPUMD can be downloaded from GitHub.
  • The latest stable release is GPUMD-v2.4.1.
  • The documentation on this site is targeted at the developing version. For each released version, there is a corresponding user manual in the PDF format.

Installation

Prerequisites

  • Hardware: You need an Nvidia GPU card with compute capability no less than 3.5 (a small CUDA program to check this is sketched after this list).
  • Software:
    • A CUDA toolkit, version 9.0 or newer.
    • On Linux, you need a relatively new g++ compiler.
    • On Windows,
      • We recommend installing MinGW, which contains the g++.exe compiler and the make.exe program that we need.
      • Another option is to install Microsoft Visual Studio (we only need the C++ development tools). Specifically, we need the cl.exe compiler. In this case, we also need a 64-bit version of make.exe, which can be downloaded here: http://www.equation.com/servlet/equation.cmd?fa=make.
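
If you are not sure about the compute capability of your card, the following small CUDA program (our own sketch, not part of GPUMD) queries it through the standard CUDA runtime API. Compile it with nvcc check_cc.cu -o check_cc:

    // check_cc.cu: print the compute capability of each visible GPU
    #include <cstdio>
    #include <cuda_runtime.h>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int n = 0; n < count; ++n)
        {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, n);
            // GPUMD requires compute capability (major.minor) no less than 3.5
            printf("GPU %d: %s, compute capability %d.%d\n", n, prop.name, prop.major, prop.minor);
        }
        return 0;
    }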

Compile the gpumd executable

  • To build gpumd, one just needs to go to the src directory and type make. When the compilation finishes, an executable named gpumd will be generated in the src directory.
  • In the prepared makefile, we have provided the following compiler options
     CFLAGS = -std=c++11 -O3 -arch=sm_35 -DDEBUG 
    • If you want to have simulations with different initial velocities for different runs, remove the -DDEBUG option.
    • The options -std=c++11 and -O3 will be used by the host C++ compiler only. If you are using the cl.exe compiler from Visual Studio, you can remove the flag -std=c++11 because C++11 features are enabled by default in cl.exe. You can also add the flag
      -Xcompiler "/wd4819"
      to suppress many warnings related to Unicode.
    • The -arch=sm_35 option is for nvcc and is equivalent to
 
    -gencode arch=compute_35,code=sm_35 \
    -gencode arch=compute_35,code=compute_35

The first line will be used to generate a cubin file targeted at GPUs with a real compute capability of 3.5. The second line will be used to generate a PTX file with a virtual GPU architecture of compute capability 3.5. The cubin file can run directly on Kepler GPUs with compute capability no less than 3.5 (such as the Tesla K40 and K80). If the code is run on GPUs with newer architectures, the PTX file will be just-in-time compiled into suitable cubin files. So the compiling option -arch=sm_35 ensures that the compiled code can run on any GPU with compute capability no less than 3.5. That said, it might not be optimal to generate cubin files from a PTX file with a low virtual architecture. If you are using CUDA 10.1, you can change CFLAGS to:

   CFLAGS = -std=c++11 -O3 \
       -gencode arch=compute_35,code=sm_35 \
       -gencode arch=compute_50,code=sm_50 \
       -gencode arch=compute_60,code=sm_60 \
       -gencode arch=compute_70,code=sm_70 \
       -gencode arch=compute_75,code=sm_75 \
       -gencode arch=compute_75,code=compute_75
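
If you know exactly which GPU you will run on, you can also target that architecture alone. For example (our suggestion, not part of the prepared makefile), a Tesla V100 has compute capability 7.0, so one could use:

    CFLAGS = -std=c++11 -O3 -gencode arch=compute_70,code=sm_70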

Compile the phonon executable

  • To build phonon, one just needs to go to the src directory and type make -f makefile.phonon. When the compilation finishes, an executable named phonon will be generated in the src directory.
  • Because phonon uses the cuSolver library, it requires a relatively new CUDA version. We have tested it with CUDA 9.0 and CUDA 10.0.

Test GPUMD

  • Go to the directory where you can see src.
  • Type src/gpumd < examples/input_gpumd.txt to run the examples in examples/gpumd.
  • Type src/phonon < examples/input_phonon.txt to run the examples in examples/phonon.

User Manual

Run the executables

  • After installing GPUMD, one should have two executables, src/gpumd and src/phonon.
  • To run either executable, one has to prepare some input files and a driver input file, which makes it possible to perform one or more simulations with a single launch of the executable.
    • The driver input file should have the following format:
   number_of_simulations
   path_1
   path_2
   ...

Here number_of_simulations is the number of individual simulations you want to run within a single launch of the executable (src/gpumd or src/phonon), and path_n is the path of the directory containing the actual input files for the n-th simulation.

    • Suppose the driver input file is named driver_input_file and is located in the directory from which the src directory is visible; we can then run the src/gpumd executable using the following command:
   src/gpumd < driver_input_file

Output files will be created in the folders containing the corresponding input files. The src/phonon executable can be run in a similar way.

    • Example 1. Consider a driver input file which reads
   1
   examples/ex1

This means that there will be one simulation and the actual input files for this simulation are prepared in the directory examples/ex1. Here the relative path is used, but one can also use the absolute path.

    • Example 2. Consider another driver input file:
   4
   examples/ex1
   examples/ex2
   examples/ex3
   examples/ex4

In this case, it means that four sets of inputs will be processed consecutively. There is no limit on the number of simulations (directories) in the driver input file.

    • Example 3. If you want to do 10 independent calculations for the same examples/ex3 directory, the driver input file can be:
   10
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3
   examples/ex3

This usage is very common in studies where one has to perform an ensemble average over many independent simulations. For example, calculating transport coefficients using a Green-Kubo relation usually requires many independent simulations, each with different initial conditions. In GPUMD, results from the current simulation will always be appended to the appropriate output files if they already exist.

    • Example 4. If the driver input file reads
   2
   examples/ex1
   examples/ex2
   examples/ex3
   examples/ex4

The code will only run the first two simulations, ignoring the remaining ones.

    • Example 5. If the driver input file reads
   4
   examples/ex1
   examples/ex2

The code will run the first two simulations and will then report an error message and exit when it attempts to read more input files.

    • One has to make sure that the actual input files exist. Otherwise, the code will report an error message complaining that some file cannot be opened, and exit.
    • If you prefer not to use such a driver input file and just want to run one simulation from the folder containing the input files, you can use the following command:
   echo '1 ./' | ../src/gpumd

Inputs and outputs

Inputs for src/gpumd

  • To run one simulation using the src/gpumd executable, one has to prepare at least two input files:
The input files for src/gpumd
Input filename Brief description
xyz.in Define the simulation model
run.in Define the simulation protocol
  • The run.in file is used to define the simulation protocol for gpumd. The code will first check the whole file; if there is any invalid item in this file, the code will report an error message and exit. In this input file, blank lines and lines starting with # are ignored, so one can write comments after #. All the other lines should be of the following form:
keyword parameter_1 parameter_2 ...
  • Here is the complete list of the keywords:
The keywords for gpumd
Keyword Brief description
velocity Set up the initial velocities with a given temperature
potential_definition Set up how potentials are assigned to atoms
potential Set up a single potential
ensemble Specify an integrator for a run
time_step Specify the time step for integration
neighbor Require neighbor list updating
fix Fix (freeze) some atoms
deform Deform the simulation box
dump_thermo Dump some thermodynamic quantities
dump_position Dump the atom positions
dump_restart Dump a restart file
compute Compute some time- and space-averaged quantities
compute_shc Calculate spectral heat current
compute_dos Calculate the phonon density of states (PDOS)
compute_sdc Calculate the self diffusion coefficient (SDC)
compute_hac Calculate thermal conductivity using the EMD method
compute_hnemd Calculate thermal conductivity using the HNEMD method
run Run a number of steps
  • The overall structure of a run.in file is as follows (a minimal sketch is given after this list):
    • First, set up the initial velocities using the velocity keyword and set up the potential model using the potential and optionally the potential_definition keyword.
    • Specify an integrator using the ensemble keyword and optionally add keywords to further control the evolution and measurement processes.
    • Use the keyword run to run a number of steps according to the above settings.
    • One can repeat the above two steps.
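
To make this structure concrete, here is a minimal run.in sketch assembled from the keywords in the table above. The keyword names come from the table, but the potential file path and all numerical parameters are illustrative assumptions; consult the manual for your version for the exact parameter lists:

    # minimal run.in sketch (all values are illustrative)
    # hypothetical potential file path
    potential potentials/si.tersoff
    # initial temperature in K
    velocity 300
    time_step 1
    # equilibration in the NVT ensemble (thermostat parameters illustrative)
    ensemble nvt_ber 300 300 100
    dump_thermo 100
    run 10000
    # production run in the NVE ensemble
    ensemble nve
    run 100000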

Outputs for src/gpumd

The output files for src/gpumd
Output filename Generated by Brief description Output mode
thermo.out dump_thermo Some global thermodynamic quantities Append
movie.xyz dump_position Trajectory (atom positions) Append
restart.out dump_restart The restart file Overwrite
compute.out compute Time and space (group) averaged quantities Append
hac.out compute_hac Thermal conductivity data from the EMD method Append
kappa.out compute_hnemd Thermal conductivity data from the HNEMD method Append
shc.out compute_shc Spectral heat current data Append
dos.out, mvac.out compute_dos Phonon density of states data Append
sdc.out compute_sdc Self diffusion coefficient data Append

Inputs for src/phonon

  • To run one simulation using the src/phonon executable, one has to prepare at least four input files:
The input files for src/phonon
Input filename Brief description
xyz.in Define the simulation model
basis.in Define the mapping from the atom label to the basis label
kpoints.in Specify the k-points
phonon.in Define the simulation protocol


  • The phonon.in file is used to define the simulation protocol for src/phonon. In this input file, blank lines and lines starting with # are ignored, so one can write comments after #. All the other lines should be of the following form:
   keyword parameter_1 parameter_2 ...
  • Here is the complete list of the keywords:
The keywords for src/phonon
Keyword Brief description
potential Set up a single potential
potential_definition Set up how potentials are assigned to atoms
cutoff The cutoff distance used for calculating the force constants
delta The finite displacement used in calculating the force constants
  • The overall structure of a phonon.in file is as follows (a minimal sketch is given after this list):
    • First, set up the potential model using the potential and optionally the potential_definition keyword.
    • Then use the other keywords to set up some parameters.
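
For concreteness, here is a minimal phonon.in sketch using the keywords from the table above; the potential file path and the numerical values for cutoff and delta are illustrative assumptions:

    # minimal phonon.in sketch (values are illustrative)
    # hypothetical potential file path
    potential potentials/si.tersoff
    # cutoff distance (in angstrom) for calculating the force constants
    cutoff 5.0
    # finite displacement (in angstrom) for calculating the force constants
    delta 0.005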

Outputs for src/phonon

The output files for src/phonon
Output filename Brief description Output mode
D.out Dynamical matrices [math]D(\vec{k})[/math] for the input k points Overwrite
omega2.out Squared phonon frequencies [math]\omega^2(\vec{k})[/math] for the input k points Overwrite

Tutorials

Tutorials for gpumd

Tutorials for gpumd
Tutorial name Brief description
Tutorial: Thermal expansion Study thermal expansion of silicon crystal from 100 K to 1000 K
Tutorial: Density of states Calculate the vibrational density of states of graphene at 300 K
Tutorial: Thermal conductivity from EMD Calculate the lattice thermal conductivity of graphene at 300 K using the EMD method
Tutorial: Thermal transport from NEMD and HNEMD Calculate the spectral conductance and conductivity of graphene using the NEMD, HNEMD, and spectral decomposition methods

Tutorials for phonon

Tutorials for phonon
Tutorial name Brief description
Tutorial: Phonon dispersion Calculate the phonon dispersion of silicon crystal

Want to understand GPUMD? Study the theoretical formulations

Here are the Theoretical formulations of GPUMD.

Have questions? Use the mailing list

Want to contribute?

  • The GPUMD code was first developed by Zheyong Fan (postdoc at Aalto University; brucenju(at)gmail.com) and his colleagues Ville Vierimaa, Mikko Ervasti, and Ari Harju (all previously at Aalto University) during 2012-2017. In 2018, Alexander J. Gabourie (PhD candidate at Stanford University; gabourie(at)stanford.edu) joined and is now an active developer.
  • If you want to become a developer of GPUMD, please note the following:
    • Check the issues on the GitHub page of GPUMD.
    • We want to keep GPUMD as a standalone code. So we only use standard C, C++ and CUDA libraries.
    • correctness = efficiency > clarity > flexibility.
    • Some coding styles we tried to follow:
Coding styles for GPUMD
Item Style
Naming Use snake_case instead of CamelCase
Variable definition Define variables as late as possible
Indentation Use four spaces (instead of tabs) to indent.
Line width Try to keep every line no longer than 80 characters, but do not stick to this rigidly if a longer line looks better.
Brace placement Use the Allman style. An example is
for (int n = 0; n < 10; ++n) 
{
    // do something
}
Function with many arguments Use a style similar to the Allman style for brace placement:
my_function
(
    argument_1, argument_2, argument_3, argument_4, 
    argument_5, argument_6, argument_7, argument_8
);
  • Units system adopted
    • We use the following basic units
      • Energy: eV (electron volt)
      • Length: A (angstrom)
      • Mass: amu (atomic mass unit)
      • Temperature: K (kelvin)
      • Charge: e (elementary charge)
    • The units for all the quantities are thus fixed.
    • One only needs to perform unit conversions when dealing with inputs and outputs; everywhere else, all quantities are defined in the above units system.
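    • For example, the derived unit of time in this system is [math]\mathrm{\AA}\sqrt{\mathrm{amu}/\mathrm{eV}} \approx 1.018\times 10^{-14}~\mathrm{s} \approx 10.18~\mathrm{fs}[/math]; input and output quantities involving time (such as the time step, which is given in femtoseconds) are converted using this factor.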

Acknowledgements