
ABINIT

ABINIT is an open-source package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave basis.

Prerequisites

ABINIT (as it is installed at CSE) requires the following libraries and software (these correspond to the modules loaded below):

  • PathScale compilers 3.2
  • OpenMPI 1.2.6 (built with PathScale 3.2)
  • ACML 4.1.0 (BLAS/LAPACK)
  • FFTW 3.1.2
  • NetCDF 3.6.3

Installed Versions

We try to support a recent version. Here are the versions that we currently have installed and tested:

  • ABINIT 5.6.4
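
You can check which ABINIT modules are currently published by asking the module system. A quick query, assuming the module lives under the md/ category used throughout this page:

$ module avail md/abinit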

Running

To load ABINIT, you must first load all the prerequisite modules and then load the ABINIT module. To do this in one command, type:

module load compilers/pathscale-3.2 \
            mpi/openmpi-1.2.6-pathscale-3.2 \
            blas/acml-4.1.0-pathscale \
            math/fftw-3.1.2 \
            io/netcdf-3.6.3 \
            md/abinit-5.6.4
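
After loading the modules, it is worth confirming that the stack is in place and that the ABINIT binary is on your PATH. A quick sanity check, assuming the abinit module prepends its bin directory to PATH:

$ module list
$ which abinip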

You should put this command in your submit script. Here is an example:

#!/bin/bash
#
#$ -cwd
#$ -j y
#$ -S /bin/bash
 
module load compilers/pathscale-3.2 \
            mpi/openmpi-1.2.6-pathscale-3.2 \
            blas/scalapack-pathscale \
            blas/acml-4.1.0-pathscale \
            math/fftw-3.1.2 \
            io/netcdf-3.6.3 \
            md/abinit-5.6.4
 
WORKDIR=~/abinit/test/paral
 
# the input, .files and pseudopotential files must already be present in this directory
cd $WORKDIR
 
mpirun abinip < t_kpt+spin.files
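
The file redirected into abinip on standard input is an ABINIT "files" file. It lists, one entry per line: the main input file, the main output file, the root names for input, output and temporary files, and then one pseudopotential file per atom type. A minimal sketch with placeholder names (myrun.* and pseudo.psp are illustrative only; the real names for this test come with the ABINIT distribution):

myrun.in
myrun.out
myrun_i
myrun_o
myrun_tmp
pseudo.psp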

Then change to your data directory and run the following command, where N is the number of CPUs you want to use.

$ qsub -pe mpi N submit.sh
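
Once the job has been submitted, you can monitor it with the usual Grid Engine tools, for example (assuming the standard qstat client is available on the head node):

$ qstat -u $USER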

Build Notes

The following options were used to configure ABINIT (place them in a file called ~/.abinit/build/hostname.ac, where hostname is the name of the build host):

enable_64bit_flags="yes"
prefix="/share/apps/abinit-5.6.4"
CC="mpicc"
with_cc_optflags="-O3"
CXX="mpiCC"
with_cxx_optflags="-O3"
FC="mpif90"
with_fc_optflags="-O3"
with_fc_vendor="pathscale"
enable_mpi="yes"
with_mpi_prefix="/share/apps/openmpi-1.2.6/pathscale-3.2/lib64"
with_mpi_level="2"
with_mpi_cc_libs="-lmpi"
with_mpi_cxx_libs="-lmpi++ -lmpi"
with_mpi_runner="mpirun"
enable_fftw="yes"
enable_fftw_threads="yes"
with_fftw_includes="-I/share/apps/fftw-3.1.2/include"
with_fftw_libs="/share/apps/fftw-3.1.2/lib/libfftw3.a"
with_linalg_libs="-L/share/apps/acml-4.1.0/pathscale64/lib -lacml"
with_linalg_type="acml"
enable_scalapack="no"
enable_netcdf="yes"
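
As a sketch of how to put this file in place, assuming the options above have been saved locally as abinit.ac and that the .ac file really is named after the output of the hostname command:

$ mkdir -p ~/.abinit/build
$ cp abinit.ac ~/.abinit/build/$(hostname).ac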

We had to make some changes to the source. Several calls to MPI_SEND passed the wrong number of arguments (8 instead of the 7 that the Fortran binding expects: buffer, count, datatype, destination, tag, communicator, and the ierror status). We changed all calls to MPI_SEND so that they had the proper signature.
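
If you need to apply the same fix to a different release, one way to locate the affected call sites (a sketch, assuming the Fortran sources live under src/ and use the .F90 extension) is:

$ grep -rni --include='*.F90' 'call mpi_send' src/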

Then just run the usual:

$ module load compilers/pathscale-3.2 \
            mpi/openmpi-1.2.6-pathscale-3.2 \
            blas/acml-4.1.0-pathscale \
            math/fftw-3.1.2 \
            io/netcdf-3.6.3 \
            md/abinit-5.6.4
$ ./configure
$ make -j4
$ cd test && make tests_speed
$ sudo make install
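
Once make install has finished, a quick way to confirm that the build landed under the configured prefix (the path below comes from the prefix setting above) is:

$ ls /share/apps/abinit-5.6.4/bin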

Benchmarks

We ran a few benchmarks that are included with ABINIT.

Parallel Benchmark

We used the input files in abinit-5.6.4/tests/paral/Input for the following tests.

ABINIT Parallel Benchmark: parallel efficiency (%) versus number of CPUs, relative to the single-CPU run time on each machine (758 s on tribe, 755 s on urdarbrunnr).

CPUs   tribe (% of 758s)   urdarbrunnr (% of 755s)
1      100                 100
2      97.7                97.1
4      92.9                93.49
8      88.9                86.61
16     83.9                86.7
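
For reference, the efficiency figures above follow the usual definition of parallel efficiency (assuming T(1) is the single-CPU run time and T(N) the run time on N CPUs):

efficiency(N) = 100 * T(1) / (N * T(N))

For example, 83.9% on 16 CPUs of tribe corresponds to a wall time of roughly 758 / (16 * 0.839), or about 56 seconds.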

Documentation

The ABINIT user manual is available on their website.
