Parallel Amber9

Amber9 is a molecular dynamics software suite with support for parallel execution via MPI; here I use OpenMPI. This post describes compiling and running the serial and parallel versions on the following machine configuration:

  • Operating System: Ubuntu 10.04
  • 8 x Intel(R) Core(TM) i7 CPU Q720 @ 1.60GHz
  • Fortran compiler: gfortran

Prerequisites:

  • Amber9: The sources are available from the Amber web site under the licensing terms described there. Build instructions are in the accompanying README.
  • OpenMPI:  I used version 1.4.2. Sources are openly available and build instructions are online.
  • GFortran: This is open source and can be installed with the apt package manager on Debian systems.
    • sudo apt-get install gfortran
  • Other: C shell and assorted X development libraries. A command like this will work on Ubuntu and other Debian-based systems:
    • sudo apt-get install csh libxt-dev libxtst-dev
  • Environment: Set the following environment variable:
$ export AMBERHOME=<install-dir>/amber9
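
To keep the setting across login sessions, you can also append it to your shell startup file (a bash example; substitute your actual install path for <install-dir>):

$ echo 'export AMBERHOME=<install-dir>/amber9' >> ~/.bashrc
$ source ~/.bashrc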

Building: Serial

$ cd ${AMBERHOME}/src
$ ./configure -verbose gfortran
$ make serial
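
If the build completes, the serial executables should land in ${AMBERHOME}/exe; a quick sanity check (sander is the main MD engine) is:

$ ls -l ${AMBERHOME}/exe/sander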

Test Execution: Serial

Test execution is as simple as:

$ make test.serial
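
The test harness compares each run against saved reference output; if I recall the mechanics correctly, failed comparisons leave .dif files behind, so an empty listing from something like this is a good sign:

$ find ${AMBERHOME}/test -name "*.dif"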

The system resource monitor shows that during execution of the serial test, one processor gets the work:

[Figure: Serial execution of Amber9 on an 8-core system.]

Building OpenMPI

Use the following commands (as root, or as a user with write access to <install-base>) to build and install OpenMPI. Replace <install-base> with a directory of your choosing.

$ tar xzvf openmpi-1.4.2.tar.gz
$ cd openmpi-1.4.2
$ ./configure --prefix=<install-base>/openmpi-1.4.2
$ make all install

This creates an isolated instance of the OpenMPI libraries, header files, and executables under <install-base>, which makes it easy to control exactly which OpenMPI version the Amber build uses.
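
Before building Amber against it, it can help to put this OpenMPI instance first on the PATH and confirm the compiler wrapper and launcher resolve to the new install (paths follow the <install-base> convention above):

$ export PATH=<install-base>/openmpi-1.4.2/bin:$PATH
$ export LD_LIBRARY_PATH=<install-base>/openmpi-1.4.2/lib:$LD_LIBRARY_PATH
$ which mpif90 mpirun
$ mpirun --version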

Building: Parallel

$ cd ${AMBERHOME}/src
$ export MPI_HOME=<install-base>/openmpi-1.4.2
$ export DO_PARALLEL="mpirun -np 2"
$ ./configure -verbose -openmpi gfortran
$ sed -e "s,FC=.*$,FC=$MPI_HOME/bin/mpif90," config.h > config.h.fixed
$ mv config.h.fixed config.h
$ make parallel
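
The sed step simply points the FC entry in config.h at the OpenMPI mpif90 wrapper so that the MPI headers and libraries are picked up automatically. Two quick sanity checks, assuming the parallel executable is named sander.MPI as in typical Amber9 builds:

$ grep 'FC=' config.h                 # should now reference .../openmpi-1.4.2/bin/mpif90
$ ls -l ${AMBERHOME}/exe/sander.MPI   # present once "make parallel" finishes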
 

Test Execution: Parallel

Execute the parallel tests with the following commands:

$ export DO_PARALLEL="mpirun -np 2"
$ make test.parallel

The system resource monitor shows that during parallel execution, multiple processors are used. With the number of processors set to 2 the system monitor looks like this:

[Figure: Two-way parallel Amber9 on an 8-core system.]

With 8 processors selected (DO_PARALLEL="mpirun -np 8") it looks like this:

[Figure: Eight-way parallel Amber9 on an 8-core system.]
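
The same mechanism applies outside the test suite: launch sander.MPI under mpirun with whatever process count suits the job. The input and output file names below are placeholders for illustration only:

$ mpirun -np 8 ${AMBERHOME}/exe/sander.MPI -O -i md.in -o md.out -p prmtop -c inpcrd -r restrt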

Not all the tests are configured to run with an arbitrary number of processors. This output, for example, was seen running with 8 processors:

 

==============================================================
cd LES_GB; ./Run.LES
SANDER: LES+GB1: GB/LES GB1 diff coords
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
Must have more residues than processors!
Must have more residues than processors!
Must have more residues than processors!
Must have more residues than processors!
Must have more residues than processors!
Must have more residues than processors!
Must have more residues than processors!
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 32242 on
node scox exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[scox:32241] 7 more processes have sent help message help-mpi-api.txt / mpi-abort
[scox:32241] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
./Run.LES:  Program error
make[1]: *** [test.sander.LES] Error 1
make[1]: Leaving directory `/home/scox/app/amber9/test'
make: *** [test.sander.LES.MPI] Error 2

I reran the parallel tests with the number of processors set to 2 to get a successful run. More reading to be done on this subject.
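
The GB/LES failure above is consistent with that test system simply having fewer residues than the eight MPI ranks. As a rough way to check the residue count of a given system (assuming the Amber 7-and-later prmtop layout, where NRES is the twelfth entry in the POINTERS section, and where prmtop is the topology file for the test in question):

$ awk '/%FLAG POINTERS/{f=1;next} f&&/%FORMAT/{next} f{n++; if(n==2){print "NRES =",$2; exit}}' prmtop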

 
