Eight Way PMEMD

Since my earlier post about building Amber9, I’ve learned that the Amber code base contains the main development stream and a highly parallel, performance-oriented stream. The executables these produce are called sander and pmemd, respectively.

The high performance version is the one I need.

I’ve added tools for building pmemd to the Amber tools component of RENCI-CI. The tools currently assume one particular configuration out of a large range of possibilities, but the one they implement is a reasonable choice.

Three key variables govern the configuration of a pmemd build:

  • Platform: A combination of the operating system and hardware architecture of the execution target. I used linux_em64t for my first build.
  • Compiler: Various Fortran compilers are supported. I used the Intel Fortran compiler, version 11. Although the Amber9 source probably predates the Intel Fortran 11 release, pmemd compiles with it and passes all of its tests.
  • MPI Library: Again, a wide variety of MPI libraries is supported. I used mpich2, on a recommendation from the developer of pmemd, so I’m just not going to argue with that.

At a high level, the steps I took building pmemd are:

  • Install MPI Libraries. It turns out the most recent version of MPICH2 has a bug that crashes pmemd: I first installed mpich2-1.2.1p1, which fails the tests every time with issues like those described here. So I fell back to mpich2-1.1.1p1, which works.
  • Use Intel Fortran. Select the Fortran compiler. While opinions on this are far-ranging, a significant contingent believes Intel Fortran provides better performance.
  • Build. The core of this step consists of running configure with:
    • bash configure linux_em64t ifort mpich2
  • Test. In part because of the use of mpich2, this step is somewhat involved: mpich2 requires that a daemon called mpd be running, which is started with mpdboot before the tests run and shut down afterward. The tests are timed.
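The test cycle under mpich2’s process manager can be sketched roughly as follows. This is only a sketch: AMBERHOME, the make target, and the DO_PARALLEL setting are assumptions, not taken from the RENCI-CI tooling.

```shell
#!/bin/sh
# Hypothetical sketch of the pmemd test cycle under mpich2's mpd
# process manager; AMBERHOME and the make target are assumptions.
run_pmemd_tests() {
    np=${1:-8}                          # MPI ranks to test with
    mpdboot -n 1 --ncpus="$np" &&       # start the mpd daemon ring
    mpdtrace &&                         # confirm the ring is up
    ( cd "$AMBERHOME/test" &&
      time make test.pmemd DO_PARALLEL="mpiexec -n $np" )
    mpdallexit                          # always shut the ring down
}

# Typical use on an eight-core host:
#   run_pmemd_tests 8
```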
[scox@scox:~]$ renci_ci_amber_tools
[scox@scox:~]$ mpich2_install
[scox@scox:~]$ amber_fortran --ifort
[scox@scox:~]$ amber_install_pmemd
[scox@scox:~]$ amber_test_pmemd --np=8

The commands above produce an enormous amount of output of course.
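One way to keep that output manageable is to capture it to a log and scan for failures afterward. A sketch, with the caveat that the “possible FAILURE” marker is an assumption about what Amber’s test harness prints:

```shell
# Count suspected failures in a pmemd test log; the 'possible
# FAILURE' marker is an assumption about Amber's test output.
count_failures() {
    grep -ci 'possible FAILURE' "$1"
}

# Typical use, with the RENCI-CI wrapper from above:
#   amber_test_pmemd --np=8 2>&1 | tee pmemd_test.log
#   count_failures pmemd_test.log
```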

Here’s how my first pmemd build behaves running the test suite on an eight-core machine, first with two processes:

[scox@scox:~]$ amber_test_pmemd --np=2

And here it is with all eight:

[scox@scox:~]$ amber_test_pmemd --np=8

The eight-way version is consistently about twenty percent faster.
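For reference, that percentage comes from comparing the two timed runs. A small sketch of the arithmetic, using illustrative timings rather than the actual suite numbers:

```shell
# Percent speedup of run B over run A, given wall-clock seconds.
# The example timings below are illustrative only.
speedup() {
    awk -v a="$1" -v b="$2" 'BEGIN { printf "%.0f\n", (1 - b / a) * 100 }'
}

speedup 600 480    # a 600 s two-way run vs a 480 s eight-way run -> 20
```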

This entry was posted in Amber9, Engage VO, High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI.
