Category Archives: Amber9

A package of molecular dynamics simulation programs.

DHFR @ OSG

Our first researcher using Amber PMEMD on the OSG reports that molecular dynamics runs are four to eight times faster on the OSG than on the infrastructure she had access to previously. That’s for the CPU-only version, i.e. without the Nvidia … Continue reading

Posted in Amber11, Amber9, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

PMEMD for OSG Stats

PMEMD for OSG is live. Gratia statistics for January: all runs are 8-way parallel MPI jobs, so we get eight hours of CPU time per hour of wall time.
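
For the accounting, here is a minimal sketch of that CPU-hour arithmetic; the wall-time figure is hypothetical, only the eight-rank count comes from the post:

```python
# Sketch: Gratia-style CPU-hour accounting for N-way parallel MPI jobs.
# Only the 8-rank figure comes from the post; the wall-time value is hypothetical.

def cpu_hours(wall_hours: float, ranks: int) -> float:
    """Each MPI rank accrues CPU time for the full wall time of the job."""
    return wall_hours * ranks

if __name__ == "__main__":
    ranks = 8     # 8-way parallel PMEMD jobs
    wall = 1.0    # one hour of wall time (hypothetical)
    print(f"{ranks}-way job, {wall} h wall time -> {cpu_hours(wall, ranks)} CPU hours")
```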

Posted in Amber9, Compute Grids, condor, Engage VO, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

On the Open Science Grid Trail

The Open Science Grid is a distributed, heterogeneous network of computing clusters. Its infrastructure and protocols allow members to submit high throughput compute jobs for remote execution. All use is authenticated and authorized via a PKI infrastructure, which associates jobs … Continue reading
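
As a rough illustration of what a submission can look like from the member's side, here is a minimal Condor-G (grid universe) sketch; the gatekeeper host, executable, and proxy path are placeholders, and it assumes a proxy has already been created with grid-proxy-init or voms-proxy-init:

```python
# Sketch: submitting a grid-universe (Condor-G) job toward an OSG gatekeeper.
# The gatekeeper host, executable, and proxy path are hypothetical placeholders.
import subprocess

SUBMIT = """\
universe                = grid
grid_resource           = gt2 gatekeeper.example.edu/jobmanager-pbs
executable              = run_pmemd.sh
output                  = pmemd.out
error                   = pmemd.err
log                     = pmemd.log
x509userproxy           = /tmp/x509up_u1000
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
"""

with open("pmemd.submit", "w") as f:
    f.write(SUBMIT)

# Hand the description to the local HTCondor scheduler for remote execution.
subprocess.run(["condor_submit", "pmemd.submit"], check=True)
```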

High Throughput Parallel Molecular Dynamics on OSG

The Goal: RENCI is working with researchers interested in running high throughput parallel molecular dynamics simulations on OSG. Amber9 PMEMD: The program we’d like to execute is called PMEMD (Particle Mesh Ewald Molecular Dynamics). PMEMD is a high-performance, parallel component of … Continue reading
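
For reference, a typical invocation looks like the sketch below; the input and output file names are placeholders, while the -O/-i/-o/-p/-c/-r options are the standard Amber command line shared by sander and pmemd:

```python
# Sketch: one 8-way parallel PMEMD run launched under OpenMPI.
# File names are hypothetical; the flags are the usual Amber command-line interface.
import subprocess

cmd = [
    "mpirun", "-np", "8",   # eight MPI ranks
    "pmemd",
    "-O",                   # overwrite existing output files
    "-i", "md.in",          # MD control input
    "-o", "md.out",         # human-readable output
    "-p", "prmtop",         # parameter/topology file
    "-c", "inpcrd",         # starting coordinates
    "-r", "restrt",         # restart file written at the end of the run
]
subprocess.run(cmd, check=True)
```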

Posted in Amber9, Compute Grids, condor, Continuous Integration (CI), Engage VO, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI

PMEMD on Blueridge

Baby steps. This is pmemd, compiled against native MPI libraries, executing on the RENCI Blueridge cluster. The job submission workflow uses the RENCI-CI script library. The job_run script uses Globus tools to transfer the pmemd application and input files to Blueridge. … Continue reading
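
A minimal sketch of the kind of staging call a job_run-style script makes; the Blueridge GridFTP endpoint and target directory are hypothetical:

```python
# Sketch: staging the pmemd binary and input files to a remote cluster with GridFTP.
# The gsiftp endpoint and remote directory are hypothetical; this only illustrates
# the globus-url-copy calls a job_run-style script would issue.
import os
import subprocess

files = ["pmemd", "md.in", "prmtop", "inpcrd"]                   # application + inputs
remote = "gsiftp://gridftp.blueridge.example.org/~/pmemd_run/"   # hypothetical endpoint

for name in files:
    src = "file://" + os.path.abspath(name)
    subprocess.run(["globus-url-copy", src, remote + name], check=True)
```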

Posted in Amber9, Compute Grids, Continuous Integration (CI), Engage VO, Globus, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI

Eight Way PMEMD

Since my earlier post about building Amber9, I’ve learned that the Amber code base contains a new-feature development stream and a highly parallel, performance-oriented development stream. The executables these create are called sander and pmemd, respectively. The high performance … Continue reading

Posted in Amber9, Engage VO, High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI

Parallel Amber9

Amber9 is a molecular dynamics software suite with support, via OpenMPI, for parallel execution. This post describes the process of compiling and running the serial and parallel versions on the following machine configuration: Operating System: Ubuntu 10.04; 8 x Intel(R) Core(TM) … Continue reading
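
A rough driver sketch of the two builds, assuming the usual $AMBERHOME/src layout; the configure arguments and make targets shown here are assumptions and should be checked against the Amber9 install notes for your compiler and MPI stack:

```python
# Sketch: driving the serial and parallel (OpenMPI) Amber9 builds from $AMBERHOME/src.
# The configure arguments and make targets are assumptions about the usual Amber9
# workflow; verify them against the Amber9 installation instructions.
import os
import subprocess

src = os.path.join(os.environ["AMBERHOME"], "src")

def run(args):
    subprocess.run(args, cwd=src, check=True)

run(["./configure", "gfortran"])              # serial build (compiler name assumed)
run(["make", "serial"])

run(["./configure", "-openmpi", "gfortran"])  # parallel build against OpenMPI (flag assumed)
run(["make", "parallel"])
```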

Posted in Amber9, Compute Grids, grid, multicore, OSG