Category Archives: pmemd

DHFR @ OSG

Our first researcher using Amber PMEMD on the OSG reports that molecular dynamics runs are four to eight times faster on the OSG than on the infrastructure she had access to previously. That’s for the all-CPU version, i.e. without the NVIDIA … Continue reading

Posted in Amber11, Amber9, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

Amber11 – PMEMD for NVIDIA GPGPU

Molecular Dynamics. Proteins are important and their structures are complex. And then they move. The way they move determines how organisms work … or fail. A protein’s shape determines its function, so motion means its shape is in flux … Continue reading

Posted in Amber11, Compute Grids, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd

PMEMD for OSG Stats

PMEMD for OSG is live. Gratia statistics for January: all runs are 8-way parallel MPI jobs, so we get eight hours of CPU time per hour of wall time.
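The arithmetic behind that accounting is just cores times wall time; here is a minimal Python sketch (the function name and example numbers are illustrative, not taken from the Gratia report):

    # Minimal sketch of the CPU-time accounting: an N-way MPI job accrues
    # N core-hours per wall-clock hour. Numbers below are illustrative.
    def cpu_hours(wall_hours, cores=8):
        return wall_hours * cores

    # e.g. an 8-way PMEMD run occupying a node for 10 wall-clock hours
    print(cpu_hours(10))  # 80 CPU-hours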

Posted in Amber9, Compute Grids, condor, Engage VO, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

High Throughput Parallel Molecular Dynamics on OSG

The Goal. RENCI is working with researchers interested in running high-throughput parallel molecular dynamics simulations on OSG. Amber9 PMEMD. The program we’d like to execute is called PMEMD (Particle Mesh Ewald Molecular Dynamics), a high-performance, parallel component of … Continue reading
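For concreteness, here is a minimal sketch of the kind of 8-way PMEMD launch we have in mind, written as a Python wrapper around mpirun. The file names are placeholders; the -O/-i/-p/-c/-o/-r flags are the standard Amber command-line interface shared by sander and pmemd, and mpirun and the parallel pmemd executable are assumed to be on the PATH.

    # Sketch of an 8-way parallel PMEMD run. Assumes the Amber input files
    # are staged in the working directory. File names are illustrative.
    import subprocess

    cmd = [
        "mpirun", "-np", "8",   # eight MPI ranks, i.e. one whole 8-core node
        "pmemd",
        "-O",                   # overwrite any existing output files
        "-i", "mdin",           # MD control parameters
        "-p", "prmtop",         # topology and force-field parameters
        "-c", "inpcrd",         # starting coordinates
        "-o", "mdout",          # text output
        "-r", "restrt",         # restart file for the next segment
    ]
    subprocess.run(cmd, check=True)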

Posted in Amber9, Compute Grids, condor, Continuous Integration (CI), Engage VO, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI

PMEMD on Blueridge

Baby steps. This is pmemd, compiled against native MPI libraries, executing on the RENCI Blueridge cluster. The job submission workflow uses the RENCI-CI script library. The job_run script uses Globus tools to transfer the pmemd application and input files to Blueridge. … Continue reading
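The staging step can be sketched as below. globus-url-copy is the standard GridFTP client, but the host name, paths, and the stage_in helper are illustrative rather than the actual RENCI-CI job_run script.

    # Sketch of staging pmemd and its inputs to a GridFTP endpoint with
    # globus-url-copy. Host, paths, and helper name are hypothetical.
    import subprocess

    def stage_in(local_path, remote_url):
        """Copy one local file to the remote GridFTP URL."""
        subprocess.run(
            ["globus-url-copy", "file://" + local_path, remote_url],
            check=True,
        )

    for name in ["pmemd", "mdin", "prmtop", "inpcrd"]:
        stage_in("/home/user/run01/" + name,
                 "gsiftp://gridftp.example.org/scratch/run01/" + name)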

Posted in Amber9, Compute Grids, Continuous Integration (CI), Engage VO, Globus, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI

Eight Way PMEMD

Since my earlier post about building Amber9, I’ve learned that the Amber code base contains two streams: one for new development and one that is highly parallel and performance-oriented. The executables these create are called sander and pmemd, respectively. The high-performance … Continue reading

Posted in Amber9, Engage VO, High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI