Category Archives: multicore

NAMD with PBS and Infiniband on NERSC Dirac

Overview: NAMD simulates molecular motion, especially of large molecules, so it’s often used to study molecular docking problems. One particularly interesting class of docking problem is the interaction of protein molecules with other molecules, such as the cell membrane. The … Continue reading
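For a concrete picture, here is a minimal sketch of how a NAMD run might be submitted through PBS on an InfiniBand cluster like Dirac. The queue name, node counts, file names, and the charmrun/namd2 invocation are illustrative assumptions, not details taken from the post.

```python
#!/usr/bin/env python
"""Sketch: write and submit a PBS job that runs an ibverbs build of NAMD.

Queue name, node/core counts, and file names below are hypothetical.
"""
import subprocess
import textwrap

pbs_script = textwrap.dedent("""\
    #PBS -N namd-docking
    #PBS -q dirac_reg
    #PBS -l nodes=2:ppn=8
    #PBS -l walltime=12:00:00
    #PBS -j oe

    cd $PBS_O_WORKDIR

    # Convert the PBS node file into the nodelist format charmrun expects.
    echo "group main" > nodelist
    sed 's/^/host /' $PBS_NODEFILE >> nodelist

    # Launch 16 NAMD processes (2 nodes x 8 cores) over InfiniBand.
    charmrun +p16 ++nodelist nodelist namd2 docking.namd > docking.log
    """)

with open("namd.pbs", "w") as f:
    f.write(pbs_script)

subprocess.check_call(["qsub", "namd.pbs"])  # hand the job to PBS
```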

Posted in Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, NAMD, OSG, Uncategorized

Protected: Grayson – Science Workflow on the Hybrid Grid

There is no excerpt because this is a protected post.

Posted in Compute Grids, condor, Engage VO, GPGPU, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, RENCI, Uncategorized

DHFR @ OSG

Our first researcher using Amber PMEMD on the OSG reports that her molecular dynamics simulations run four to eight times faster on the OSG than on the infrastructure she previously had access to. That’s for the all-CPU version, i.e. without the Nvidia … Continue reading

Posted in Amber11, Amber9, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

Amber11 – PMEMD for NVIDIA GPGPU

Molecular Dynamics: Proteins are important and their structures complex. And then they move. The way they move determines how organisms work … or fail. A protein’s shape determines its function, so motion means its shape is in flux … Continue reading

Posted in Amber11, Compute Grids, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd

PMEMD for OSG Stats

PMEMD for OSG is live. Gratia statistics for January: all runs are 8-way parallel MPI jobs, so we get eight hours of CPU time for every hour of wall time.
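As a sanity check on those numbers, the CPU-hour arithmetic is simply cores times wall time; the sketch below uses made-up wall-time figures purely to illustrate it.

```python
# Minimal sketch of the CPU-time arithmetic behind the Gratia numbers:
# an 8-way MPI job accrues 8 CPU-hours for every hour of wall time.
# The wall-time figures are invented for illustration.
CORES_PER_JOB = 8

wall_hours = [12.0, 7.5, 20.25]                 # hypothetical wall clock per job
cpu_hours = [CORES_PER_JOB * h for h in wall_hours]

print(f"{sum(wall_hours)} wall hours -> {sum(cpu_hours)} CPU hours")
```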

Posted in Amber9, Compute Grids, condor, Engage VO, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

On the Open Science Grid Trail

The Open Science Grid is a distributed, heterogeneous network of computing clusters. Its infrastructure and protocols allow members to submit high-throughput compute jobs for remote execution. All use is authenticated and authorized via a PKI infrastructure that associates jobs … Continue reading
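As a rough illustration of what PKI-backed submission looks like in practice, here is a sketch of a Condor-G style grid-universe submission driven from Python. The gatekeeper address, proxy path, and executable are placeholders, and the exact workflow in the post may differ; the submit-file attributes (universe, grid_resource, x509userproxy) are standard Condor ones.

```python
"""Sketch: PKI-authenticated job submission to a grid site via Condor-G.

Gatekeeper host, proxy path, and executable are placeholders; the real
workflow described in the post may differ.
"""
import subprocess

# Jobs run under the identity in the user's X.509 proxy certificate,
# typically created beforehand with voms-proxy-init. Fail fast if absent.
subprocess.check_call(["voms-proxy-info", "-exists"])

submit_file = """\
universe      = grid
grid_resource = gt2 gatekeeper.example.edu/jobmanager-pbs
executable    = run_simulation.sh
x509userproxy = /tmp/x509up_u1000
output        = job.out
error         = job.err
log           = job.log
queue
"""

with open("osg_job.sub", "w") as f:
    f.write(submit_file)

subprocess.check_call(["condor_submit", "osg_job.sub"])
```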

Aside | Posted on by

High Throughput Parallel Molecular Dynamics on OSG

The Goal: RENCI is working with researchers interested in running high-throughput parallel molecular dynamics simulations on OSG. Amber9 PMEMD: The program we’d like to execute is called PMEMD (Particle Mesh Ewald Molecular Dynamics). PMEMD is a high-performance, parallel component of … Continue reading
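To make the target workload concrete, here is a minimal sketch of an 8-way parallel PMEMD invocation as it might appear inside an OSG job wrapper. The mpirun command line is an assumption; the file names are just the conventional Amber ones.

```python
"""Sketch: an 8-way parallel Amber9 PMEMD run inside a job wrapper.

The mpirun invocation is assumed; input/output names are the Amber defaults.
"""
import subprocess

cmd = [
    "mpirun", "-np", "8",   # 8-way parallel, matching the HTPC slot size
    "pmemd",                # Amber9's parallel Particle Mesh Ewald engine
    "-O",                   # overwrite existing output files
    "-i", "mdin",           # MD control input
    "-o", "mdout",          # human-readable output
    "-p", "prmtop",         # topology and parameters
    "-c", "inpcrd",         # starting coordinates
    "-r", "restrt",         # restart file written at the end of the run
]
subprocess.check_call(cmd)
```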

Posted in Amber9, Compute Grids, condor, Continuous Integration (CI), Engage VO, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI