Author Archives: stevencox

DHFR @ OSG

Our first researcher using Amber PMEMD on the OSG reports that her molecular dynamics simulations run four to eight times faster than on the infrastructure she had access to previously. That’s for the all-CPU version, i.e. without the Nvidia …

Posted in Amber11, Amber9, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

Amber11 – PMEMD for NVIDIA GPGPU

Molecular Dynamics: Proteins are important and their structures are complex. And then they move. The way they move determines how organisms work … or fail. A protein’s shape determines its function, so motion means its shape is in flux …

Posted in Amber11, Compute Grids, Engage VO, GPGPU, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd

CampusFactory at NCSA Lincoln

I recently set up CampusFactory on NCSA Lincoln to flock jobs from the new Engage submit node. The CampusFactory is a Condor job submitted to a personal Condor instance. The job executes the Factory, which is implemented as a Python …
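For reference, flocking between two Condor pools is normally wired up with a pair of configuration knobs. Here is a minimal sketch assuming standard HTCondor settings; the host names are placeholders, not the real Engage or Lincoln machines:

    # condor_config.local on the submit side (host names are hypothetical)
    # Let idle jobs flock to the remote pool's central manager.
    FLOCK_TO = cm.remote-pool.example.edu

    # condor_config.local on the remote pool's central manager
    # Accept flocked jobs from the submit node and authorize it to write.
    FLOCK_FROM = submit.engage.example.org
    ALLOW_WRITE = $(ALLOW_WRITE), submit.engage.example.org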

Posted in Uncategorized

PMEMD for OSG Stats

PMEMD for OSG is live. Gratia statistics for January: all runs are 8-way parallel MPI jobs, so we get eight hours of CPU time per hour of wall time.

Posted in Amber9, Compute Grids, condor, Engage VO, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI, Uncategorized

Cluster Aware Amber PMEMD – Beta

There’s a new component that bundles Amber 9 PMEMD for execution in OSG’s emerging HTPC model. It’s a package of binaries and scripts that will eventually hide the details of HTPC job submission. Install: These steps only need to be …

Posted in Uncategorized

The Open Science Grid is a distributed, heterogeneous network of computing clusters. Its infrastructure and protocols allow members to submit high-throughput compute jobs for remote execution. All use is authenticated and authorized via a PKI infrastructure which associates jobs …
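As an illustration of that PKI step, here is a minimal sketch of obtaining and inspecting a VOMS proxy before submitting. It assumes a personal grid certificate is already installed under ~/.globus, and borrows the Engage VO name from the post categories:

    # Create a short-lived proxy credential carrying Engage VO attributes.
    voms-proxy-init -voms Engage

    # Show the proxy's identity, remaining lifetime and VO attributes.
    voms-proxy-info -all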

Posted by stevencox

Engage Submit Host Architecture

Engage users log in to the submit host to run jobs on the OSG. It needs to move to a platform with better administrative support. It’s time to do some archaeology to find out what’s there, then build a new …

Posted in Compute Grids, condor, Engage VO, Globus, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), OSG, RENCI

High Throughput Parallel Molecular Dynamics on OSG

The Goal: RENCI is working with researchers interested in running high-throughput parallel molecular dynamics simulations on OSG. Amber9 PMEMD: The program we’d like to execute is called PMEMD (Particle Mesh Ewald Molecular Dynamics). PMEMD is a high-performance, parallel component of …
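For a sense of what such a run looks like, here is a sketch of an 8-way parallel PMEMD invocation using Amber’s conventional file names; the exact command line and core count on a given OSG site are assumptions, not taken from the post:

    # 8-way parallel PMEMD run with Amber's conventional file names:
    #   -i mdin    MD control input      -p prmtop  topology
    #   -c inpcrd  starting coordinates  -o mdout   output log
    #   -r restrt  restart file          -x mdcrd   trajectory
    mpirun -np 8 pmemd -O -i mdin -p prmtop -c inpcrd -o mdout -r restrt -x mdcrd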

Posted in Amber9, Compute Grids, condor, Continuous Integration (CI), Engage VO, grid, High Throughput Computing (HTC), High Throughput Parallel Computing (HTPC), multicore, OSG, pmemd, RENCI

Virtualizing Engage Central

Well, this is just not going to be an interesting post to most of you. That’s just the way it is sometimes. The OSG Matchmaker is still an important part of the OSG job submission infrastructure for many users. One part …

Posted in Compute Grids, condor, Engage VO, High Throughput Computing (HTC), OSG, RENCI

Continuous Integration Stack

This post has moved to the new continuous integration blog.

Posted in Continuous Integration (CI)