GROMACS

GROMACS is a molecular dynamics code designed for the study of large systems such as proteins, lipids, or nucleic acids. It was installed on Bellatrix in September 2013.

Versions
Version 4.6.3 has been compiled with Intel compilers and the Mvapich2 MPI (there's a version compiled with Intel MPI as well, more info below). This was the latest version and patch level available at the time of installation.

New versions can be installed if needed. Please contact Daniel Jana, the maintainer of GROMACS on Bellatrix, for that.

How to use it
First you have to load the module. You can do it with: $ module load gromacs/4.6.3_intel13_mvapich181

or with: $ module load gromacs/4.6.3_intel13_intelmpi410

The Mvapich2 build is usually slightly faster, so use it by default and fall back to the Intel MPI build only if you run into problems.

The modules set up the right environment to use GROMACS, including the libraries for the compiler and MPI. Once a module is loaded, you use GROMACS as usual. You'll probably want to run something like: $ mdrun -ntmpi 32 -s <your .tpr file>

By default the modules set OMP_NUM_THREADS to 1. Often this is what you need, but if you know what you are doing you can change that value with one of these two:

$ export OMP_NUM_THREADS=x   # to set the value to "x" in bash and related shells
$ setenv OMP_NUM_THREADS x   # to set the value to "x" in csh and related shells

Since parallelization is quite efficient you'll probably want to use a line like:
#PBS -l select=2:ncpus=16:mpiprocs=16:mem=10000mb

at the top of your script. This specific case asks for two chunks, each with 16 CPUs, 16 MPI processes, and 10000 MB of RAM. The job thus requests a total of 32 cores and ~20 GB of RAM. Any of these numbers can be changed depending on your needs.
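Putting the pieces above together, a minimal submission script might look like the following sketch. The .tpr file name is a placeholder, and depending on local policy you may also need a walltime or queue directive:

```shell
#!/bin/bash
#PBS -l select=2:ncpus=16:mpiprocs=16:mem=10000mb

# Load the GROMACS module (Mvapich2 build, as recommended above).
module load gromacs/4.6.3_intel13_mvapich181

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# Same invocation as in the text; topol.tpr is a placeholder input file.
mdrun -ntmpi 32 -s topol.tpr
```

Submit it with $ qsub script.sh and check its status with qstat.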

Do I have access to it?
Since version 4.6.0, GROMACS is licensed under the GNU Lesser General Public License. As such, anyone can use it: there are no restrictions on the use of GROMACS on Bellatrix.