Molpro

Molpro is an ab initio Quantum Chemistry code that focuses on highly accurate levels of theory. It was installed on Bellatrix in October 2013 and on Aries in November 2013, following the purchase of a site license for version 2012.

Versions
The installed version is 2012.1 at patch level 5, the latest version and patch level available at the time of installation. The pre-compiled binaries are not installed because in-house testing revealed them to be substantially slower than the locally compiled version.

Aries: Molpro has been compiled with Intel compilers and Intel MPI.

Bellatrix: Molpro has been compiled with Intel compilers and MVAPICH2 MPI.

New versions can be installed if needed. Please contact Daniel Jana, the maintainer of Molpro on Aries/Bellatrix, for that.

How to use it
First you have to load the module. You can do it with:

$ module load molpro/2012-intel-intelmpi   # For Aries
$ module load molpro/2012-intel-mvapich2   # For Bellatrix

This sets the right environment to use Molpro, including the modules for the compiler and MPI. Molpro itself is not directly accessible; instead, a wrapper has been created to set some basic variables. The standard way to run Molpro, depending on the cluster, is:

$ molpro.aries       # On Aries
$ molpro.bellatrix   # On Bellatrix

Note the absence of the number of CPUs (which you would typically set with "-n #cores"): the script takes care of that for you. In fact, trying to use the -n option will make the script exit prematurely. This prevents a common mistake: asking the scheduler for two entire nodes but telling Molpro to use only 16 cores (one node), which leaves an entire node allocated but idle. Most of the other options you would typically pass to Molpro are still available (e.g. "--backup", "-o").

This script can only be run inside a PBS job.

Since parallelization is quite efficient you'll probably want to use a line like:
#PBS -l select=2:ncpus=16:mpiprocs=16:mem=10000mb

at the top of your script. This example requests two chunks of 16 CPUs with 16 MPI processes and 10000 MB of RAM each; the job therefore asks for a total of 32 cores and ~20 GB of RAM. Any of these numbers can be changed, depending on your needs.
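Putting the pieces together, a complete submission script might look like the following sketch. The walltime value and the input file name (input.inp) are illustrative assumptions, not site requirements; the module and wrapper names shown are the Bellatrix ones.

```shell
#!/bin/bash
#PBS -l select=2:ncpus=16:mpiprocs=16:mem=10000mb
#PBS -l walltime=24:00:00

# Load the Molpro module (Bellatrix variant;
# use molpro/2012-intel-intelmpi on Aries).
module load molpro/2012-intel-mvapich2

# Run from the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Run Molpro through the wrapper. Note: no -n option;
# the wrapper derives the core count from the PBS allocation.
molpro.bellatrix input.inp
```

Submit it with qsub as usual; on Aries, swap in the Aries module and the molpro.aries wrapper.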

In order to keep the temporary files separate from the rest of your job files, a directory with the following logic is created:

/scratch/$USER/pbs.$PBS_JOBID    # resulting in something like /scratch/myusername/pbs.123456.bellatrix (or aries)

This directory is not deleted at the end of the job (in case you need to extract something from the temporary files), so you have to do your own cleanup.
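If you do not need the temporary files, a cleanup step at the end of your job script could look like this sketch. The path pattern follows the layout described above; bear in mind that this removes the files permanently.

```shell
# Remove the Molpro scratch directory once anything useful has
# been copied out. $PBS_JOBID is set by the scheduler inside a job.
scratch_dir="/scratch/$USER/pbs.$PBS_JOBID"
if [ -d "$scratch_dir" ]; then
    rm -rf "$scratch_dir"
fi
```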

Do I have access to it?
It is available to all members of the molpro-soft group. You can check whether you are a member of the group by using the command "id". You will see something like:

$ id
uid=33333(myusername) gid=10111(myunit) groups=65115(dit-clusters-users),65359(bellatrix-users),65845(molpro-soft)

You may have a different set of groups, but molpro-soft must be among them. Trying to load the module without belonging to the group will result in a warning message.
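Rather than scanning the full "id" output by eye, you can test for the group directly with standard id and grep options:

```shell
# Print whether the current user belongs to the molpro-soft group.
# id -nG lists group names; grep -qw matches the whole word quietly.
if id -nG | grep -qw molpro-soft; then
    echo "molpro-soft: yes"
else
    echo "molpro-soft: no"
fi
```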

How to get the right to use it?
If you are not part of the molpro-soft group, contact Daniel Jana. He will let you know how to proceed.