Submitting Jobs to Artemis

Artemis uses a modified version of PBS Pro to manage and queue jobs. Some minimal example scripts are provided below.

Serial jobs (1 core)

An example single core script:

#!/bin/bash
#PBS -P PANDORA                   # your project name
#PBS -l select=1:ncpus=1:mem=4GB  # select one chunk with 1 core and 4 GB memory
#PBS -l walltime=10:00:00         # maximum wall-clock time your job can run for

cd $PBS_O_WORKDIR
my_program > results.out

Save this script to a file (for example, MyScript.pbs), then submit it to the PBS scheduler using qsub:

[abcd1234@login1 ~] qsub MyScript.pbs
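
qsub prints the ID of the newly submitted job. You can then monitor your queued and running jobs with qstat and remove a job with qdel, both standard PBS commands (the job ID below is only a placeholder):

[abcd1234@login1 ~] qstat -u abcd1234
[abcd1234@login1 ~] qdel 123456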

There are other #PBS directives that you may optionally use. For a full list, see the PBS Professional user manual.
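
For example, the following optional directives are part of standard PBS Pro; the job name and email address shown are placeholders, not Artemis-specific values:

#PBS -N MyJobName                 # a readable name for the job in the queue
#PBS -M your.email@example.com    # where to send job notifications (placeholder address)
#PBS -m abe                       # email when the job aborts (a), begins (b) and ends (e)
#PBS -j oe                        # merge standard error into the standard output file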

Parallel jobs using OpenMP

OpenMP is a shared-memory parallelism model, so OpenMP-parallelised programs can only run within a single chunk (a “chunk” is a virtual node), which provides up to 24, 32 or 64 cores depending on the node type.

#!/bin/bash
#PBS -P PANDORA                       # your project name
#PBS -l select=1:ncpus=4:mem=4GB      # select one chunk with 4 cores and 4 GB memory
#PBS -l walltime=10:00:00             # maximum wall-clock time your job can run for

cd $PBS_O_WORKDIR
my_program > results.out

Note

OMP_NUM_THREADS is automatically set to the number of cores per chunk on Artemis, so you don’t have to manually set the value of OMP_NUM_THREADS in your PBS script.
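
If you prefer to set it explicitly anyway, for instance to keep the script portable to other PBS systems, a minimal sketch is shown below. It assumes the NCPUS environment variable, which PBS Pro normally sets inside a job to the ncpus value of the chunk:

cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=$NCPUS     # assumes PBS Pro has set NCPUS to the chunk's ncpus value
my_program > results.out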

Parallel jobs using MPI

MPI (Message Passing Interface) is a distributed-memory parallelism model that allows a job's processes to communicate across multiple chunks and nodes. A basic MPI job script is shown below.

#!/bin/bash
#PBS -P PANDORA                                 # your project name
#PBS -l select=1:ncpus=4:mpiprocs=4:mem=4GB     # select one chunk with 4 cores, all of which are available to MPI, and 4 GB memory
#PBS -l walltime=10:00:00                       # maximum wall-clock time your job can run for

cd $PBS_O_WORKDIR                               # change to current working directory
module load siesta                              # Also automatically loads the intel-mpi module
mpirun -np 4 siesta < h2o.fdf > h2o.out

The above job script requests one chunk of compute resources with 4 cores and 4 GB of memory. All 4 cores are made available to MPI via the mpiprocs=4 option. It is recommended to set ncpus and mpiprocs to the same value unless you have a specific reason to set them differently.
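
One common case for setting them differently is a hybrid MPI/OpenMP job. The sketch below assumes your program supports both models (my_hybrid_program is a placeholder) and uses ompthreads, a standard PBS Pro chunk resource, to request 2 MPI ranks with 4 OpenMP threads each:

#!/bin/bash
#PBS -P PANDORA
#PBS -l select=1:ncpus=8:mpiprocs=2:ompthreads=4:mem=8GB   # 8 cores: 2 MPI ranks x 4 OpenMP threads
#PBS -l walltime=10:00:00

cd $PBS_O_WORKDIR
mpirun -np 2 my_hybrid_program > results.out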

Note

Load the correct MPI implementation for your program. Most application modules automatically load the appropriate MPI module for you. If you are compiling your own MPI programs, or you know your application works with an alternative implementation, you can load an MPI module yourself.
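
As a rough sketch, compiling and running your own MPI program might look like the following. The module name openmpi-gcc and the source file are placeholders; check module avail for the MPI modules actually installed on Artemis:

module load openmpi-gcc                         # placeholder module name; check "module avail"
mpicc my_mpi_program.c -o my_mpi_program        # mpicc is the MPI C compiler wrapper
mpirun -np 4 ./my_mpi_program > results.out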

With MPI, you can request more than one chunk of compute resources:

#!/bin/bash
#PBS -P PANDORA                                # your project name
#PBS -l select=5:ncpus=8:mpiprocs=8:mem=4GB    # select 5 chunks with 8 cores and 4 GB memory *per chunk*
#PBS -l walltime=10:00:00                      # maximum wall-clock time your job can run for

cd $PBS_O_WORKDIR                              # change to current working directory
module load siesta                             # Also automatically loads the intel-mpi module
mpirun -np 40 siesta < h2o.fdf > h2o.out

This script requests 5 chunks, each with 8 cores (all available to MPI) and 4 GB of memory, so your job will be allocated 40 cores and 20 GB of memory in total.
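
Rather than hard-coding the process count in mpirun, you can derive it from the node file PBS writes for the job. This sketch assumes the usual PBS Pro behaviour where $PBS_NODEFILE contains one line per MPI rank (the sum of mpiprocs over all chunks):

cd $PBS_O_WORKDIR
module load siesta
NP=$(wc -l < $PBS_NODEFILE)                    # total MPI ranks across all chunks
mpirun -np $NP siesta < h2o.fdf > h2o.out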