Job Queues

The qstat -Q command shows a summary of all Artemis queues. Not all of them can be submitted to directly: some are reserved for users with strategic allocations, and many are only reachable via defaultQ. After submission, your job will usually be routed from defaultQ to one of these “internal” queues, depending on the resources it requests.

Queues are requested using the following PBS directive:

#PBS -q Queue

where Queue is the name of the queue you wish to submit your job to. For example, to run your job in defaultQ, you would use:

#PBS -q defaultQ
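For context, the queue directive normally sits alongside the other PBS directives at the top of a job script. A minimal sketch is shown below; PANDORA, the resource requests and my_program are placeholders, so substitute your own project name, requirements and workload:

```shell
#!/bin/bash
#PBS -P PANDORA                      # placeholder: your abbreviated project name
#PBS -q defaultQ                     # queue to submit to
#PBS -l select=1:ncpus=1:mem=4GB     # 1 chunk, 1 core, 4 GB memory
#PBS -l walltime=1:00:00             # maximum run time of 1 hour

cd "$PBS_O_WORKDIR"    # change to the directory the job was submitted from
./my_program           # placeholder for your actual workload
```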

Queues available to all users

The queues available to all Artemis users are listed below:


defaultQ

If you don’t specify a queue, your job will be placed in defaultQ. Once there, it will be routed to one of the following internal queues based on the resources requested in your PBS script:

  • Small
  • Normal
  • Large
  • highmem (High Memory)
  • GPU

For details about how jobs are routed to these internal queues, see the queue resource limits table.


small-express

An “express” queue for small jobs. These jobs typically start quickly, but increase your fair share weight 5 times faster than Small, Normal or Large jobs in defaultQ. This queue is useful for quickly testing your jobs before submitting a longer job to defaultQ.
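As a sketch, a short test run might be submitted to small-express as follows; the project name, resources and command are placeholders:

```shell
#!/bin/bash
#PBS -P PANDORA                  # placeholder project name
#PBS -q small-express            # express queue for small test jobs
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=0:10:00         # short test run, well under the 12 hour limit

cd "$PBS_O_WORKDIR"
./my_program --test              # placeholder: a quick sanity check of your workflow
```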


scavenger

A low priority queue that runs jobs on idle resources in strategic allocations. If your job completes, it is “free”, in that it will not increase your project’s fair share value. The downside is that there is no assurance your job will finish. If a strategic allocation user submits a job that requires resources a scavenger job is using, the scavenger job is suspended. If the scavenger job cannot be resumed within half its requested walltime, it is terminated and all unsaved progress is lost.
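Because scavenger jobs can be suspended or terminated at any time, they suit workloads that save progress regularly. A sketch, with placeholder project name and program (the checkpoint flag stands in for whatever restart mechanism your software provides):

```shell
#!/bin/bash
#PBS -P PANDORA                   # placeholder project name
#PBS -q scavenger                 # run on idle strategic-allocation resources
#PBS -l select=1:ncpus=4:mem=16GB
#PBS -l walltime=24:00:00

cd "$PBS_O_WORKDIR"
# Prefer software that writes periodic checkpoints, so progress
# survives if the job is suspended and then terminated.
./my_program --checkpoint-interval 30   # placeholder flag; depends on your software
```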


dtq

This queue grants access to nodes dedicated to input/output (I/O) jobs. Use it to copy data between Artemis, the Research Data Store and other external locations, for other I/O-intensive tasks such as compressing and archiving data, or simply for moving or copying data around Artemis. Jobs run in this queue will not increase your fair share weight, but we reserve the right to terminate compute jobs running in it. For more information about using this queue, see the Data Transfer Queue section.
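For example, a data-transfer job that archives results and copies them to project storage might look like the sketch below. The project name, directory names and destination path are all placeholders, not actual Artemis or Research Data Store locations:

```shell
#!/bin/bash
#PBS -P PANDORA                   # placeholder project name
#PBS -q dtq                       # data transfer queue: I/O work only
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=2:00:00

cd "$PBS_O_WORKDIR"
# Compress the results, then copy the archive to storage
# (both paths below are placeholders)
tar -czf results.tar.gz results/
cp results.tar.gz /path/to/project/storage/
```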


interactive

This queue is for running interactive (command line) sessions on a compute node. Interactive sessions can only be requested using the qsub -I command from the command line; you cannot request an interactive session from a job script. This queue attracts a fair share weighting 10 times higher than standard memory jobs in defaultQ. An example command to submit a job to the interactive queue is:

qsub -I -P PANDORA -l select=1:ncpus=1:mem=4GB,walltime=1:00:00

Remember to replace PANDORA with your abbreviated project name.


GPU

This queue is for performing computations with programs that use GPUs (Graphics Processing Units). To use GPUs, request ngpus in your select directive and submit your job to defaultQ:

#PBS -l select=1:ncpus=1:ngpus=1

ngpus should be between 1 and 4 per chunk, as each GPU node has a maximum of 4 GPUs.


The fair share weighting of this queue is 5 times higher than that of standard jobs in defaultQ.
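Putting these pieces together, a GPU job script might look like the following sketch. The project name, module name and program are placeholders; load whichever GPU toolkit your software actually needs:

```shell
#!/bin/bash
#PBS -P PANDORA                            # placeholder project name
#PBS -q defaultQ                           # GPU jobs are routed via defaultQ
#PBS -l select=1:ncpus=1:ngpus=1:mem=16GB  # 1 GPU in this chunk (max 4 per chunk)
#PBS -l walltime=4:00:00

cd "$PBS_O_WORKDIR"
module load cuda                           # placeholder: your GPU toolkit module
./my_gpu_program                           # placeholder workload
```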

Queue resource limits

Queue          Max Walltime  Max Cores per Job  Max Cores per User  Memory per Node (GB)  Memory per Core (GB)  Fair Share Weight
Small          1 day         24                 120                 < 123                 < 20                  10
Normal         7 days        96                 120                 < 123                 < 20                  10
Large          21 days       288                288                 < 123                 < 20                  10
High Memory    21 days       192                192                 123 to 6144           > 20                  50
GPU            7 days        252                252                 < 185                 N/A                   50
small-express  12 hours      4                  40                  < 123                 N/A                   50
scavenger      2 days        288                288                 < 123                 N/A                   0
dtq            10 days       2                  16                  < 16                  N/A                   0
Interactive    4 hours       4                  4                   < 123                 N/A                   100
  • The Small, Normal, Large, High Memory and GPU queues are all accessed via defaultQ. You cannot directly request these queues.
  • The interactive queue is requested using the qsub -I command via the command line. You cannot request interactive access with #PBS -q interactive.
  • The maximum number of jobs a user can have queued is 200 in defaultQ, and 10 in small-express.
  • The maximum number of cores one user can simultaneously use is 600.
  • Array jobs are limited to 1000 elements.
  • N/A = Not Applicable.

Strategic Allocations

Some researchers have access to dedicated compute resources on Artemis. Only members of the Chief Investigator’s projects may submit jobs to these queues. Scavenger jobs will run on idle resources in these queues.

Queue     Chief Investigator
alloc-dh  Professor David Hensher
alloc-jr  Professor John Rasko
alloc-nw  Dr. Nicholas Williamson
alloc-am  Dr. Alejandro Montoya
alloc-md  Associate Professor Meredith Jordan
alloc-mc  Associate Professor Matthew Cleary
alloc-dm  Professor Dietmar Muller
alloc-eh  Professor Edward Holmes


Users with strategic allocations are also free to request that their jobs run in any other publicly accessible queue, in addition to their strategic allocation.


Condominiums

Some researchers have access to condominium compute resources. Only projects nominated by the relevant condominium owner may submit jobs to these queues.

Queue        Condominium Owner
condo-civil  Associate Professor Luming Shen
alloc-dt     Professor Dacheng Tao
alloc-op     Professor Olivier Piguet


Users with access to condominiums are also free to request that their jobs run in any other publicly accessible queue, in addition to their condominium.