Job Queues

The qstat -Q command shows a summary of all Artemis queues. Not all of these can be submitted to directly; some are reserved for users with strategic allocations, and many are only accessible via defaultQ. After submission, your job will usually be routed to one of these “internal” queues based on the resources it requests.
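
For example, you can view this summary from an Artemis login node with:

qstat -Q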

Queues are requested using the following PBS directive:

#PBS -q Queue

Where Queue is the name of the queue you wish to submit your job to. For example, if you want to run your job in “defaultQ”, you would use:

#PBS -q defaultQ

Queues available to all users

The queues available to all Artemis users are listed below:

defaultQ

If you don’t specify a queue, your job will be placed in this queue. Once placed in this queue, your job will be routed to one of the following internal queues based on the resources requested in your PBS script:

  • Small
  • Normal
  • Large
  • highmem (High Memory)
  • GPU

For details about how jobs are routed to these internal queues, see the queue resource limits table.
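
For example, the following minimal script requests resources that fall within the Normal queue’s limits (PANDORA is a placeholder for your abbreviated project name, and my_program stands in for your own executable):

#!/bin/bash
#PBS -P PANDORA
#PBS -q defaultQ
#PBS -l select=1:ncpus=4:mem=16GB
#PBS -l walltime=48:00:00

cd $PBS_O_WORKDIR
./my_program

Because the 48-hour walltime exceeds the Small queue’s 1-day limit, this job would be routed to the Normal queue.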

small-express

An “express” queue for small jobs. These jobs typically start quickly, but they increase your fair share weight 5 times faster than Small, Normal or Large jobs in defaultQ. This queue is useful for quickly testing a job before submitting a longer run to defaultQ.
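
For example, a quick test job could be submitted with directives such as these (the resource values are illustrative and must fit within the small-express limits in the table below):

#PBS -q small-express
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=1:00:00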

scavenger

A low-priority queue that runs jobs on idle resources in strategic allocations. If your job completes, it is “free”, in that it does not increase your project’s fair share value. The downside is that there is no guarantee your job will finish. If a strategic allocation user submits jobs to their allocation and requires resources that a scavenger job is using, the scavenger job will be suspended. If a suspended scavenger job cannot be resumed within half its requested walltime, it will be terminated and all unsaved progress will be lost.
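
If that risk is acceptable, request the queue directly; the resource values below are illustrative, and because scavenger jobs may be suspended or terminated, the queue best suits work that saves its progress regularly:

#PBS -q scavenger
#PBS -l select=1:ncpus=8:mem=32GB
#PBS -l walltime=24:00:00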

dtq

This queue grants access to nodes dedicated to input/output (I/O) jobs and can be used to copy data between Artemis, the Research Data Store and other external locations. It can also be used for other I/O intensive tasks, like compressing and archiving data, or simply for moving or copying data around Artemis. Jobs run in this queue will not increase your fair share weight, but we reserve the right to terminate compute jobs running in this queue. For more information about using this queue, see the Data Transfer Queue section.
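
As an illustrative sketch, a dtq job that copies results elsewhere might look like the following (the project name and both paths are placeholders for your own):

#!/bin/bash
#PBS -P PANDORA
#PBS -q dtq
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=2:00:00

cp -r /path/to/source /path/to/destination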

interactive

This queue is for running interactive (command line) sessions on a compute node. Interactive sessions can only be requested using the qsub -I command from the command line. You cannot request an interactive session from a job script. This queue attracts a fair share weighting 10 times higher than standard memory jobs in defaultQ. An example command to submit a job to the interactive queue is:

qsub -I -P PANDORA -l select=1:ncpus=1:mem=4GB,walltime=1:00:00

remembering to replace PANDORA with your abbreviated project name.

gpu

This queue is for performing computations with programs that use GPUs (Graphics Processing Units). To use GPUs, request gpus in the select directive, and submit your job to defaultQ:

#PBS -l select=1:ncpus=1:ngpus=1

ngpus should be 1 or 2 per chunk, as each GPU node has a maximum of 2 GPUs.
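
Putting this together, a minimal GPU job script might look like the following (the project name, memory, walltime and program are illustrative placeholders):

#!/bin/bash
#PBS -P PANDORA
#PBS -q defaultQ
#PBS -l select=1:ncpus=1:ngpus=1:mem=16GB
#PBS -l walltime=4:00:00

cd $PBS_O_WORKDIR
./my_gpu_program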

Note

The fair share weighting of this queue is 5 times higher than defaultQ.

Queue resource limits

Queue           Max Walltime   Max Cores per Job   Max Cores per User   Memory per Node (GB)   Memory per Core (GB)   Fair Share Weight
Small           1 day          24                  96                   < 123                  < 20                   10
Normal          7 days         96                  96                   < 123                  < 20                   10
Large           21 days        288                 288                  < 123                  < 20                   10
High Memory     21 days        192                 192                  123 to 6144            > 20                   50
GPU             7 days         120                 120                  < 123                  N/A                    50
small-express   12 hours       4                   40                   < 123                  N/A                    50
scavenger       2 days         288                 288                  < 123                  N/A                    0
dtq             10 days        2                   16                   < 16                   N/A                    0
Interactive     4 hours        4                   4                    < 123                  N/A                    100
  • The Small, Normal, Large, High Memory and GPU queues are all accessed via defaultQ. You cannot directly request these queues.
  • The interactive queue is requested using the qsub -I command via the command line. You cannot request interactive access with #PBS -q interactive.
  • The maximum number of jobs a user can have queued is 200 in defaultQ, and 10 in small-express.
  • The maximum number of cores one user can simultaneously use is 600.
  • Array jobs are limited to 1000 elements; a minimal example is shown after this list.
  • N/A = Not Applicable.
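
For instance, assuming PBS Professional’s job array syntax, a 100-element array job (well within the 1000-element limit) can be requested with the -J directive, and $PBS_ARRAY_INDEX identifies each element at run time (the project name, resources and program are illustrative placeholders):

#!/bin/bash
#PBS -P PANDORA
#PBS -q defaultQ
#PBS -J 1-100
#PBS -l select=1:ncpus=1:mem=4GB
#PBS -l walltime=1:00:00

./my_program input.$PBS_ARRAY_INDEX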

Strategic Allocations

Some researchers have access to dedicated compute resources on Artemis. Only members of the Chief Investigator’s projects may submit jobs to these queues. Scavenger jobs will run on idle resources in these queues.

Queue      Chief Investigator
alloc-dh   Professor David Hensher
alloc-jr   Professor John Rasko
alloc-nw   Dr. Nicholas Williamson
alloc-am   Dr. Alejandro Montoya
alloc-md   Associate Professor Meredith Jordan
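
For example, a member of one of Professor Hensher’s projects would request that allocation with:

#PBS -q alloc-dh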

Note

Users with strategic allocations are also free to run their jobs in any other publicly accessible queue in addition to their strategic allocation.

Condominiums

Some researchers have access to condominium compute resources. Only projects nominated by the relevant condominium owner may submit jobs to these queues. Scavenger jobs do not run in these queues.

Queue         Condominium Owner
condo-civil   Associate Professor Luming Shen
alloc-fs      Dr. Fatemeh Salehi
alloc-op      Professor Olivier Piguet

Note

Users with access to condominiums are also free to run their jobs in any other publicly accessible queue in addition to their condominium.