The Lisa system allows users to submit jobs to both "shared" partitions, where they use a subset of the resources of a node, and "exclusive" partitions, where a full node is allocated to the job. This page explains the partitions available to users and how usage of each is accounted. Please refer to the user guide for a general introduction to partition usage and to submitting jobs with SLURM.

Lisa partitions

Compute nodes are grouped into partitions, allowing you to select the hardware your job runs on. Each partition includes a subset of nodes, with a specific maximum wall time and a different type of hardware. The table below summarises the partitions available on Lisa; a partition is selected with the SLURM option:

#SBATCH --partition=<partition name>

Because of the heterogeneous hardware of the Lisa system, partitions may contain different node types. A specific node type within a partition can be requested using the SBATCH options --constraint or --gres. The available node features for each of the partitions on Lisa are listed in the table below.

| Partition name             | Node type(s)                  | Available node features     | Smallest possible allocation | Max wall time  | Notes                                                         |
|----------------------------|-------------------------------|-----------------------------|------------------------------|----------------|---------------------------------------------------------------|
| shared                     | silver_4110                   | --constraint=silver_4110    | 1 core                       | 120 h (5 days) |                                                               |
|                            | gold_6130                     | --constraint=gold_6130      |                              |                |                                                               |
|                            | gold_6230R                    | --partition=shared_52c_384g |                              |                |                                                               |
| normal                     | silver_4110                   | --constraint=silver_4110    | 1 node                       | 120 h (5 days) |                                                               |
|                            | gold_6130                     | --constraint=gold_6130      |                              |                |                                                               |
| fat                        | gold_6126                     |                             | 1 node                       | 24 h (1 day)   |                                                               |
| gpu                        | bronze_3104 + GeForce 1080Ti  |                             | 1 node                       | 120 h (5 days) |                                                               |
| gpu_shared                 | bronze_3104 + GeForce 1080Ti  |                             | 1/4 of the node, 1 GPU       | 120 h (5 days) |                                                               |
| gpu_titanrtx               | gold_5118 + Titan RTX         |                             | 1 node                       | 120 h (5 days) |                                                               |
| gpu_titanrtx_shared        | gold_5118 + Titan RTX         |                             | 1/4 of the node, 1 GPU       | 120 h (5 days) |                                                               |
| gpu_shared_course          | bronze_3104 + GeForce 1080Ti  |                             |                              | 48 h (2 days)  | Only available during courses                                 |
| gpu_titanrtx_shared_course | gold_5118 + Titan RTX         |                             |                              | 48 h (2 days)  | Only available during courses                                 |
| gpu_shared_education       | bronze_3104 + GeForce 1080Ti  |                             |                              | 48 h (2 days)  | Only available during courses                                 |
| fat_soil_shared            | gold_6230 + Titan RTX         |                             | 1/4 of the node, 1 GPU       | 120 h (5 days) | Contact the service desk to request access to this partition. |
| gpu_shared_jupyter         | bronze_3104 + GeForce 1080Ti  |                             |                              | 2 h            | Access available only through the Jupyter hub.                |
| shared_jupyter             | silver_4110                   |                             |                              | 2 h            | Access available only through the Jupyter hub.                |
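
For example, a minimal job script header that selects the normal partition and restricts the job to gold_6130 nodes could look as follows (a sketch; the resource values and the program name are illustrative):

#!/bin/bash
#SBATCH --partition=normal         # exclusive partition: whole nodes are allocated
#SBATCH --constraint=gold_6130     # request the gold_6130 node type
#SBATCH --nodes=1                  # the smallest possible allocation in "normal"
#SBATCH --time=04:00:00            # requested wall time, at most 120 h in this partition

srun my_application                # placeholder for your own program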

Accounting

Each account on Lisa has an associated budget. An account can have one or more logins, each with its own username and password. If multiple logins are associated with one account, they all share the same budget.

When you run a job on Lisa, the cost of this job is deducted from your budget. Depending on your affiliation and the type of resources, this budget may or may not have a limit:

  • UvA and VU users: there is no limit on your CPU budget.
  • Other users (NCF, FOM, etc): your CPU budgets have limits.
  • All GPU accounts have budget limits.

The unit of accounting is the SBU, which stands for System Billing Unit. The cost of a job in SBUs is equal to the number of wall clock hours the job actually uses (not the estimated time set in the job script), multiplied by the number of allocated cores and a weight factor. For example, suppose you submit a job script reserving 6 nodes with 16 cores each, with an estimated wall clock time of 4 hours and a weight of 1. Your job finishes after 2 hours and 30 minutes. The cost of your job will then be

6 x 16 x 2.5 x 1.0 = 240 SBU
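
A job script matching this example could look as follows (a sketch; my_program is a placeholder):

#!/bin/bash
#SBATCH --nodes=6                  # 6 nodes reserved
#SBATCH --ntasks-per-node=16       # 16 cores per node
#SBATCH --time=04:00:00            # estimated wall clock time of 4 hours

srun my_program                    # if this finishes after 2.5 hours, the job costs 240 SBU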

Since accounting is based on the actual run time of a job, and not on the estimated (wall clock) run time you put in the job script, there is no penalty for specifying a liberal wall clock time in your job script. The only downside is that your job may start later, because it is more difficult for the scheduler to find a slot in which to execute the job.


The table below shows the "SBU pricing" of core hours for the various node types. 

| Node type                    | # cores per node | Available memory per node | SBUs per 1 hour, full node |
|------------------------------|------------------|---------------------------|----------------------------|
| bronze_3104 + GeForce 1080Ti | 12               | 251 GiB                   | 42.1                       |
| gold_5118 + Titan RTX        | 24               | 187 GiB                   | 91.2                       |
| gold_6130                    | 16               | 91 GiB                    | 16.0                       |
| silver_4110                  | 16               | 91 GiB                    | 16.0                       |
| gold_6126                    | 48               | 2 TiB                     | 42.1                       |
| gold_6230R                   | 52               | 293.5 GiB                 | 52.0                       |


Shared node accounting

For partitions that allow shared usage (e.g. "shared"), budget accounting is based on the largest fraction of the resources (CPUs, memory, or GPUs) reserved for the job. For example, if a job requests half of the memory of a node but only a single core, the accounting system will budget the job for half-node utilisation. The same applies to the GPU nodes: requesting half of the GPUs available on a node budgets half of the node's SBUs for the duration of the job (see the table above for SBU costs per node).
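
For example, on the gpu_shared partition a job can request a single GPU, i.e. a quarter of a bronze_3104 node (a sketch; per the pricing table above, this would be budgeted at roughly a quarter of 42.1 SBU per hour):

#SBATCH --partition=gpu_shared     # shared GPU partition
#SBATCH --gres=gpu:1               # request 1 of the node's 4 GPUs
#SBATCH --time=01:00:00            # 1 hour, budgeted at 42.1 / 4 ≈ 10.5 SBU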


Costs of inefficient use

You will be charged for all cores in the node(s) that you reserved, regardless of the number of cores actually used by the job or application. So if your application uses only a few (or even just one) of the CPU cores of a node, it makes sense to write a job script that runs multiple instances of the application in parallel, in order to fully utilize the reserved resources, as sketched below.
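
As a sketch, a job script along these lines runs one instance of a single-core application per core on a 16-core node (my_program and its input files are placeholders):

#!/bin/bash
#SBATCH --partition=normal
#SBATCH --nodes=1                  # the full node is reserved (and charged)
#SBATCH --time=04:00:00

# Start one instance per core in the background, each with its own input file.
for i in $(seq 1 16); do
    my_program input_$i &
done
wait                               # wait for all background instances to finish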

Getting account and budget information

You can view your account details using

accinfo

This shows information such as the e-mail associated with the account, the initial and remaining budget, and until when the account is valid.

An overview of the SBU consumption can be obtained with

accuse

By default, consumption is shown for the current login, per month, over the last year. Per-day usage can be obtained by adding the -d flag. The start and end of the period shown in the overview can be changed with the -s DD-MM-YYYY and -e DD-MM-YYYY flags, respectively. Finally, consumption for a specific account or login can be obtained with -a accountname or -u username, respectively.
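
For example, to show per-day usage for a specific login over a given period (the username and dates are illustrative):

accuse -d -s 01-01-2023 -e 31-01-2023 -u jdoe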