Lisa end-of-life

After almost two decades of very fruitful production, the Lisa cluster will be decommissioned. In the coming months we will first phase out the GPU nodes of Lisa. We expect the CPU nodes to follow in the first half of 2023. A more detailed timetable is provided below, and will be continuously updated as the migration planning is refined.

Although the Lisa hardware is being decommissioned, we will continue to provide Lisa services on the Snellius infrastructure. Our aim is to migrate most Lisa users to Snellius and to provide a work environment that is as similar as possible.

Accounts of most (but not all) existing Lisa users will be migrated to the Dutch National supercomputer Snellius, also hosted and maintained by SURF.

Please note that existing Lisa users will themselves be responsible for migrating their data (and any ACLs) to their new Snellius account.

This page contains important information about the migration for existing Lisa users; please read it carefully. Also, check back regularly for updated information.

Last updated 22-12-2022, 12:13

Migration process

Which users?

In the current migration phase (Q4 2022) only GPU node users from specific institutes will be migrated. The reason is that these institutes have co-invested in Snellius GPU nodes, in order to provide continued GPU access to their users. As part of this investment, 36 extra GPU nodes are added to Snellius to accommodate the extra users coming from Lisa.

The timetable below specifies the impact on GPU users (planning for CPU users follows in 2023):

| User affiliation | Impact on existing Lisa GPU users |
| --- | --- |
| Netherlands Cancer Institute (GPU) | Will be migrated in 2023 |
| University of Amsterdam (GPU) | Migration already underway, will be finished before the end of 2022. Only GPU users that were active in the past 4 months. |
| Eindhoven University of Technology (GPU) | Will be migrated in 2023 |
| Donders Institute (GPU) | Will be migrated in 2023 |
| Other institutes (GPU) | GPU users will NOT be migrated. These users will continue to have access to the Lisa GPU nodes until the end of their project/account, and have to make their own arrangements for an alternative to Lisa. |
| CPU users | Migration planning will follow in 2023 |

Note that no new RCCS, EINF and NWO projects for Lisa CPU or GPU nodes will be granted for 2023.

Steps

  1. SURF will create an account on Snellius for you, and provide you with access details by e-mail.
  2. If needed, register your IP address with our Service Desk, as Snellius uses IP allow-listing. This means that unless your IP is registered, you will not be able to log in to Snellius through SSH. By default we already allow certain IP ranges, so please try to log in first before contacting the Service Desk. See this FAQ entry for more information.
  3. Once you have access to Snellius, you won't have access to Lisa anymore. You will then have to adapt your personal configuration to the Snellius environment (.bashrc, scratch locations, etc.).

    Don't copy your .bashrc and/or .bash_profile from your Lisa home directory to your Snellius home directory without checking them and updating them where needed.

  4. Once logged in on Snellius, you can migrate your Lisa data to your new personal directory. The Lisa file systems will be mounted on Snellius as a read-only mount until 31-3-2023. You can copy your Lisa data to your Snellius homedir or project space. See below for details.
  5. Check and adapt your batch scripts to the Snellius Slurm configuration (mainly the partition names and the cores-per-node parameters).
  6. Reconfigure any user-defined ACLs on Snellius, if necessary.
  7. Once your Lisa account expires, the data on Lisa will be deleted after 15 weeks (in accordance with our data retention policy).
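As an illustration of step 5, below is a minimal batch script in the form Snellius expects. The partition name (gpu) and the 5-day wallclock limit come from this page; the job name, module version and program are placeholders that you will need to replace with your own.

```shell
#!/bin/bash
#SBATCH --job-name=my_job        # placeholder job name
#SBATCH --partition=gpu          # Snellius GPU partition (replaces e.g. gpu_shared on Lisa)
#SBATCH --gpus=1
#SBATCH --cpus-per-task=18       # check the Snellius cores-per-node settings
#SBATCH --time=01:00:00          # the gpu partition allows max. 5 days

# Placeholder module and program names: replace with your own.
module load 2022
srun ./my_program
```

Compare the partition names and core counts against your Lisa scripts before submitting.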

Time table

This is a provisional timeline that will be updated as the migration planning is finalized. Please note that users of the Lisa GPU partitions are migrated first, followed by the users of the Lisa CPU partitions.

| Date | Activity | Status |
| --- | --- | --- |
| Q4 2022 | Migration of Lisa GPU users to Snellius | In progress |
| November 1 | Installation of 18 new Snellius GPU nodes | In production |
| November | Lisa GPU users migrate to Snellius (initial pilot group) | First users have access |
| December | Lisa GPU users migrate to Snellius (all remaining users) | In progress |
| December 1 | Installation of 18 new Snellius GPU nodes | In production before 31-12-2022 |
| First half of 2023 | Migration of Lisa CPU users to Snellius | |

Transferring your data from Lisa to Snellius


Transferring your Lisa data to Snellius will be your own responsibility.

To facilitate data migration, the home filesystems of Lisa will be mounted on Snellius in "read-only" mode. So, once you have access to Snellius, you can copy the files that you want to retain from the Lisa read-only mount (/lisa_migration/home/$USER) to your regular Snellius homedir. For the copying you can use the usual Linux commands (like cp or rsync). This is also a good opportunity to clean up your home directory, to reduce the amount of storage used.
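For example, using cp -a (rsync -av works the same way). The directories below are temporary stand-ins so the commands can be tried safely anywhere; on Snellius the source would be /lisa_migration/home/$USER and the destination your Snellius home directory.

```shell
# Stand-in directories; on Snellius, use /lisa_migration/home/$USER as the
# source and your Snellius home directory as the destination.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "simulation results" > "$SRC/output.dat"

# -a copies recursively while preserving permissions and timestamps.
cp -a "$SRC/." "$DST/"
# Equivalent with rsync: rsync -av "$SRC/" "$DST/"

ls "$DST"    # output.dat
```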

A number of Lisa users also had access to a project space on Lisa. These project spaces are also mounted (read-only) on Snellius, at /lisa_migration/project/<project-space-name>.

Note that we still have to work out an (administrative) solution for requesting project space with read-write access on Snellius. For the time being, this means you can only read from the Lisa project spaces: copy input files from the mounted Lisa project directory to /scratch-shared/$USER on Snellius (using an interactive node) to prepare your job input, and use them in your computations from there. The resulting job output has to be stored elsewhere, for instance in your home directory. Note that the home directory has a quota of 200 GB. If you expect (much) larger output sizes, please contact the Service Desk to discuss a (temporary) solution.
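The workflow above, sketched with temporary stand-in directories so it can be tried anywhere; on Snellius the project mount would be /lisa_migration/project/<project-space-name>, the scratch location /scratch-shared/$USER, and the final destination your home directory. The computation step is a placeholder command.

```shell
# Stand-ins for the read-only project mount, the scratch space and the home
# directory; the real Snellius paths are given in the comments.
PROJECT=$(mktemp -d)    # on Snellius: /lisa_migration/project/<project-space-name>
SCRATCH=$(mktemp -d)    # on Snellius: /scratch-shared/$USER
HOMEDIR=$(mktemp -d)    # on Snellius: your home directory (200 GB quota)
echo "input data" > "$PROJECT/input.dat"

# 1. Stage input files from the (read-only) project mount onto scratch.
cp -a "$PROJECT/." "$SCRATCH/"

# 2. Run the computation on scratch (a placeholder command here).
tr 'a-z' 'A-Z' < "$SCRATCH/input.dat" > "$SCRATCH/result.dat"

# 3. Store the job output elsewhere, e.g. in your home directory.
cp "$SCRATCH/result.dat" "$HOMEDIR/"
cat "$HOMEDIR/result.dat"    # INPUT DATA
```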

Any ACLs you had set on Lisa data will need to be re-created on Snellius.

Frequently Asked Questions


Will the migration from Lisa to Snellius have any impact on my work?

We aim to keep operational impact to a minimum, and our advisors will be in touch to review and discuss your migration. The user environment on Snellius is very similar to that on Lisa. In most cases your existing Lisa workflow for creating and running batch jobs can be applied directly on Snellius, perhaps with some minor modifications. We do ask existing Lisa users to migrate their own data (and any ACLs) to their new Snellius account, as mentioned above.

Will I need to go elsewhere for computing services? 

Although the Lisa hardware is being decommissioned, we will continue to provide Lisa services on the Snellius infrastructure. Our aim is to migrate most Lisa users to Snellius and to provide a work environment that is as similar as possible. So there is no need to look for alternatives.

How long will data be kept on Lisa? I.e. how long do I have to migrate?

This depends on the specific contract that is in place for your institute or Lisa project, but Lisa data is available until at least three months after migration of an account. Normal procedures for removing user data after an account has expired remain in place (see: this FAQ entry) and users will receive a reminder prior to their data expiring.

What about users that have both CPU and GPU access to Lisa?

In most cases users that have both CPU and GPU access on Lisa will have different accounts and usernames for each, because the accounting of CPU and GPU jobs was done differently on Lisa. GPU accounts will be migrated first, CPU accounts will be migrated at a later stage.

Will my existing jobs in the Lisa queue be migrated?

No, jobs will not be migrated. You have to resubmit those jobs on Snellius.

My SSH access to Snellius times out, although I have a valid account and username, why?

Access to Snellius is only allowed from systems whose IP address is registered by the Snellius team. If SSH access is denied, please submit a Service Desk ticket with a request to register your public IP address. Also see this FAQ entry for information on using the "doornode", which you can use for access until your IP has been added to the allow-list.
To find out your public IP address, run the following command in a terminal on Lisa, Snellius or MobaXterm, or just paste the URL into your browser:

curl http://ipecho.net/plain; echo

Why do I have to manually migrate my data from Lisa to Snellius?

Migrating all user data to Snellius ourselves would be a complex operation for SURF, one that would also cause a very large peak in I/O operations on Snellius. Because SURF cannot make a selection of relevant files, we would have to migrate ALL files. Leaving the migration to the users results in an operation that is more spread out over time, causing less load on both systems (which are in full production). It is also a good opportunity for users to clean up files they no longer want to keep, reducing the overall data volume to migrate.

Will I have access to Snellius CPU nodes by default?

No, this is different from Lisa, where all users could access (some) CPU nodes. On Snellius, migrated users will initially only be able to access the Snellius GPU nodes, except for a selection of institutes that provide CPU node access. Migrated GPU accounts only receive a GPU budget by default.

Why do some Lisa GPU partitions not exist on Snellius?

See the system comparison of Lisa and Snellius below for the naming, usage and access restrictions of Snellius' GPU partitions.

Snellius compared to Lisa

Environment

The user environment on Snellius is very similar to that on Lisa. Both use the Linux operating system, and they both provide very similar software stacks that can be accessed via the modules environment (see Software). Slurm is used on both systems to manage the batch queues (see SLURM batch system). So in most cases your existing workflow on Lisa used for creating and running batch jobs can be applied directly on Snellius. Minor changes to job scripts might be needed, for example due to the different set of Slurm partitions on Snellius.

Concurrent with the migration a new set of GPU nodes is added to Snellius, called Phase1A. See below for more details. One difference of these new nodes compared to the existing GPU nodes in Snellius is that they contain a local SSD disk, to be used as fast scratch space. If you want to use the new GPU nodes with the local scratch disk feature, you have to specify this with the --constraint=scratch-node option for Slurm.
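For example, a job script header requesting one of these new nodes could look like this (the program and input file are placeholders; /scratch-node is the local-disk scratch location listed in the node comparison on this page):

```shell
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --constraint=scratch-node   # only the new Phase 1A/1B nodes have the local SSD
#SBATCH --time=01:00:00

# Placeholder workflow: stage I/O-intensive data on the node-local SSD scratch.
cp "$HOME/input.dat" /scratch-node/
./my_program /scratch-node/input.dat
```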

On Snellius you can check your current disk quota details with the myquota command.

Phase1A GPU extension of Snellius

The UvA and the VU, the most important stakeholders in the GPU partition of Lisa, are participating in Snellius. As a result, the GPU capacity of Snellius will be increased considerably in November-December 2022 (the Phase 1A and Phase 1B extensions). This is an upgrade with nodes that are almost identical to the existing Phase 1 GPU nodes. The only difference is the local SSD disks on the additional GPU nodes, which can be used as very fast local scratch space. The new GPU nodes will be integrated into the existing Slurm partitions on Snellius, and they can be used by every Snellius user that has access to the GPU partition.

Here is a comparison between the existing Phase 1 GPU nodes and the new Phase 1A+1B extension:


| | Phase 1 (existing) | Phase 1A+1B extension |
| --- | --- | --- |
| Number of nodes | 36 | 36 |
| Node flavor | gcn | gcn |
| CPU SKU | Intel Xeon Platinum 8360Y (2x), 36 cores/socket, 2.4 GHz (Speed Select SKU), 250 W | Intel Xeon Platinum 8360Y (2x), 36 cores/socket, 2.4 GHz (Speed Select SKU), 250 W |
| CPU cores per node | 72 | 72 |
| Accelerators | NVIDIA A100 (4x) with 40 GiB HBM2 memory, 5 active memory stacks per GPU | NVIDIA A100 (4x) with 40 GiB HBM2 memory, 5 active memory stacks per GPU |
| DIMMs | 16 x 32 GiB DDR4, 3200 MHz (512 GiB); 160 GiB HBM2 (7,11 GiB) | 16 x 32 GiB DDR4, 3200 MHz (512 GiB); 160 GiB HBM2 (7,11 GiB) |
| Scratch space | /scratch-shared, /scratch-local | /scratch-shared, /scratch-local, /scratch-node (local disk) |
| Local disk | n/a | ThinkSystem PM983 2.5" 7mm 7.68 TB, Read Intensive Entry NVMe PCIe 3.0 x4, trayless SSD |

You can select a node with local SSD storage using a SLURM constraint:
#SBATCH --constraint=scratch-node

System comparison between Lisa and Snellius


| | Lisa | Snellius |
| --- | --- | --- |
| Operating system | Debian Linux | Rocky Linux |
| Batch system | Slurm | Slurm |
| Quota home filesystem | 200 GB (NFS) | 200 GB (GPFS) |
| Quota inodes home filesystem | N/A | 1M inodes (files) |
| Quota scratch filesystem | | 8 TB (the local SSD scratch disks are excluded from this limit) |
| Access control lists (ACLs) | NFSv4 | POSIX ACLs |
| Type of GPUs | 4 x GeForce 1080Ti (11 GB GDDR5X); 4 x Titan RTX (24 GB GDDR6) | 4 x NVIDIA A100 (40 GiB HBM2) |
| Cost of GPU nodes | 4 x GeForce 1080Ti: 42.1 SBU/node-hour; 4 x Titan RTX: 91.2 SBU/node-hour | 4 x NVIDIA A100: 512 SBU/node-hour |
| GPU partitions | gpu, gpu_shared, gpu_titanrtx, gpu_titanrtx_shared | gpu, gpu_vis (all GPUs in Snellius are of the same type, hence there are no separate partitions per type) |
| Software environment | 2022 / 2021 / 2020; 2019 (deprecated) | 2022 / 2021; 2020 (development environment only; no applications) |
| Install base software | EasyBuild + Modules env | EasyBuild + Modules env |
| Access to the system | | Only accessible from registered IP addresses |

GPU partitions on Snellius

For regular users only 2 GPU partitions are visible on Snellius:

| Partition | Usage | User visible? | User accessible? | Limits |
| --- | --- | --- | --- | --- |
| gpu | GPU computations | Yes | Yes | Max. 5 days wallclock time |
| gpu_vis | (Interactive) remote visualization | Yes | Restricted access by default | Max. 24 hours wallclock time + limit on number of jobs per user |

See Snellius usage and accounting for all details on these partitions, including how they are accounted.

Getting access to Snellius

For those users who already have an account and username on Lisa, SURF will create the same username/account on Snellius. Also, the Unix uid of your username will remain the same, so you can still access your Lisa files on Snellius.

Users who already have an account on Snellius will be given the choice to get an additional username/account on Snellius, or to use the existing username/account.

Users who haven't used Lisa yet can request access to Snellius in the same way as for Lisa in the past (see Obtaining an account).