Lisa Computing cluster no longer available

The Lisa Computing Cluster has been decommissioned and its services have moved to Snellius as of 1 July 2023. As a result, requesting computing time on Lisa is no longer possible. As a replacement, you can use computing time on Snellius.

You can request this service via our Service Desk portal, by phone on +31 20 800 14 00 or by email at servicedesk@surf.nl, or you can make an appointment with one of our advisors.

Please note! The product information below is not up to date. We are working on an update.

Lisa end-of-life

After almost two decades of fruitful production, the Lisa cluster has been decommissioned. A detailed timetable is provided below, and we will continuously update it as the migration planning is refined.

Although the Lisa hardware is being decommissioned, we will continue to provide Lisa services on the Snellius infrastructure. We aim to migrate most Lisa users to Snellius and to provide a work environment that is as similar as possible.

Accounts of most (but not all) existing Lisa users will be migrated to the Dutch National supercomputer Snellius, also hosted and maintained by SURF.

Please note that existing Lisa users will be responsible for migrating their data (and any ACLs) to the new Snellius account.

This page contains important information about the migration for existing Lisa users; please read it carefully. Also, check back regularly for updated information.

Migration process

Which users?

In Q4 2022, only GPU node users from specific institutes were migrated. The reason for this was that these institutes had co-invested in Snellius GPU nodes to provide continued GPU access to their users. As part of this investment, 36 extra GPU nodes were added to Snellius to accommodate the additional users coming from Lisa.

In early 2023, 21 extra CPU nodes were added to Snellius to accommodate the additional CPU users coming from Lisa.

All contracts for Lisa's resources that originally ended on 31-12-2022 have been extended until 30-6-2023 to create additional time for new contract negotiations and further migration.

The timetable below shows the current status for all GPU users and the planning for CPU users.

User affiliation | Impact on existing Lisa users
Netherlands Cancer Institute (GPU) | Contract negotiations are still underway. Will be migrated in 2023.
University of Amsterdam (GPU) | All active GPU users have been migrated to Snellius.
Eindhoven University of Technology (GPU) | All active GPU users after September 2022(?) have been migrated to Snellius.
Donders Institute (GPU) | All users have been migrated to Snellius.
Other institutes (GPU) | GPU users will NOT be migrated. These users will continue to have access to the Lisa GPU nodes until the end of their project/account. They have to make their own arrangements for an alternative to Lisa.
NWO (CPU) | Users of all NWO-big and EINF grants still active on Lisa with an expiration date before the end of June will not be migrated to Snellius. No extensions will be granted to these projects. If you want to continue your research project, you have to apply for a new grant on Snellius and request that your current Lisa username is activated on Snellius and migrated to the new Snellius account.
University of Amsterdam (CPU) | Users will be migrated in two rounds, starting in May 2023. Preparations, such as username creation on Snellius, are underway. Only users that showed batch activity on Lisa in 2023 will be migrated by default. In principle, all these users will be migrated on the 22nd of May. If this causes any inconvenience because you want to finish urgent work or have pending deadlines, you can opt out of this migration round. All remaining users will be migrated on the 15th of June.
Free University (CPU) | Contract negotiations are still underway. There is a valid contract with limited resources for Snellius access until 30-6-2023. Please get in touch with us if you want to migrate to Snellius before the big move.
Eindhoven University of Technology (CPU) | All active users have already been migrated to Snellius. All Lisa budgets expired on the 31st of March. If your account has not been migrated because you had no recent activity on Lisa, but you still want to continue working on Snellius, contact the TU/e support team (hpcsupport@tue.nl) to apply for a Snellius budget.
Utrecht University (CPU) | Active users have already been migrated to Snellius. If your account has not been migrated because you had no recent activity on Lisa, but you still want to continue working on Snellius, contact the UU support team to apply for a Snellius budget.
Erasmus University Rotterdam (CPU) | Contract negotiations are still underway. There is a valid contract with limited resources for Snellius access until 31-12-2023. Please get in touch with us if you want to migrate to Snellius before the big move.
Delft University of Technology (CPU) | Delft University has several small department-oriented contracts for Lisa's resources. SURF will not migrate users on these contracts; they have to make their own (department) arrangements to use Snellius resources.
GCC and PGC | A joint PGC and SURF staff team is currently testing the data setup and customised environment for these users. Because of the unique data setup, all users have to be moved simultaneously. The joint team will determine the date for this.

Note that no new RCCS, EINF and NWO projects for Lisa's CPU or GPU nodes will be granted for 2023.

Steps

  1. SURF will create an account on Snellius for you and provide you with access details by e-mail.

  2. Register your IP address with our Service Desk if needed, as Snellius uses IP allow-listing. This means that you cannot log in to Snellius through SSH unless your IP is registered. By default, we already allow specific IP ranges, so please try to log in first before contacting the Service Desk. See this FAQ entry for more information.

  3. Once you have access to Snellius, you won't have access to Lisa anymore. You will then have to adapt your personal configuration to the Snellius environment (.bashrc, scratch locations, etc.).

    Don't copy your .bashrc and/or .bash_profile from your Lisa directory to your Snellius home directory without checking and updating them where needed.

  4. Once logged in on Snellius, you can migrate your Lisa data to your new personal directory. The Lisa file systems will be mounted on Snellius as a read-only mount until 30-9-2023. You can copy your Lisa data to your Snellius home directory or project space. See below for details.

  5. Check and adapt your batch scripts to the Snellius Slurm configuration (mainly the partition names and the cores-per-node parameters); a sketch of a typical adaptation is shown after this list.

  6. User-defined ACLs need to be reconfigured on Snellius, if necessary.

  7. Once your Lisa account expires, the data on Lisa will be deleted after 15 weeks (in accordance with our data retention policy).
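
As an illustration of step 5, the sketch below shows the kind of changes a Lisa batch script typically needs on Snellius. The partition name "thin", the core count of 128 and the module names are assumptions for illustration only; check Snellius partitions and accounting for the values that apply to your account.

#!/bin/bash
# Illustrative Snellius batch script adapted from a Lisa script (assumed names).
#SBATCH --job-name=my_job
#SBATCH --partition=thin        # Lisa scripts typically used e.g. "normal"; pick a Snellius partition you have access to
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128   # Snellius nodes have more cores per node than Lisa nodes; adjust to the node type
#SBATCH --time=01:00:00

module load 2022                # select a software stack year, then load your applications
srun ./my_program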

Timetable

This is a provisional timeline that will be updated as the migration planning is finalised. Please note that users of the Lisa GPU partitions are migrated first, followed by those of the Lisa CPU partitions.

Date | Activity | Status
Q4 2022 | Migration of Lisa GPU users to Snellius. | Done.
November 1, 2022 | Installation of 18 new Snellius GPU nodes. | In production.
November 2022 | Lisa GPU users migrate to Snellius (initial pilot group). | First users have access.
December 2022 | Lisa GPU users migrate to Snellius (all remaining users). | Done.
December 1, 2022 | Installation of 18 new Snellius GPU nodes. | In production before 31-12-2022.
December 31, 2022 | Decommissioning of Lisa 1080Ti GPU nodes. | Decommissioned.
December 31, 2022 | No more new users will be admitted to Lisa. |
First half of 2023 | Migration of Lisa CPU users to Snellius. | Migration is in progress.
March 2023 | Installation of 21 additional CPU thin nodes on Snellius. | 18 nodes in production on 1-3-2023; 3 nodes in production on 1-4-2023.
22 May 2023 | Migration of first batch of UvA users to Snellius. | Done.
15 June 2023 | Migration of remaining UvA users to Snellius. | Done.
July 2023 | Lisa will be decommissioned. | Done.
October 2023 | Lisa file systems will be unmounted from Snellius. | Done.

Transferring your data from Lisa to Snellius


Transferring your Lisa data to Snellius will be your responsibility.

To facilitate data migration, the home filesystems of Lisa will be mounted on Snellius in read-only mode. So, once you have access to Snellius, you can copy the files you want to retain from the Lisa read-only mount (/lisa_migration/home/$USER) to your regular Snellius home directory. For the copying, you can use the usual Linux commands (like cp or rsync). This is also an excellent opportunity to clean up your home directory and reduce the amount of storage used.
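
For example, a copy with rsync could look like the sketch below. The directory name results/ is purely illustrative; replace it with the directories you actually want to keep.

# Copy a single directory from the read-only Lisa mount to your Snellius home
rsync -av /lisa_migration/home/$USER/results/ $HOME/results/

# Or copy your entire Lisa home directory into a subdirectory (clean up first to limit the volume)
rsync -av /lisa_migration/home/$USER/ $HOME/lisa_home/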

A number of Lisa users also had access to a project space on Lisa. These project spaces are also mounted (read-only) on Snellius in the location /lisa_migration/project/<project-space-name>.

UvA users who have project spaces on Lisa of up to 1 TiB will automatically receive a project space on Snellius. For project spaces larger than 1 TiB, you have to fill in the usual request form (https://servicedesk.surf.nl/jira/plugins/servlet/desk/portal/13/create/51). Users of other affiliations need to use the form or contact their respective IT departments. On Snellius, the /home directory quota is fixed at 200 GB. Users with an extended /home directory on Lisa must also request a project space on Snellius for their excess data.

Any ACLs you had set on Lisa's data will need to be re-created on Snellius.
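
Because Lisa used NFSv4 ACLs and Snellius uses POSIX ACLs (see the system comparison below), ACLs cannot simply be copied over; they have to be set again on Snellius. A minimal sketch using the standard POSIX ACL tools is shown below; the project space name and the collaborator username are hypothetical.

# Inspect the ACLs currently set on a Snellius directory
getfacl /projects/0/myproject/shared_data

# Grant a collaborator (hypothetical username) read and execute access to a directory tree
setfacl -R -m u:colleague01:rX /projects/0/myproject/shared_data

# Set a default ACL so files created later inherit the same access
setfacl -d -m u:colleague01:rX /projects/0/myproject/shared_data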

Frequently Asked Questions

FAQ - Migration of Lisa to Snellius

 Will the migration from Lisa to Snellius have any impact on my work?

We aim to keep operational impact to a minimum. Our advisors will be in touch to review and discuss. The user environment on Snellius is very similar to that on Lisa. In most cases, your existing workflow on Lisa used for creating and running batch jobs can be applied directly on Snellius, perhaps with some minor modifications. We do request existing Lisa users to migrate their own data (and any ACLs) to their new Snellius account, as mentioned above.

Will I need to go elsewhere for computing services? 

Although the Lisa hardware has been decommissioned, we will continue to provide Lisa services on the Snellius infrastructure. Our aim is to migrate most Lisa users to Snellius and to provide a work environment that is as similar as possible. So there is no need to look for alternatives.

Will I receive a new password or login for Snellius? 

No. We will simply grant your existing account access to Snellius and rescind its access to the Lisa cluster.

How long will data be kept on Lisa? I.e. how long do I have to migrate?

This depends on the specific contract that is in place for your institute or Lisa project, but Lisa data is available until at least three months after migration of an account. Normal procedures for removing user data after an account has expired remain in place (see: this FAQ entry) and users will receive a reminder prior to their data expiring.

I am a VU / UvA user with an unlimited CPU account. Will I have an unlimited budget on Snellius too?

No. On Snellius, all budgets are limited. Users on an unlimited fair-share account will get a default computing budget and possibly extra storage, as decided by their respective IT departments.

What about users that have both CPU and GPU access to Lisa?

In most cases, users that have both CPU and GPU access on Lisa will have different accounts and usernames for each, because the accounting of CPU and GPU jobs was done differently on Lisa. As an account on Snellius can have both GPU and several types of CPU access (thin, fat and high-memory nodes), we will consolidate these accounts into one. Users with multiple Lisa accounts will get access to the Lisa data of all their accounts.

Will my existing data on Lisa be migrated?

Not directly. Users will get a new, empty home directory on Snellius. Their old Lisa home directories and project spaces will be mounted on a special filesystem as read-only exports. Users are required to copy the data that they need to their Snellius home directories or project spaces. The Lisa data can be found in the directories

/lisa_migration/home/<your_user> and /lisa_migration/project/<your_projectspace>

Will I still be able to log in to Lisa after the migration?


No. Your login will be enabled for Snellius only, and you will lose SSH access to Lisa. As all your data will be accessible from Snellius, there is no need for that either.

Will my existing jobs in the Lisa queue be migrated?

No, jobs will not be migrated. You have to resubmit those jobs on Snellius.

I have a UvA / VU account: will I still have an unlimited budget on Snellius?

No. Both universities will now grant a limited budget. In most cases, we have agreed on a standard budget with the IT departments of the universities. In other cases, users can use this form to request extra resources.

My SSH access to Snellius times out, even though I have a valid account and username. Why?

Access to Snellius is only allowed from systems or networks whose IP address has been registered by the Snellius team. When trying to log in from home, you can either use your university's VPN (usually EduVPN) or use the doornode host, which is reachable from anywhere. Note, however, that the doornode does not allow data transfers such as SFTP or SCP.

As a last resort, please submit a Service Desk ticket with a request to register your public IP address. As this requires a change to the firewall settings, adding a new IP address may take up to two weeks.

To find out your public IP address, you can run the following command in a terminal on Lisa, Snellius or MobaXterm, or simply paste the URL into your browser:

curl http://ipecho.net/plain; echo

Why do I have to manually migrate my data from Lisa to Snellius?

Having SURF migrate all user data to Snellius would be a complex operation that would also cause a very large peak in I/O load on Snellius. Because SURF cannot determine which files are still relevant, we would have to migrate ALL files. Leaving the migration to the users spreads the operation out over time, causing less load on both systems (which are in full production). It is also a good opportunity for users to clean up files they no longer want to keep, reducing the overall data volume to migrate.

As a GPU user, will I have access to Snellius CPU nodes by default?

Not necessarily. Unlike on Lisa, where all users could access (some) CPU nodes, migrated users on Snellius may only have access to the GPU nodes. We have agreements with some institutes to provide a default amount of CPU SBUs alongside the GPU budget; others only get a GPU budget by default. Feel free to ask the Service Desk if in doubt.

Why do some GPU partitions not exist on Snellius compared to Lisa?

See the system comparison of Lisa and Snellius below for the naming, usage and access restrictions of Snellius' GPU partitions.

Snellius compared to Lisa

Environment

The user environment on Snellius is very similar to that on Lisa. Both use the Linux operating system, and provide very similar software stacks that can be accessed via the modules environment (see Software).  Slurm is used on both systems to manage the batch queues (see SLURM batch system). So in most cases your existing workflow on Lisa used for creating and running batch jobs can be applied directly on Snellius. Minor changes to job scripts might be needed, for example, due to the different set of Slurm partitions on Snellius.
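
As a brief illustration, loading software on Snellius follows the same module-based pattern as on Lisa. The module names below are only examples and may not match what is currently installed; use module avail to see the actual software stack.

# Select a software stack year, then load the applications you need
module load 2022
module avail                                  # list software available in this stack
module load Python/3.10.4-GCCcore-11.3.0      # example module name; actual versions differ per stack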

Concurrently with the migration, a new set of GPU nodes has been added to Snellius, called Phase 1A. See below for more details. One difference between these new nodes and the existing GPU nodes in Snellius is that they contain a local SSD to be used as fast scratch space. If you want to use the new GPU nodes with the local scratch disk, you have to request this with the --constraint=scratch-node option for Slurm.

On Snellius, you can check your disk quota details with the myquota command.

Phase1A GPU extension of Snellius

The UvA and the VU, the most important stakeholders in the GPU partition of Lisa, are participating in Snellius. As a result, the GPU capacity of Snellius will be increased considerably in November-December 2022 (the Phase 1A and Phase 1B extensions). This is an upgrade with nodes that are almost identical to the existing Phase 1 GPU nodes. The only difference is that the additional GPU nodes have local SSDs, which can be used as fast local scratch space. The new GPU nodes will be integrated into the existing Slurm partitions on Snellius, and they can be used by every Snellius user that has access to the GPU partition.

Here is a comparison between the current Phase 1 GPU nodes and the new Phase 1A extension:


Specification | Phase 1 (existing) | Phase 1A+1B extension
Number of nodes | 36 | 36
Node flavour | gcn | gcn
CPU SKU | Intel Xeon Platinum 8360Y (2x), 36 cores/socket, 2.4 GHz (Speed Select SKU), 250 W | Intel Xeon Platinum 8360Y (2x), 36 cores/socket, 2.4 GHz (Speed Select SKU), 250 W
CPU cores per node | 72 | 72
Accelerators | NVIDIA A100 (4x) with 40 GiB HBM2 memory, 5 active memory stacks per GPU | NVIDIA A100 (4x) with 40 GiB HBM2 memory, 5 active memory stacks per GPU
DIMMs | 16 x 32 GiB, 3200 MHz, DDR4 (512 GiB, approx. 7.11 GiB per core), plus 160 GiB HBM2 | 16 x 32 GiB, 3200 MHz, DDR4 (512 GiB, approx. 7.11 GiB per core), plus 160 GiB HBM2
Scratch space | /scratch-shared, /scratch-local | /scratch-shared, /scratch-local, /scratch-node (local disk)
Local disk | n/a | ThinkSystem PM983 2.5" 7mm 7.68 TB, Read Intensive Entry NVMe PCIe 3.0 x4, trayless SSD

You can select a node with local SSD storage using a SLURM constraint:
#SBATCH --constraint=scratch-node
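
For instance, a GPU job that stages its input on the node-local SSD might look roughly like the sketch below. The staging directory under /scratch-node and the program name are assumptions for illustration; consult the Snellius documentation for the exact local scratch layout on these nodes.

#!/bin/bash
# Illustrative job script for a Phase 1A/1B GPU node with local SSD scratch (assumed paths).
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --constraint=scratch-node   # request a node that has the local SSD
#SBATCH --time=04:00:00

# Stage input data onto the node-local SSD for fast access (directory layout assumed)
mkdir -p /scratch-node/$USER
cp -r $HOME/dataset /scratch-node/$USER/

# Run the workload against the local copy
srun ./my_gpu_program --data /scratch-node/$USER/dataset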

System comparison between Lisa and Snellius


Property | Lisa | Snellius
Operating System | Debian Linux | Rocky Linux
Batch system | Slurm | Slurm
Quota home filesystem | 200 GB (NFS) | 200 GB (GPFS)
Quota inodes home filesystem | N/A | 1M inodes (files)
Quota scratch filesystem | | 8 TB (the local SSD scratch disks are excluded from this limit)
Access control lists (ACLs) | NFSv4 ACLs | POSIX ACLs
Type of GPUs | 4 x GeForce 1080Ti (11 GB GDDR5X); 4 x Titan RTX (24 GB GDDR6) | 4 x NVIDIA A100 (40 GiB HBM2)
Cost of GPU nodes | 4 x GeForce 1080Ti: 42.1 SBU/node-hour; 4 x Titan RTX: 91.2 SBU/node-hour | 4 x NVIDIA A100: 512 SBU/node-hour
GPU partitions | gpu, gpu_shared, gpu_titanrtx, gpu_titanrtx_shared | gpu, gpu_vis (all GPUs in Snellius are of the same type, so there are no separate partitions per GPU type)
Software environment | 2022 / 2021 / 2020; 2019 (deprecated) | 2022 / 2021; 2020 (development environment only; no applications)
Project space location | /project/<projectname> | /projects/0/<projectname>
Managed_dataset location | /nfs/managed_datasets/ | /projects/2/managed_datasets/
Install base software | EasyBuild + Modules env | EasyBuild + Modules env
Accounting | The accounts of UvA and VU users have no individual budgets, and no budget checks are applied | All accounts have an initial budget; budget checks prevent user jobs from exceeding the allocated budget
Access to the system | | Only accessible from registered IP addresses

GPU partitions on Snellius

For regular users, only 2 GPU partitions are visible on Snellius:

Partition | Usage | User visible? | User accessible? | Limits
gpu | GPU computations | Yes | Yes | Max. 5 days wallclock time
gpu_vis | (Interactive) remote visualisation | Yes | Restricted access by default | Max. 24 hours wallclock time + limit on number of jobs per user

See Snellius partitions and accounting for all details on these partitions, including how they are accounted.

Getting access to Snellius

For users with an account and username on Lisa, SURF will create the same username/account on Snellius. Also, the Unix uid of your username will remain the same, so you can still access your Lisa files on Snellius.

Users who already have an account on Snellius will be given the choice to get an additional username/account on Snellius or to use the existing username/account.

Users who did not previously use Lisa can request access to Snellius in the same way as was used for Lisa in the past (see Obtaining an account on Snellius).