Migration process
Which users?
In Q4 2022, only GPU node users from specific institutes were migrated, because these institutes had co-invested in Snellius GPU nodes to provide continued GPU access to their users. As part of this investment, 36 extra GPU nodes were added to Snellius to accommodate the additional users coming from Lisa.
In early 2023, 21 extra CPU nodes were added to Snellius to accommodate the additional CPU users coming from Lisa.
All contracts for Lisa's resources that originally ended on 31-12-2022 have been extended until 30-6-2023 to create additional time for new contract negotiations and further migration.
The timetable below shows the current status for all GPU users and the planning for CPU users.
User affiliation | Impact on existing Lisa users |
---|---|
Netherlands Cancer Institute (GPU) | Contract negotiations are still underway. Will be migrated in 2023. |
University of Amsterdam (GPU) | All active GPU users have been migrated to Snellius. |
Eindhoven University of Technology (GPU) | All active GPU users after September 2022(?) have been migrated to Snellius. |
Donders Institute (GPU) | All users have been migrated to Snellius. |
Other institutes (GPU) | GPU users will NOT be migrated. These users will continue to have access to the Lisa GPU nodes until the end of their project/account, and will have to make their own arrangements for an alternative to Lisa. |
NWO (CPU) | Users of all NWO-big and EINF grants still active on Lisa with an expiration date before the end of June will not be migrated to Snellius. No extensions will be granted to these projects. If you want to continue your research project, you have to apply for a new grant on Snellius and request that your current Lisa username is activated on Snellius and migrated to the new Snellius account. |
University of Amsterdam (CPU) | Users will be migrated in two rounds, starting in May 2023. Preparations, like username creation on Snellius, are underway. Only users that showed batch activity on Lisa in 2023 will be migrated by default. In principle, all these users will be migrated on the 22nd of May. If this causes any inconvenience because you want to finish some urgent work or have pending deadlines, you can opt-out of this migration round. All remaining users will be migrated on the 15th of June. |
Free University (CPU) | Contract negotiations are still underway. There is a valid contract with limited resources for Snellius access until 30-6-2023. Please get in touch with us if you want to migrate to Snellius before the big move. |
Eindhoven University of Technology (CPU) | All active users have already been migrated to Snellius. All Lisa budgets expired on the 31st of March. If your account wasn't migrated because you showed no recent activity on Lisa, but you still want to continue working on Snellius, contact the TU/e support team (hpcsupport@tue.nl) to apply for a Snellius budget. |
Utrecht University (CPU) | Active users have already been migrated to Snellius. If your account wasn't migrated because you showed no recent activity on Lisa, but you still want to continue working on Snellius, contact the UU support team to apply for a Snellius budget. |
Erasmus University Rotterdam (CPU) | Contract negotiations are still underway. There is a valid contract with limited resources for Snellius access until 31-12-2023. Please get in touch with us if you want to migrate to Snellius before the big move. |
Delft University of Technology (CPU) | Delft University has several small department-oriented contracts for Lisa's resources. SURF won't migrate users on these contracts. They have to make their own (department) arrangements to use Snellius resources. |
GCC and PGC | A joint PGC and SURF staff team is currently testing the data setup and customised environment for these users. Because of the unique data setup, all users have to be moved simultaneously. The joint team will determine the date for this. |
Info |
---|
Note that no new RCCS, EINF and NWO projects for Lisa's CPU or GPU nodes will be granted for 2023. |
Steps
- SURF will create an account on Snellius for you and provide you access details by e-mail.
- Register your IP address with our Service Desk if needed as Snellius uses IP allow-listing. This means that you cannot log in on Snellius through SSH unless your IP is registered. By default, we already allow specific IP ranges, so please try to log in first before contacting the Service Desk. See this FAQ entry for more information.
- Once you have access to Snellius, you won't have access to Lisa anymore. You will then have to adapt your personal configuration to the Snellius environment (`.bashrc`, scratch locations, etc.). Note: don't copy your `.bashrc` and/or `.bash_profile` from your Lisa directory to your Snellius home directory without checking and updating them where needed.
- Once logged in on Snellius, you can migrate your Lisa data to your new personal directory. The Lisa file systems will be mounted on Snellius as a read-only mount until 30-9-2023. You can copy your Lisa data to your Snellius home directory or project space. See below for details.
- Check and adapt your batch scripts to the Snellius Slurm configuration (mainly the partition names and the cores-per-node parameters).
- User-defined ACLs need to be reconfigured on Snellius, if necessary.
- Once your Lisa account expires, the data on Lisa will be deleted after 15 weeks (in accordance with our data retention policy).
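To illustrate the batch-script step above, here is a minimal sketch of an adapted Snellius job script. The partition name (`thin`), the cores-per-node value, and the program name are examples, not prescriptions; check the Snellius documentation for the partitions and node sizes that apply to your account.

```shell
#!/bin/bash
# Sketch of a Snellius job script adapted from Lisa. The partition name
# and cores-per-node value below are examples and may differ for your
# account; verify them against the Snellius documentation.
#SBATCH --job-name=migrated-job
#SBATCH --partition=thin          # Snellius partition names differ from Lisa's
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=128     # adjust to the Snellius node size, not Lisa's
#SBATCH --time=01:00:00

module load 2022                  # select a software environment release first
srun ./my_program                 # placeholder for your own executable
```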
Timetable
This is a provisional timeline that will be updated as the migration planning is finalised. Please note that users of the Lisa GPU partitions are migrated first, followed by those of the Lisa CPU partitions.
Date | Activity | Status |
---|---|---|
Q4 2022 | Migration of Lisa GPU users to Snellius. | In progress. |
November 1 | Installation of 18 new Snellius GPU-nodes. | In production. |
November | Lisa GPU users migrate to Snellius (initial pilot group). | First users have access. |
December | Lisa GPU users migrate to Snellius (all remaining users). | In progress. |
December 1 | Installation of 18 new Snellius GPU-nodes. | In production before 31-12-2022. |
December 31 | Decommissioning of Lisa 1080Ti GPU nodes. | Decommissioned. |
December 31 | No more new users will be admitted to Lisa. | |
First half 2023 | Migration of Lisa CPU-users to Snellius. | Migration is in progress. |
March 2023 | Installation of additional 21 CPU-thin nodes on Snellius. | 18 nodes in production on 1-3-2023; 3 nodes in production on 1-4-2023 |
22 May 2023 | Migration of first batch of UvA users to Snellius. | |
15 June 2023 | Migration of remaining UvA users to Snellius. | |
July 2023 | Lisa will be decommissioned. | |
October 2023 | Lisa file systems will be unmounted from Snellius. | |
Transferring your data from Lisa to Snellius
Warning |
---|
Transferring your Lisa data to Snellius will be your responsibility. |
To facilitate data migration, the home filesystems of Lisa will be mounted on Snellius in "read-only" mode. So, once you have access to Snellius, you can copy the files you want to retain from the Lisa read-only mount (`/lisa_migration/home/$USER`) to your regular Snellius home directory. For the copying, you can use the usual Linux commands (like `cp` or `rsync`). This is also an excellent opportunity to clean up your home directory to reduce the amount of storage used.
A number of Lisa users also had access to a project space on Lisa. These project spaces are also mounted (read-only) on Snellius in the location `/lisa_migration/project/<project-space-name>`.
Users who have project spaces on Lisa up to 1 TiB will automatically receive a project space on Snellius. For project spaces larger than 1 TiB, you have to fill in the usual request form (https://servicedesk.surf.nl/jira/plugins/servlet/desk/portal/13/create/51). On Snellius, the /home directory quota is fixed at 200 GB. Users with an extended /home directory on Lisa must also request a project space on Snellius for their excess data.
Any ACLs you had set on Lisa's data will need to be re-created on Snellius.
Frequently Asked Questions
Snellius compared to Lisa
Environment
The user environment on Snellius is very similar to that on Lisa. Both use the Linux operating system, and provide very similar software stacks that can be accessed via the modules environment (see Software). Slurm is used on both systems to manage the batch queues (see SLURM batch system). So in most cases your existing workflow on Lisa used for creating and running batch jobs can be applied directly on Snellius. Minor changes to job scripts might be needed, for example, due to the different set of Slurm partitions on Snellius.
Concurrent with the migration, a new set of GPU nodes is added to Snellius, called Phase1A. See below for more details. One difference between these new nodes and the existing GPU nodes in Snellius is that they contain a local SSD disk to be used as fast scratch space. If you want to use the new GPU nodes with the local scratch disk feature, you have to specify this with the `--constraint=scratch-node` option for Slurm.
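For example, a job script requesting one of the new nodes might look like the sketch below. The partition name, the GPU count, and the use of `$TMPDIR` as the local scratch location are assumptions for illustration; verify them against the Snellius documentation.

```shell
#!/bin/bash
# Sketch of a GPU job that asks for a node with a local SSD scratch
# disk. Partition name, resource sizes, and the $TMPDIR scratch
# location are assumptions to check against the Snellius documentation.
#SBATCH --partition=gpu
#SBATCH --gpus=1
#SBATCH --constraint=scratch-node   # request a node with the local SSD
#SBATCH --time=01:00:00

# Stage input on the fast local disk, run, and copy results back.
cp input.dat "$TMPDIR"/
srun ./my_gpu_program "$TMPDIR"/input.dat
cp "$TMPDIR"/output.dat "$HOME"/
```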
On Snellius, you can check your disk quota details with the `myquota` command.
Phase1A GPU extension of Snellius
The UvA and the VU, the most important stakeholders in the GPU partition of Lisa, are participating in Snellius. As a result, the GPU capacity of Snellius will be increased considerably in November-December 2022 (the Phase1A and Phase1B extensions). This is an upgrade with nodes that are almost identical to the existing Phase1 GPU nodes; the only difference is the local SSD disks on the additional GPU nodes, which can be used as fast local scratch space. The new GPU nodes will be integrated into the existing Slurm partitions on Snellius and can be used by every Snellius user that has access to the GPU partition.
Here is a comparison between the current Phase 1 GPU nodes and the new Phase 1A extension:
 | Phase 1 (existing) | Phase 1A+1B extension |
---|---|---|
Number of nodes | 36 | 36 |
Node flavour | gcn | gcn |
CPU SKU | Intel Xeon Platinum 8360Y (2x) | Intel Xeon Platinum 8360Y (2x) |
CPU cores per node | 72 | 72 |
Accelerators | NVIDIA A100 (4x) with 40 GiB HBM2 memory (5 active memory stacks per GPU) | NVIDIA A100 (4x) with 40 GiB HBM2 memory (5 active memory stacks per GPU) |
DIMMs | 16 x 32 GiB, 3200 MHz, DDR4 | 16 x 32 GiB, 3200 MHz, DDR4 |
Scratch space | Shared scratch file system | Shared scratch file system + local SSD |
Local disk | n/a | Local SSD; you can select a node with local SSD storage using the Slurm constraint `--constraint=scratch-node` |
System comparison between Lisa and Snellius
 | Lisa | Snellius |
---|---|---|
Operating System | Debian Linux | Rocky Linux |
Batch system | Slurm | Slurm |
Quota home filesystem | 200 GB (NFS) | 200 GB (GPFS) |
Quota inodes home filesystem | N/A | 1M inodes (files) |
Quota scratch filesystem | 8 TB (the local SSD scratch disks are excluded from this limit) | |
Access control lists (ACLs) | NFSv4 | POSIX ACLs |
Type of GPUs | | |
Cost of GPU nodes | | |
GPU partitions | | |
Software Environment | 2022 / 2021 / 2020 | 2022 / 2021 |
Project space location | /project/<projectname> | /projects/0/<projectname> |
Managed_dataset location | /nfs/managed_datasets/ | /projects/2/managed_datasets/ |
Install base software | EasyBuild + Modules env | EasyBuild + Modules env |
Accounting | The accounts of UvA and VU users have no individual budgets, and no budget checks are applied | All accounts have an initial budget; budget checks prevent user jobs from exceeding the allocated budget |
Access to the System | | Only accessible from registered IP addresses |
GPU partitions on Snellius
For regular users, only 2 GPU partitions are visible on Snellius:
Partition | Usage | User visible? | User accessible? | Limits |
---|---|---|---|---|
gpu | GPU computations | Yes | Yes | Max. 5 days wallclock time |
gpu_vis | (Interactive) remote visualisation | Yes | Restricted access by default | Max 24 hours wallclock time + limit on number of jobs per user |
See Snellius usage and accounting for all details on these partitions, including how they are accounted.
Getting access to Snellius
For users with an account and username on Lisa, SURF will create the same username/account on Snellius. Also, the Unix uid of your username will remain the same, so you can still access your Lisa files on Snellius.
Users who already have an account on Snellius will be given the choice to get an additional username/account on Snellius or to use the existing username/account.
Users who have not used Lisa before can request access to Snellius in the same way as was previously done for Lisa (see Obtaining an account).