Note

As Snellius is a newly installed and configured system, some things might not be fully working yet and are still being set up. Here, we list the issues that are known to us and that you do not have to report to the Service Desk. Of course, if you encounter issues on Snellius that are not listed here, please let us know (through the Service Desk).


Resolved issues (updated 15-02-22 17:45)

  • Project space data should be complete and writeable as of October 20, 10:30. Please check your data.
  • The remote visualization stack is available as of November 22nd, 10:13. The relevant module is called remotevis/git on both Snellius and Lisa, containing the vnc_desktop script to launch a remote VNC desktop (see the example after this list).

  • Accounting was not active in October, as planned. Since then, it has been fully active.

  • The Intel toolchain from 2021 is now available on the GPU nodes.
  • The out-of-memory issues with the foss versions of QuantumESPRESSO, VASP and CP2K have been resolved.
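
As a minimal sketch of how to use the remote visualization module mentioned above (the exact options of the vnc_desktop script may differ, and you may first need to load a software stack module):

Code Block
languagebash
# Load the remote visualization module and launch a VNC desktop
$ module load remotevis/git
$ vnc_desktop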


Hardware

Infiniband and file system performance

We have found that the InfiniBand connections are not always stable and may underperform. Similarly, the GPFS file systems (home, project and scratch) are not performing as expected: reading and writing large files performs well, but reading and writing many small files is slower than it should be. Among other things, this can affect the time it takes to start a binary, run commands, etc.

We are looking into both issues. 

Software

Missing software packages

A number of software packages from the 2021 environment, as well as some system tools, are still being installed (a quick way to check whether a package has become available in the meantime is shown after the list):

  • BLIS-3.0 (missing on GPU nodes)
  • Extrae-3.8.3 (missing on GPU nodes)
  • COMSOL-5.6.0.341
  • Darshan-3.3.1
  • DFlowFM
  • EIGENSOFT-7.2.1
  • FSL-6.0.5.1
  • LAMMPS-29Oct2020
  • swak4foam
  • Trilinos-13.0.1
  • nedit
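
A quick way to check whether one of these packages has become available in the meantime is to query the module system (a minimal sketch; the package name below is just an example):

Code Block
languagebash
# Load the 2021 software environment, then search for the package
$ module load 2021
$ module avail COMSOL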

Using NCCL for GPU <=> GPU communication

NCCL is a communication library that offers optimized primitives for inter-GPU communication. We have found that it often hangs during initialization on Snellius; the probability of a hang during initialization increases with the number of GPUs in the allocation. The cause is that NCCL sets up its communication using an ethernet-based network interface. By default, it selects the 'ib-bond0' interface, which supports IP over the InfiniBand network in Snellius. However, this interface seems to be experiencing issues.

As a workaround, you can configure NCCL to use the traditional ethernet interface, which on Snellius GPU nodes is called 'eno1np0', by exporting the following environment variable:

Code Block
export NCCL_SOCKET_IFNAME=eno1np0

Note that if you use mpirun as the launcher, you should make sure that the variable gets exported to the other nodes in the job too:

Code Block
mpirun -x NCCL_SOCKET_IFNAME <my_executable_using_nccl>

(Note that when launching your parallel application with srun, your environment gets exported automatically, so the second step is not needed.)

The impact of this workaround on performance is expected to be minimal: the traditional ethernet interface is only used to initialize the connection. Any further NCCL communication between nodes is performed over the native InfiniBand.
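
For illustration, a minimal job script applying this workaround with srun could look as follows (module names and the executable are placeholders):

Code Block
#!/bin/bash
#SBATCH -p gpu
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH -t 20:00

module load ...

# Let NCCL set up its connections over the regular ethernet interface
export NCCL_SOCKET_IFNAME=eno1np0

# srun exports the environment to all nodes automatically
srun <my_executable_using_nccl>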

Cartopy: ibv_fork_init() warning

Users can encounter the following warning message when importing the cartopy and netCDF4 modules in Python:

Code Block
languagepy
>>> import netCDF4 as nc4
>>> import cartopy.crs as ccrs
[1637231606.273759] [tcn1:3884074:0]          ib_md.c:1161 UCX  WARN  IB: ibv_fork_init() was disabled or failed, yet a fork() has been issued.
[1637231606.273775] [tcn1:3884074:0]          ib_md.c:1162 UCX  WARN  IB: data corruption might occur when using registered memory.

The issue is similar to the one reported here. The warning disappears if cartopy is imported before netCDF4.
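
For example, swapping the import order in the snippet above is enough to avoid the warning:

Code Block
languagepy
>>> import cartopy.crs as ccrs   # import cartopy first
>>> import netCDF4 as nc4        # then netCDF4; the UCX warning no longer appears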

Another solution is to disable OFI before running the Python script:

Code Block
languagebash
$ export OMPI_MCA_btl='^ofi'
$ export OMPI_MCA_mtl='^ofi'

Tooling

Attaching to a process with GDB can fail

When using gdb -p <pid> (or the equivalent attach <pid> command in gdb) to attach to a process running in a SLURM job, you might encounter errors or warnings about executable and library files that cannot be opened:

Code Block
languagebash
snellius paulm@gcn13 09:44 ~$ gdb /usr/bin/sleep -p 1054730
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-15.el8
...
Reading symbols from /usr/bin/sleep...Reading symbols from .gnu_debugdata for /usr/bin/sleep...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Attaching to program: /usr/bin/sleep, process 1054730
Error while mapping shared library sections:
Could not open `target:/lib64/libc.so.6' as an executable file: Operation not permitted
Error while mapping shared library sections:
Could not open `target:/lib64/ld-linux-x86-64.so.2' as an executable file: Operation not permitted

Such issues will also prevent symbols from being resolved correctly, making debugging really difficult.

The reason this happens is that processes in a SLURM job get a slightly different view of the file system mounts (using a so-called namespace). When you attach GDB to a running process after using SSH to log into the node where the process is running, the gdb process will not be in the same namespace, causing GDB to have trouble directly accessing the binary (and its libraries) you are trying to debug.

The workaround is to use a slightly different method for attaching to the process:

  1. $ gdb <executable> 
  2. (gdb) set sysroot / 
  3. (gdb) attach <pid> 

For the example above, to attach to /usr/bin/sleep (PID 1054730) the steps would become:

Code Block
languagebash
# Specify the binary to attach to, so GDB can resolve its symbols
snellius paulm@gcn13 09:50 ~$ gdb /usr/bin/sleep 
GNU gdb (GDB) Red Hat Enterprise Linux 8.2-15.el8
...
Reading symbols from /usr/bin/sleep...Reading symbols from .gnu_debugdata for /usr/bin/sleep...(no debugging symbols found)...done.
(no debugging symbols found)...done.
Missing separate debuginfos, use: yum debuginfo-install coreutils-8.30-8.el8.x86_64

# Tell GDB to assume all files are available under /
(gdb) set sysroot /

# Attach to the running process
(gdb) attach 1055415
Attaching to program: /usr/bin/sleep, process 1055415
Reading symbols from /lib64/libc.so.6...(no debugging symbols found)...done.
Reading symbols from /lib64/ld-linux-x86-64.so.2...(no debugging symbols found)...done.
0x0000153fd299ad68 in nanosleep () from /lib64/libc.so.6

(gdb) bt
#0  0x0000153fd299ad68 in nanosleep () from /lib64/libc.so.6
#1  0x000055e495e8cb17 in rpl_nanosleep ()
#2  0x000055e495e8c8f0 in xnanosleep ()
#3  0x000055e495e89a58 in main ()

(gdb) 

File systems

Project space quota exceeded on Snellius

During the data migration of the project space contents from Cartesius to Snellius, there was a considerable period (weeks) during which there was no freeze on the (Cartesius) source side: users kept working, causing files to be created and sometimes also deleted. During the resync runs, the migration software kept things in sync by "soft deleting" files that had been migrated but at a later stage appeared to be no longer present on the Cartesius side. That is, rather than actually removing them, the AFM software moved them into a .ptrash directory.

Depending on users' productivity and rate of data turnover in the last weeks of Cartesius, this can add up and cause "quota pressure". The .ptrash directories are owned by root: Snellius system administration has to delete them, users cannot do this themselves. Several quota-related Service Desk tickets may have this as their root cause - but only for project spaces. Home directories were migrated in a different manner and do not have this issue.
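
To see whether this applies to your project space, you can check for a .ptrash directory at its top level (a sketch; replace the path with that of your own project space, and note that the directory may not be readable by regular users):

Code Block
languagebash
# Example path; substitute your own project space
$ ls -lad /projects/0/<your_project>/.ptrash
# If readable, this shows how much space the soft-deleted files occupy
$ du -sh /projects/0/<your_project>/.ptrash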

Batch system

Allocating multiple GPU nodes

Normally, batch scripts like

Code Block
#!/bin/bash
#SBATCH -p gpu
#SBATCH -n 8
#SBATCH --ntasks-per-node=4
#SBATCH --gpus=8
#SBATCH -t 20:00
#SBATCH --exclusive

module load ...

srun <my_executable>

should get you an allocation with 2 GPU nodes, 8 GPUs, and 4 MPI tasks per node. However, there is currently an issue related to specifying a number of GPUs larger than 4: jobs with the above SBATCH arguments that use OpenMPI and call srun or mpirun will hang.

Instead of specifying the total number of GPUs, please specify the number of GPUs per node combined with the number of nodes, e.g.:

Code Block
#!/bin/bash
#SBATCH -p gpu
#SBATCH -N 2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=4
#SBATCH -t 20:00
#SBATCH --exclusive

module load ...

srun <my_executable>

This will give you the desired allocation with a total of 2 GPU nodes, 8 GPUs, and 4 MPI tasks per node, and the srun (or mpirun) will not hang.
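
If you want to verify the allocation from within the job, an optional sanity check is to print some of the SLURM output environment variables at the start of the job script:

Code Block
# Optional sanity check: print the allocated nodes and the GPUs per node
echo "Nodes: $SLURM_JOB_NODELIST"
echo "GPUs on this node: $SLURM_GPUS_ON_NODE"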