Availability
Snellius
On Snellius only Apptainer is installed. For backward compatibility there is a symbolic link from the former singularity command to apptainer.
Versions
Apptainer (and Singularity) is a software system in active development. SURF aims to keep the version of Apptainer on our systems at the latest stable release. To keep our documentation up to date we have decided to give some general guidelines rather than in-depth, version-specific information; the links below point to the full documentation.
We guarantee no backward compatibility when Singularity/Apptainer is upgraded.
Requirements
Using a Singularity/Apptainer container involves two steps:
- Preparing the container image, by downloading or building it.
- Running the container on a host system
In order to build your own container images you will need to have Singularity/Apptainer installed on a Linux system where you have root (i.e. superuser) rights. Note that building the container image therefore isn't possible on our systems, as regular users do not have root access. However, on Snellius the cbuild partition can be used instead, see below.
Once you have a Singularity/Apptainer container image, you can then run it on Snellius or the Grid system. You will need regular SSH access to one of these systems to do so.
Building the container
Using your local installation of Singularity/Apptainer
You can use Singularity/Apptainer on your local Linux system to build a container image. How you install Singularity/Apptainer locally depends on the Linux distribution you're using. For example, on Fedora you can install Singularity with the command sudo dnf install singularity, on Arch Linux with sudo pacman -S singularity (and similar commands for installing Apptainer). Ubuntu does not include Singularity in its main software repositories. For manual builds of the Singularity or Apptainer software (note: the software, not the container!), see the instructions here (Singularity) and here (Apptainer).
You can install Singularity/Apptainer for example on your own machine, or in a virtual machine, either on your local computer, on the SURF HPC Cloud, or at another cloud resource provider. Administrative privileges are necessary to install the software.
Container build nodes (Snellius)
On Snellius we have a Singularity build partition called cbuild. The nodes in this partition are configured to allow building Singularity containers without root privileges. There are two ways to build a Singularity container using a cbuild node; we describe both below.
Using interactive SSH access
Use the salloc command to reserve a cbuild node, and then SSH into that node. You can then interactively run the Singularity/Apptainer commands you need to build the container image, specifying the --fakeroot option.
Reserve a node in the cbuild partition
snellius paulm@int3 13:14 ~$ salloc -p cbuild -t 2:00:00
salloc: Pending job allocation 1197343
salloc: job 1197343 queued and waiting for resources

<wait for node to become available>

salloc: Granted job allocation 1197343
salloc: Waiting for resource configuration
salloc: Nodes srv2 are ready for job
SSH into the node and build image
Suppose you have a definition file, for example for the hello world application "lolcow":
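A minimal definition file (myimage.def) for this example might look like the following sketch; the exact package list and base image are assumptions, adapt them to your own application:

```
Bootstrap: docker
From: ubuntu:18.04

%post
    # Install the programs that make up the "lolcow" hello world
    apt-get update
    apt-get install -y fortune cowsay lolcat

%environment
    # The installed programs live in /usr/games on Ubuntu
    export PATH=/usr/games:$PATH

%runscript
    fortune | cowsay | lolcat
```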
snellius paulm@int3 13:15 ~$ ssh srv2

# Build the container, NOTE THE NECESSARY --fakeroot ARGUMENT
snellius paulm@srv2 13:16 ~$ apptainer build --fakeroot myimage.sif myimage.def
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
INFO:    Using cached image
INFO:    Verifying bootstrap image /root/.singularity/cache/library/sha256.cec569332824fa2ed2c222d52c3923a26a6b3ef493ef7cf5f485401afd1c40a0
INFO:    Running setup scriptlet
...
INFO:    Creating SIF file...
INFO:    Build complete: myimage.sif
Test the image
When the container has been successfully built, test it to make sure no further build script changes and image rebuilds are needed (the latter are best done on the current cbuild node):
snellius paulm@srv2 13:20 ~$ apptainer shell myimage.sif
Singularity> cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Singularity> exit
When satisfied, close the connection to the cbuild node:
snellius paulm@srv2 13:20 ~$ exit
Connection to srv2 closed.

# And optionally cancel the cbuild allocation
snellius paulm@int3 13:20 ~$ scancel 1197343
salloc: Job allocation 1197343 has been revoked.
Using a SLURM batch job
The second option is to use a regular job script, containing the necessary commands to build the Singularity image:
snellius paulm@int3 13:42 ~/examples/singularity$ cat build.job
#!/bin/sh
#SBATCH -p cbuild
#SBATCH -t 2:00:00

# Explicitly set the temporary directory to use (see below)
export APPTAINER_TMPDIR=/tmp

# Note the necessary --fakeroot option
apptainer build --fakeroot -B/dev/shm:/tmp myimage.sif myimage.def

# Submit the job
snellius paulm@int3 13:42 ~$ sbatch build.job
Submitted batch job 1197730

# Wait for the batch job to finish...

# Check job output for errors
snellius paulm@int3 13:47 ~$ cat slurm-1197730.out
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
INFO:    Using cached image
INFO:    Verifying bootstrap image /root/.singularity/cache/library/sha256.cec569332824fa2ed2c222d52c3923a26a6b3ef493ef7cf5f485401afd1c40a0
INFO:    Running setup scriptlet
+ touch /tmp/paulmNRDo/build-temp-319223399/rootfs/file2
INFO:    Running post scriptlet
+ apt-get update
...
INFO:    Creating SIF file...
INFO:    Build complete: myimage.sif

snellius paulm@int3 13:47 ~$ ls -l myimage.sif
-rwxr-x--- 1 paulm paulm 42606592 Jul  5 13:43 myimage.sif
When the job successfully finishes it will have produced the Singularity image, which you can then use further.
Error with temporary files
As noted in the job script above, we need to specify a different directory for temporary files, as the default location on /scratch-local does not work for Singularity (more specifically, the issue is that /scratch-local is a network-mounted file system).
If you forget to set APPTAINER_TMPDIR (or SINGULARITY_TMPDIR when using the singularity command) correctly in the job script, you will get an error like this:
FATAL: While performing build: conveyor failed to get: while inserting base environment: build: failed to stat rootPath: stat /scratch-local/paulm.1197708/build-temp-929560850/rootfs: no such file or directory
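The fix is to point the temporary directory at a node-local file system before building; a minimal sketch (both variable names shown, use the one matching your command):

```shell
# Use node-local /tmp instead of the network-mounted /scratch-local
# default for build-time temporary files.
export APPTAINER_TMPDIR=/tmp
# The singularity command reads the older variable name:
export SINGULARITY_TMPDIR=/tmp

echo "$APPTAINER_TMPDIR"
```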
Get and run an image from Docker Hub
It is possible to convert a Docker container to the Singularity format. Make sure you use the cbuild partition on Snellius for this; otherwise you will run into errors related to missing privileges. For example, pull in an image from Docker Hub with
apptainer pull docker://godlovedc/lolcow
and then run it with
apptainer run lolcow.sif
To run a container interactively, pull in an image with e.g.
apptainer pull docker://fedora:latest
And then enter the container with
apptainer shell fedora_latest.sif
Or run a single command with
apptainer run fedora_latest.sif cat /etc/os-release
By default, your $HOME directory will be bound into the container. Changes made inside the container to bound files also take effect outside the container: deleting, creating, or modifying files in the container's home directory deletes, creates, or modifies them on the host as well.
Changes made inside the container to the image itself (not to the directories that are bound) will not be saved, and you will not have root rights inside the container. To make permanent adjustments, continue reading in the post hoc adjustments section below.
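If you want to control which host directories are visible inside the container, the -B/--bind option can be used. A sketch, assuming a hypothetical host directory /project/mydata:

```shell
# Bind a host directory to /data inside the container; --no-home
# additionally disables the default $HOME bind mount.
apptainer shell --no-home -B /project/mydata:/data fedora_latest.sif
```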
Build an image with a Singularity definition file
Convert from existing local Docker images
Docker is not installed or used on Snellius, but the following is possible when Docker and Singularity are installed on a local host. Your local Docker images can be listed with sudo docker images, where you can find the ID (IMAGE ID):
$ sudo docker images
Password:
REPOSITORY                           TAG      IMAGE ID       CREATED       SIZE
tensorflow/tensorflow                latest   0bb45d441a4b   6 days ago    1.15 GB
singularityware/docker2singularity   latest   9a621f249838   3 weeks ago   101 MB
asciinema2gif                        latest   386c8b5977de   3 weeks ago   56.2 MB
To convert the image you need to set up a local Docker registry
sudo docker run -d -p 5000:5000 --name registry registry:2
then tag the desired image and push it to the registry:
sudo docker image tag tensorflow/tensorflow localhost:5000/mytensorflow
sudo docker push localhost:5000/mytensorflow
Now you can use Singularity to pull the image from your private local registry. The registry has no encryption, so we must tell Singularity to ignore the lack of HTTPS by setting the environment variable SINGULARITY_NOHTTPS=true:
sudo SINGULARITY_NOHTTPS=true apptainer build mytensorflow.img docker://localhost:5000/mytensorflow
Stop the docker registry and clean up
sudo docker container stop registry && sudo docker container rm -v registry
Post hoc image adjustments
When you need to install additional software or change some settings, you can execute shell commands inside the image. Keep in mind that you will need root permissions for these operations, which you do not have on Snellius.
The first step is to convert the compressed image into an uncompressed folder structure (called a sandbox):
sudo apptainer build --sandbox sandbox ubuntu-latest.simg
To make changes to the image persistent, use the --writable option, e.g.
sudo apptainer shell --writable sandbox/
Exit the image by exiting the shell with the exit command.
After editing, compress the sandbox again to ensure portability and ease of use:
sudo apptainer build myubuntu.simg sandbox/
Fine-tune for NIKHEF systems
To ensure you can reach data in scratch on NIKHEF systems while working on the Grid, you need to create directories with the same names inside your image. This can be done with the following one-liner:
sudo apptainer exec --writable example.img mkdir -p /tmpdir /cvmfs
Upload your image to our systems
After the bootstrap has completed on your system and you have tested the image locally, it is time to move the image to one of the SURF systems. The various systems have different best practices on where to put your image.
From your local system, you can copy the image to Snellius or the Grid with scp; with the following command the image will be placed in your home directory:
scp example.img username@snellius.surf.nl:~/
Or while using the Grid distribute it via Softdrive (cvmfs):
scp example.img softdrive.grid.sara.nl:/cvmfs/softdrive.nl/<username>/.
And publish it with
ssh softdrive.grid.sara.nl publish-my-softdrive
Use your Singularity image
Snellius
First create a directory on the scratch-shared part of the scratch file system (note: there are of course many other ways to use your image; we just give one example).
SCRATCH_SHARED_DIR=$(mktemp -d -p /scratch-shared)
Copy the image to the newly created scratch shared directory.
cp /place/where/you/store/image.img ${SCRATCH_SHARED_DIR}
Go into the newly created directory
cd ${SCRATCH_SHARED_DIR}
Start an interactive container session (you need to be on a compute node in order to use Singularity).
apptainer shell --pwd $PWD ${SCRATCH_SHARED_DIR}/image.img
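The steps above can also be combined in a batch job. A sketch, assuming the image sits in your home directory; the partition name, time limit, and paths are examples to adapt to your own situation:

```shell
#!/bin/sh
#SBATCH -p rome
#SBATCH -t 1:00:00

# Stage the image on scratch-shared and run a command in the container
SCRATCH_SHARED_DIR=$(mktemp -d -p /scratch-shared)
cp $HOME/image.img ${SCRATCH_SHARED_DIR}
cd ${SCRATCH_SHARED_DIR}

apptainer exec --pwd $PWD ${SCRATCH_SHARED_DIR}/image.img cat /etc/os-release
```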
Grid
When you start a Grid job, you start by default in the scratch directory ($TMPDIR) and there is no need to switch to another directory. Images are automatically cached by the CVMFS file system, so there is also no need to copy them to scratch ($TMPDIR):
apptainer exec --pwd $PWD /cvmfs/softdrive.nl/<username>/example.img echo "hello world"
FAQ
How do I set the $PATH variable?
The $PATH variable is taken from the host environment. You can add a path to $PATH with
export PATH=/test/:$PATH
sudo apptainer exec example.img echo $PATH
To make the PATH persistent in your image, add the export PATH line to /environment inside the container.
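When building from a definition file, the same can be achieved in the %environment section; a sketch, where the /opt/mytool/bin path is a made-up example:

```
Bootstrap: docker
From: ubuntu:22.04

%environment
    # Prepend a hypothetical tool directory to PATH inside the container
    export PATH=/opt/mytool/bin:$PATH
```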
My image is too large, can I make it smaller?
This is most likely caused by the software that was needed to build your application: think of compilers, development headers, and source code. To make the image smaller, it is best to uninstall these packages, remove the source code, and compress the image again. Here is an example that works on an Ubuntu-based container.
Detect and remove the largest packages
Before we can edit the image we need to convert it to an editable sandbox format
sudo apptainer build --sandbox sandbox ubuntu-latest.simg
Then open a shell inside your sandbox with sudo rights:
sudo apptainer shell --writable sandbox
To detect the largest packages, we run the following one-liner, which prints package sizes sorted from small to large:
dpkg-query -W --showformat='${Installed-Size;10}\t${Package}\n' | sort -k1,1n
Then we select the large packages that are not needed at runtime. This depends on your software stack, but in general it is safe to remove gcc, clang, and *-dev packages.
For instance:
sudo apt remove gcc *-dev
Also, removing unneeded old packages and clearing the apt cache can help to clean the container.
sudo apt autoremove
sudo apt clean
Compress the image
After cleaning, the container can be converted back to a compressed format with
sudo apptainer build myubuntu-small.simg sandbox/
Why is the --pwd $PWD option necessary?
By default, Singularity makes the current working directory in the container the same as on the host. To resolve the current working directory, Singularity looks up its physical absolute path (see man pwd for more info). However, many directories on our systems are symbolic links, and the current working directory may therefore resolve differently than expected. The result is that your files are not where you expect them to be (combined with some warning messages). Specifying --pwd $PWD explicitly sets the working directory inside the container to the logical path you are in on the host.
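The difference between logical and physical paths can be seen with plain shell commands; the /tmp/pwd-demo paths below are made up for illustration:

```shell
# Create a real directory and a symbolic link pointing at it
mkdir -p /tmp/pwd-demo/real
ln -sfn /tmp/pwd-demo/real /tmp/pwd-demo/link

cd /tmp/pwd-demo/link
pwd      # logical path, keeps the symlink component
pwd -P   # physical path, with the symlink resolved
```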
I do not like Docker, is there an alternative?
Yes, there is! You can write a Singularity bootstrap file (or convert a Dockerfile to a Singularity bootstrap file).
A Singularity bootstrap file is a recipe to create a Singularity image (Singularity's counterpart of a Dockerfile).
Information about writing a Singularity bootstrap file can be found at https://www.sylabs.io/guides/3.0/user-guide/definition_files.html.
You can use the singularity build command to convert containers between the formats supported by Singularity.
When I use Singularity, I get an "LD_PRELOAD error". Does this affect my runs, and how can I avoid it?
This error is caused by the XALT tool we use to monitor software usage on the system. The error does not impact the correct execution of Singularity, but it is printed every time you run a command interactively within the container.
To stop Singularity from printing this error, unset the LD_PRELOAD variable outside the container with the command:
unset LD_PRELOAD
This will prevent XALT from tracking your usage, so please do this only if the error directly affects your work with Singularity.