Availability
Snellius
On Snellius, currently only Singularity is installed.
Lisa
On Lisa both Singularity and Apptainer are installed, with the former accessible through the `singularity` command and the latter through the `apptainer` command. Note that Apptainer contains a compatibility script called `run-singularity` that will actually call the `apptainer` command.
Versions
Apptainer (and Singularity) is under active development. SURF aims to keep the version of Singularity/Apptainer on our systems at the latest stable release. To keep our documentation up to date we give some general guidelines rather than in-depth version-specific information; the links below point to the upstream documentation where those details can be found.
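To see which runtime and version is currently available on a system, you can query the commands themselves; only the installed one(s) will respond:

```shell
# Check which container runtime is available and report its version;
# commands that are not installed fall through to the message
(command -v singularity >/dev/null 2>&1 && singularity --version) || echo "singularity not found"
(command -v apptainer >/dev/null 2>&1 && apptainer --version) || echo "apptainer not found"
```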
Warning: We do not guarantee backward compatibility when Singularity/Apptainer is upgraded.
Requirements
Using a Singularity/Apptainer container involves two steps:
- Preparing the container image, by downloading or building it.
- Running the container on a host system.
In order to build your own container images you will need to have Singularity/Apptainer installed on a Linux system where you have root (i.e. superuser) rights. Note that building the container image therefore isn't possible on our systems, as regular users do not have root access. However, on Snellius the `cbuild` partition can be used instead; see below.
Once you have a Singularity/Apptainer container image you can then run it on Snellius, Lisa or the Grid system. You will need regular SSH access to one of these systems in order to do so.
Building the container
Using your local installation of Singularity/Apptainer
You can use Singularity/Apptainer on your local Linux system to build a container image. How you install Singularity/Apptainer locally depends on the Linux distribution you're using. For example, on Fedora you can install Singularity with the command `sudo dnf install singularity`, and on Arch Linux with `sudo pacman -S singularity` (with similar commands for installing Apptainer). Ubuntu does not have Singularity included in the main software repositories. For manual builds of the Singularity or Apptainer software (note: the software, not the container!), check the installation instructions in the respective Singularity and Apptainer documentation.
You can install Singularity/Apptainer on your own machine, or in a virtual machine, either on your local computer, on the SURF HPC Cloud, or at another cloud resource provider. Administrative privileges are necessary to install the software.
Container build nodes (Snellius)
On Snellius we have a Singularity build partition called `cbuild`. The nodes in this partition are configured to allow building Singularity containers without root privileges. There are two ways to build a Singularity container using a `cbuild` node; we describe both below.
Using interactive SSH access
Use the `salloc` command to reserve a `cbuild` node, and then SSH into that node. You can then interactively run the Singularity/Apptainer commands you need to build the container image, specifying the `--fakeroot` option:
```
# Reserve a node in the cbuild partition for 2 hours
snellius paulm@int3 13:14 ~$ salloc -p cbuild -t 2:00:00
salloc: Pending job allocation 1197343
salloc: job 1197343 queued and waiting for resources
<wait for node to become available>
salloc: Granted job allocation 1197343
salloc: Waiting for resource configuration
salloc: Nodes srv2 are ready for job

# We got assigned node srv2, SSH into it
snellius paulm@int3 13:15 ~$ ssh srv2
paulm@srv2's password:
<...>

# Build the container, NOTE THE NECESSARY --fakeroot ARGUMENT
snellius paulm@srv2 13:16 ~$ singularity build --fakeroot myimage.sif buildscript
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
INFO:    Using cached image
INFO:    Verifying bootstrap image /root/.singularity/cache/library/sha256.cec569332824fa2ed2c222d52c3923a26a6b3ef493ef7cf5f485401afd1c40a0
INFO:    Running setup scriptlet
...
INFO:    Creating SIF file...
INFO:    Build complete: myimage.sif

# When the container was successfully built, test it to make sure
# no more build script changes and image rebuilds are needed
# (the latter would best be done on the current cbuild node)
snellius paulm@srv2 13:20 ~$ singularity shell myimage.sif
Singularity> cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04.1 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.1 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
Singularity> exit

# When satisfied, close the connection to the cbuild node
snellius paulm@srv2 13:20 ~$ ^D
Connection to srv2 closed.

# And optionally cancel the cbuild allocation
snellius paulm@int3 13:20 ~$ scancel 1197343
salloc: Job allocation 1197343 has been revoked.

# You can now use the container image
snellius paulm@int3 13:21 ~$
```
Using a SLURM batch job
The second option is to use a regular job script, containing the necessary commands to build the Singularity image:
```
snellius paulm@int3 13:42 ~/examples/singularity$ cat build.job
#!/bin/sh
#SBATCH -p cbuild
#SBATCH -t 2:00:00

# Explicitly set the temporary directory to use (see below)
export SINGULARITY_TMPDIR=$(mktemp -d /tmp/paulmXXXX)

# Note the necessary --fakeroot option
singularity build --fakeroot myimage.sif myimage.def

# Submit the job
snellius paulm@int3 13:42 ~$ sbatch build.job
Submitted batch job 1197730

# Wait for the batch job to finish...

# Check job output for errors
snellius paulm@int3 13:47 ~$ cat slurm-1197730.out
WARNING: 'nodev' mount option set on /tmp, it could be a source of failure during build process
INFO:    Starting build...
INFO:    Using cached image
INFO:    Verifying bootstrap image /root/.singularity/cache/library/sha256.cec569332824fa2ed2c222d52c3923a26a6b3ef493ef7cf5f485401afd1c40a0
INFO:    Running setup scriptlet
+ touch /tmp/paulmNRDo/build-temp-319223399/rootfs/file2
INFO:    Running post scriptlet
+ apt-get update
...
INFO:    Creating SIF file...
INFO:    Build complete: myimage.sif

snellius paulm@int3 13:47 ~$ ls -l myimage.sif
-rwxr-x--- 1 paulm paulm 42606592 Jul  5 13:43 myimage.sif
```
When the job successfully finishes it will have produced the Singularity image, which you can then use further.
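The job script above builds from a definition file (`myimage.def`), which is not shown in the original example. For illustration only, a minimal definition file could look like the following sketch; the base image and installed package are arbitrary examples:

```
Bootstrap: docker
From: ubuntu:22.04

%post
    # Commands run inside the image at build time (example package)
    apt-get update
    apt-get install -y --no-install-recommends python3

%runscript
    # Command executed by 'singularity run'
    echo "Hello from the container"
```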
Warning: As noted in the job script above, you need to point SINGULARITY_TMPDIR to a different directory for temporary files, as the default location on the build nodes is not suitable for container builds. If you forget to set the variable, the build may fail.
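A common pattern, sketched here under the assumption that you can write to /tmp on the node, is to create a uniquely named directory with `mktemp` and point SINGULARITY_TMPDIR at it:

```shell
# Create a private, uniquely named temporary directory for the build
export SINGULARITY_TMPDIR=$(mktemp -d /tmp/build-XXXXXX)
echo "Using temporary directory: $SINGULARITY_TMPDIR"

# ... run 'singularity build --fakeroot ...' here ...

# Clean up when the build is done
rm -rf "$SINGULARITY_TMPDIR"
```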
Get and run an image from Docker Hub
It is possible to convert a Docker container image to the Singularity format; make sure you use the `cbuild` partition on Snellius for this, otherwise you will run into errors related to missing privileges. For example, pull an image from Docker Hub with:
```
singularity pull docker://godlovedc/lolcow
```
and then run it with:

```
singularity run lolcow_latest.sif
```
To run a container interactively, pull an image with e.g.:

```
singularity pull docker://fedora:latest
```
And then enter the container with:

```
singularity shell fedora_latest.sif
```
Or run a single command with:

```
singularity exec fedora_latest.sif cat /etc/os-release
```
By default, your $HOME directory is bind-mounted into the container. Changes made inside the container to files in bound directories also take effect outside the container: deleting, creating, or modifying files in your home directory from within the container also affects the files on the host itself.
Changes made to the image itself (as opposed to the bound directories) will not be saved, and you will not have root rights inside the container. To make permanent adjustments, continue reading in the post hoc adjustments section below.
Build an image with a Singularity definition file
Convert from existing local Docker images
Docker is not installed or used on Snellius and Lisa, but the following is possible when Docker and Singularity are both installed on a local host. The local Docker images can be shown with `sudo docker images`, where you can find the ID (IMAGE ID):
```
$ sudo docker images
Password:
REPOSITORY                           TAG      IMAGE ID       CREATED       SIZE
tensorflow/tensorflow                latest   0bb45d441a4b   6 days ago    1.15 GB
singularityware/docker2singularity   latest   9a621f249838   3 weeks ago   101 MB
asciinema2gif                        latest   386c8b5977de   3 weeks ago   56.2 MB
```
To convert the image you need to set up a local Docker registry:

```
sudo docker run -d -p 5000:5000 --name registry registry:2
```
tag the wanted image and push it to the registry:

```
sudo docker image tag tensorflow/tensorflow localhost:5000/mytensorflow
sudo docker push localhost:5000/mytensorflow
```
Now you can use Singularity to pull the image from your private local registry. The registry has no encryption, so we must tell Singularity to ignore the lack of HTTPS by setting the environment variable SINGULARITY_NOHTTPS=true:

```
sudo SINGULARITY_NOHTTPS=true singularity build mytensorflow.img docker://localhost:5000/mytensorflow
```
Stop the Docker registry and clean up:

```
sudo docker container stop registry && sudo docker container rm -v registry
```
Post hoc image adjustments
When you need to install additional software or change some settings, you can execute shell commands inside the image.
Warning: Keep in mind that you will need root permissions for these operations, which you do not have on Snellius/Lisa.
The first step is to convert the compressed image into an uncompressed folder structure (called a sandbox):

```
sudo singularity build --sandbox sandbox ubuntu-latest.simg
```
To make changes to the image persistent, use the `--writable` option, e.g.:

```
sudo singularity shell --writable sandbox/
```
You can leave the image by exiting the shell with the `exit` command.
After editing, compress the sandbox back into an image to ensure portability and ease of use:

```
sudo singularity build myubuntu.simg sandbox/
```
Fine-tune for NIKHEF systems
To ensure you can reach data in scratch on NIKHEF systems while working on the Grid, you need to create directories with the same names inside your image. This can be done with the following one-liner:

```
sudo singularity exec --writable example.img mkdir -p /tmpdir /cvmfs
```
Upload your image to our systems
After the bootstrap has completed on your system and the image has been tested locally, it is time to move the image to one of the SURF systems. The various systems have different best practices on where to put your image.
From your local system, you can use `scp` to copy the image to Snellius, Lisa, or Grid. With the following commands, the image will be placed in your home directory:
```
scp example.img username@snellius.surf.nl:~/
```
```
scp example.img username@lisa.surfsara.nl:~/
```
Or, when using the Grid, distribute it via Softdrive (CVMFS):

```
scp example.img softdrive.grid.sara.nl:/cvmfs/softdrive.nl/<username>/.
```
and publish it with:

```
ssh softdrive.grid.sara.nl publish-my-softdrive
```
Use your Singularity image
Snellius
First create a directory on the scratch-shared part of the scratch file system (note: there are of course many other ways to use your image; we just give one example):

```
SCRATCH_SHARED_DIR=$(mktemp -d -p /scratch-shared)
```
Copy the image to the newly created scratch-shared directory:

```
cp /place/where/you/store/image.img ${SCRATCH_SHARED_DIR}
```
Go into the newly created directory:

```
cd ${SCRATCH_SHARED_DIR}
```
Start an interactive container session (you need to be on a compute node in order to use Singularity):

```
singularity shell --pwd $PWD ${SCRATCH_SHARED_DIR}/image.img
```
Lisa
Jump into the scratch directory:

```
cd $TMPDIR
```
Copy your image to the scratch space and run a command in the container:

```
cp ~/example.img .
singularity exec --pwd $PWD example.img echo "hello world"
```
Grid
When you start a job, you start by default in the scratch directory (`$TMPDIR`), so there is no need to switch to another directory. Images are automatically cached by the CVMFS file system, so there is also no need to copy them to scratch (`$TMPDIR`):
```
singularity exec --pwd $PWD /cvmfs/softdrive.nl/<username>/example.img echo "hello world"
```
FAQ
How do I set the `$PATH` variable?
The `$PATH` variable is taken from the host environment. You can add a path to `$PATH` with:

```
export PATH=/test/:$PATH
sudo singularity exec example.img sh -c 'echo $PATH'
```

(Note the single quotes: without them, `$PATH` would be expanded by the host shell before the container starts.)
To make the `PATH` persistent in your image, add the `export PATH` line to /environment inside the container.
My image is too large, can I make it smaller?
This is most likely caused by the software that was needed to build your application: think of compilers, development headers, and source code. To make the image smaller, it is best to uninstall these packages, remove the source code, and then compress the image again. Below is an example that works on an Ubuntu-based container.
Detect and remove the largest packages
Before we can edit the image, we need to convert it to an editable sandbox format:

```
sudo singularity build --sandbox sandbox ubuntu-latest.simg
```
Then start a shell inside your sandbox with sudo rights:

```
sudo singularity shell --writable sandbox
```
To detect the largest packages, we run the following one-liner, which prints package sizes and sorts them from small to large:

```
dpkg-query -W --showformat='${Installed-Size;10}\t${Package}\n' | sort -k1,1n
```
Then we select the big packages that are not needed at runtime. This depends on your software stack, but in general it is safe to remove gcc, clang, and *-dev packages. For instance:

```
sudo apt remove gcc *-dev
```
Removing unneeded old packages and clearing the apt cache might also help to clean the container:

```
sudo apt autoremove
sudo apt clean
```
Compress the image
After cleaning, the container can be converted back to a compressed format with:

```
sudo singularity build myubuntu-small.simg sandbox/
```
Why is the `--pwd $PWD` option necessary?
By default, Singularity makes the current working directory in the container the same as on the host. When resolving the current working directory, Singularity looks up the physical absolute path (see `man pwd` for more information). However, many directories on our systems are symbolic links, so the current working directory may resolve differently than expected, and your files end up not being where you expect them to be (combined with some warning messages). Passing `--pwd $PWD` makes the container start in the logical working directory as your shell sees it.
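The difference between logical and physical paths can be seen with plain shell commands, independent of Singularity (the /tmp paths below are just an illustration):

```shell
# Create a real directory and a symbolic link pointing to it
mkdir -p /tmp/real_dir
ln -sfn /tmp/real_dir /tmp/link_dir

cd /tmp/link_dir
pwd       # logical path, as tracked by the shell: /tmp/link_dir
pwd -P    # physical path, with symlinks resolved: /tmp/real_dir
```

`--pwd $PWD` passes the logical path (the shell's `$PWD`) to the container, avoiding the unexpected physical resolution.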
I do not like Docker, is there an alternative?
Yes, there is! You can write a Singularity bootstrap (definition) file, or convert a Dockerfile to one.
A Singularity definition file is a recipe to create a Singularity image: Singularity's counterpart of a Dockerfile.
Information about writing a Singularity definition file can be found at https://www.sylabs.io/guides/3.0/user-guide/definition_files.html.
You can use the `singularity build` command to convert containers between the formats supported by Singularity.
When I use Singularity, I get an "LD_PRELOAD error". Is this affecting my runs and how can I avoid it?
This error is caused by the XALT tool we use to monitor software usage on the system. The error does not impact the correct execution of Singularity, but it is printed every time you run a command interactively within the container.
To stop Singularity from printing this error, unset the LD_PRELOAD variable outside the container with the command:

```
unset LD_PRELOAD
```
This will prevent XALT from tracking your usage, so please use this only if the error directly affects your work with Singularity.
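If you would rather keep XALT monitoring enabled for the rest of your session, an alternative sketch (using the standard `env` utility) is to drop the variable only for the Singularity invocation, e.g. `env -u LD_PRELOAD singularity shell myimage.sif`. The demonstration below uses `sh` in place of `singularity` to show that the variable is removed for that one command only:

```shell
# The variable is set only for this command line; 'env -u' then removes
# it from the environment of the child command (sh, standing in for singularity)
FOO=preloaded env -u FOO sh -c 'echo "${FOO:-unset}"'
# prints: unset
```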