WARNING: mixing the EESSI software environment and our local software environments (from e.g. the 2022, 2023 and 2024 modules) can lead to unpredictable results and is not recommended! If you want to use modules from e.g. the local software environment after using those from the EESSI software environment, run module purge first, before loading the local modules.
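For example, a clean switch back from EESSI to the local software environment could look like this (the local module names below are illustrative; pick the environment and modules you actually need):

```shell
# Done with EESSI: clear all loaded modules, including the EESSI module itself
module purge

# Now load modules from the local software environment again
module load 2023
module load Python/3.11.3-GCCcore-12.3.0
```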

EESSI, short for the European Environment for Scientific Software Installations, is a collaborative project between different European partners in the HPC community to build a common stack of scientific software installations for HPC systems and beyond. Through the EESSI project, a shared stack of scientific software installations is distributed via CernVM-FS (CVMFS).

The official website of EESSI can be found here. To get an overview of all the available software in EESSI per specific CPU target, please check EESSI available software. 

At SURF, the EESSI software environment is available on SURF Research Cloud (RC component EESSI Client), Spider (https://doc.spider.surfsara.nl/en/latest/Pages/software/eessi.html) and Snellius. This makes it easy to move scientific workflows from one of these systems to another, since you'll have the same (EESSI) software stack everywhere. Note that it is also possible to run EESSI on your local machine (natively, or through the EESSI container). If you use Linux, you can follow the documented steps. Windows users can follow these instructions to use it within WSL (the Windows Subsystem for Linux), and macOS users can follow these instructions to run it within Lima (which allows you to run a Linux virtual machine). This gives you the unique opportunity to use the same software stack on your local machine as you do on Snellius.

This page describes the use of the EESSI software environment on Snellius, though its use on Spider is very similar.

Basic usage

To access the EESSI software stack, you can load the EESSI module (with the desired version; at the time of writing, the newest is 2023.06):

module load EESSI/2023.06

This should print

EESSI/2023.06 loaded successfully

Verbose output on detected CPU architecture

If you want more verbose output, you can set the EESSI_DEBUG_INIT variable when loading the module:

$ EESSI_DEBUG_INIT=1 module load EESSI/2023.06
Got archdetect CPU options: x86_64/amd/zen2:x86_64/generic
Selected archdetect CPU: x86_64/amd/zen2
Got archdetect accel option:
Setting EPREFIX to /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64
Setting EESSI_CPU_FAMILY to x86_64
Setting EESSI_SITE_SOFTWARE_PATH to /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2
Setting EESSI_SITE_MODULEPATH to /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2/modules/all
Setting EESSI_SOFTWARE_SUBDIR to x86_64/amd/zen2
Setting EESSI_PREFIX to /cvmfs/software.eessi.io/versions/2023.06
Setting EPREFIX to /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64
Adding /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/bin to PATH
Adding /cvmfs/software.eessi.io/versions/2023.06/compat/linux/x86_64/usr/bin to PATH
Setting EESSI_SOFTWARE_PATH to /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2
Setting EESSI_MODULEPATH to /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all
Adding /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/modules/all to MODULEPATH
Adding /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod/lmodrc.lua to LMOD_RC
Setting LMOD_PACKAGE_PATH to /cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/.lmod
Adding /cvmfs/software.eessi.io/host_injections/2023.06/software/linux/x86_64/amd/zen2/modules/all to MODULEPATH
EESSI/2023.06 loaded successfully

This will show you exactly which CPU architecture was detected, and thus which optimized copy of the software stack will be loaded.
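Alternatively, after loading the EESSI module you can inspect the environment variables it sets (these are the same variables shown in the verbose output above):

```shell
# Which optimized software subdirectory was selected for this node,
# e.g. x86_64/amd/zen2 on an AMD Rome node
echo $EESSI_SOFTWARE_SUBDIR
```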

Checking available software

You can run the regular module commands to explore the EESSI software environment, such as 

module avail

and

module spider

For more module commands, see Environment Modules.
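For example, to look for a specific package in the EESSI module tree (the exact versions shown will depend on the EESSI version you loaded):

```shell
# Search for all available versions of a package
module spider Python

# Show how to load a specific version, including any prerequisite modules
module spider Python/3.11.5-GCCcore-13.2.0
```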

EESSI aims to make each module available for every compute architecture (CPU and GPU) it supports. However, that is not always possible (e.g. if the software itself doesn't support a given architecture). You can view the full list of available software per architecture on https://www.eessi.io/docs/available_software/overview/.

Using EESSI in a batch script

If you want to use the EESSI software environment in a batch job, you'll have to include the initialization in the batch job as well. For example, to use the Python module from the EESSI software environment:

#!/bin/bash
#SBATCH -n 1
#SBATCH -c 16
#SBATCH -p rome
#SBATCH -t 5:00

# Initialize the EESSI software environment
module load EESSI/2023.06

# load the modules in EESSI
module load Python/3.11.5-GCCcore-13.2.0

# You can check that EESSI is used:
var=$(which python3)
echo "I am using the python command from " $var

python3 --version
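The which check in the job script above should resolve to a path under /cvmfs/software.eessi.io. A minimal sketch of that check (the path below is an illustrative example of what which python3 returns on a Rome node; in a real job you would set p=$(which python3) instead):

```shell
# Illustrative: verify that a resolved command path comes from the EESSI CVMFS tree
p="/cvmfs/software.eessi.io/versions/2023.06/software/linux/x86_64/amd/zen2/software/Python/3.11.5-GCCcore-13.2.0/bin/python3"

case "$p" in
  /cvmfs/software.eessi.io/*) echo "python3 is provided by EESSI" ;;
  *) echo "python3 comes from elsewhere: $p" ;;
esac
```

If the path points to /usr/bin or a local software directory instead, the EESSI module was not loaded (or was shadowed by another module).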

GPU support

To have (full) GPU support with EESSI, a hosting site needs to do some configuration (see https://www.eessi.io/docs/site_specific_config/gpu/). For example, it needs to configure the location of the GPU drivers on the host, as well as locally install the libraries from the CUDA SDK (e.g. CUDA, cuDNN) that are not allowed to be redistributed because of license constraints. Note that the local install of CUDA SDK components is only needed if you, as an end-user, want to compile new CUDA software. It is not needed for using CUDA-enabled modules from the EESSI software stack itself.

As a test of the GPU support, you can try to run the job

#!/bin/bash
#SBATCH -n 2
#SBATCH --ntasks-per-node 2
#SBATCH --gpus-per-node 2
#SBATCH -c 1
#SBATCH -p gpu_a100
#SBATCH -t 5:00

# Initialize the EESSI software environment
module load EESSI/2023.06

# Load the OSU micro benchmarks, which contains a GPU <=> GPU benchmark
module load OSU-Micro-Benchmarks/7.2-gompi-2023a-CUDA-12.1.1

# Run the latency benchmark between two GPUs
mpirun -np 2 osu_latency -d cuda D D

# Run a bandwidth benchmark between two GPUs
mpirun -np 2 osu_bw -d cuda D D

where you can, of course, alter which version of EESSI / OSU-Micro-Benchmarks is used. This should show a latency of 1.5-2.5 microseconds between two GPUs (for the smallest message size), and a bandwidth of around 85 GB/s (for the largest message size), indicating that the NVLink connections between the GPUs are being used correctly.

Note: as newer CUDA versions are shipped in EESSI, we need to install those versions locally as well. If you get errors when trying to load the CUDA or cuDNN module from the EESSI software stack, please let us know. This usually means we need to re-run the configuration script to install these additional CUDA and cuDNN versions.

EESSI repositories

EESSI's main repository is software.eessi.io. However, there are additional repositories that serve a specific purpose. For example, dev.eessi.io facilitates selected developers in deploying development builds of their software, and riscv.eessi.io is a repository for testing software on the RISC-V CPU architecture. You can find the available repositories in the EESSI documentation.

Adding software to the EESSI software stack

While EESSI offers a lot of software packages, it might not have the specific one you need. Since EESSI is a community project, you can make your own contribution to add software to the EESSI software stack. See the EESSI documentation on how to contribute.

Installing software on top of the EESSI software stack in your home folder (with EasyBuild)

Just like you can build on top of our local module stack (with eblocalinstall, see EasyBuild tutorial), you can also (locally) install additional modules on top of the EESSI software environment. See the EESSI documentation on how to do that.
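A rough sketch of what this looks like, based on the EESSI documentation (the EESSI-extend module version and the easyconfig name below are illustrative assumptions; check the EESSI documentation for the current procedure):

```shell
# Load the EESSI-extend module, which configures EasyBuild to install
# into a directory of your own, on top of the EESSI software stack
module load EESSI-extend/2023.06-easybuild

# Install an additional package (illustrative easyconfig name)
eb --robot SomePackage-1.0-GCC-13.2.0.eb
```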

Getting support

There are a few places where you can get help if you run into trouble with the EESSI software stack.

Since EESSI is a community project, we suggest you first try to get support through the EESSI support portal, unless you think your issue is specific to the deployment of EESSI on Snellius (e.g. you know that the same software from EESSI works on another system).


