This section explains how to access and make use of the OSSC secure environment. The content of this section assumes that you already have an account on the CBS RA environment and know how to connect to it.


Entering the secure HPC environment

The OSSC environment cannot be accessed directly. All connections to the OSSC go through the CBS Remote Access environment.

There are two steps to log in to the OSSC:

  1. Log in to the CBS RA environment: see the instructions on the CBS website.
  2. Log in to the OSSC from the RA environment: in the RA environment, connect to the OSSC at SURF by clicking on the application “Putty naar SURF”.


Access to data within the secure environment

Transfer data to OSSC

Upload CBS data from CBS RA to OSSC 

Data can be transferred from CBS RA to the OSSC using WinSCP in the RA environment. The transfer of the data is done by the users themselves. For support, you can contact Microdata@cbs.nl.

Upload external research data to OSSC

Transfer data from outside RA to the OSSC

  • If all files are smaller than 1 GB, upload them to CBS RA via the CBS upload portal (see the instructions for importing your own dataset in CBS RA). Then transfer them from RA to the OSSC with WinSCP.

  • If the files are between 1 GB and 5 GB, first contact CBS RA (microdata@cbs.nl); they can increase the upload limit to a maximum of 5 GB. You can then still use the upload portal to upload your files to CBS RA and from there to the OSSC.
  • Files larger than 5 GB are too big for the CBS upload portal, and the upload is done to SURF directly. For more information, please contact us through our Servicedesk portal, using the keyword "OSSC" in the summary of the ticket and selecting "ODISSEI Secure Supercomputer" as the service.

Upload an external confidential/sensitive dataset into the OSSC: if your data has confidentiality requirements such that CBS personnel should not have access to the data at all, the data needs to be transferred into the OSSC without transiting through RA. To make sure that the true data are not accessible to CBS, and that the identities behind CBS Microdata remain hidden from both SURF data managers and researchers, a so-called Trusted Third Party procedure will be applied before the data are made available in the OSSC. Please contact us through our Servicedesk portal for more information about the Trusted Third Party procedure, using "ODISSEI Secure Supercomputer" as the service.

Transfer data from OSSC 

From the OSSC, data can only be transferred to the relevant project folder in the CBS RA environment. Data and output can be transferred from the OSSC back to CBS RA using WinSCP within the RA environment. From there, CBS rules apply to moving data out of CBS RA (see CBS export of information).

Data in the OSSC are not backed up

For security reasons, all data in the OSSC are stored in a filesystem isolated from the rest of the supercomputer. As a consequence, it is currently not possible to back up user data on another filesystem or storage facility at SURF.

For users, this means that a hardware failure of the filesystem could potentially cause data to be lost.

We strongly recommend that users regularly transfer the output of their runs on the OSSC to CBS RA for backup, to mitigate the risk of data loss.





Prepare and run your simulations

The OSSC platform is based on customisable virtualised clusters that are deployed on the existing supercomputer Snellius. The virtual clusters are provisioned using PCOCC (Private Cloud on a Compute Cluster), through which we can host clusters of virtual machines on the existing compute nodes of Snellius.

The size, node type, and duration of each virtual cluster can be customised by the user, who can choose among the compute nodes available on Snellius (see the description here). The allocated resources will be available exclusively to the user for the duration of the request (max 5 days), and users can submit HPC jobs to the secure virtual cluster using the SLURM scheduler.

In this section we describe the main differences between the OSSC environment and Snellius, how to set up the virtual cluster, and how to run your simulations on the HPC resources.

What are the differences between Snellius and the OSSC environment?

When a user connects to the OSSC environment (see Entering the secure HPC environment), they land on a dedicated Snellius node (8 CPUs, AMD EPYC 7F32) which serves as the secure work environment. This is meant primarily for job preparation, compilation of software, and light input preparation and analysis. From here the user can also submit jobs to the secure virtual cluster, the nodes dedicated to the computationally intensive tasks. The fixed cost for keeping the work environment up and running is 70,080 SBUs per year.

In the OSSC environment, the user is offered an interface very similar to the one on Snellius: the user interacts with the system through a command line interface (CLI) and can use the SLURM scheduler to submit and manage jobs. A detailed description of how to use the Snellius system can be found here.

However, because the OSSC runs within a secure virtualised environment, there are some differences from the regular user experience on Snellius:

  • The secure virtual cluster is not accessible by default, but needs to be requested by the user in advance (see Set up the compute environment below). Only when the reservation is active can the user access the compute nodes from within the work environment and submit jobs.

  • Any display protocol is blocked from the OSSC. This means that it is not possible to use any graphical interface on the work or compute environment.

  • It is not possible to copy and paste text from the OSSC terminal.

  • The OSSC is isolated from the internet, and therefore it is not possible to download files or access external databases. Any file needed must be transferred through the CBS RA environment (see Access to data within the secure environment).

  • The filesystem in the OSSC environment is different from the one available on Snellius (see the Snellius filesystems description). The main difference for users is that the scratch space is not accessible, but the home directory in the OSSC is mounted on the GPFS filesystem, which provides high performance in both scalability and storage capacity and can be used for I/O-intensive jobs. There is thus no distinction between home directory and project space: all users in a project share one project space, and the only quota that applies is the total quota for the project.
    The default quota for the project space is 1 TB.
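For example, you can check how much of the shared project space is in use with standard Linux tools from the work environment (a minimal sketch; a dedicated quota-reporting command, if any, may differ on your system):

# Show the total size of your (shared) home/project directory
$ du -sh $HOME

# Show usage of the filesystem the project space is mounted on
$ df -h $HOME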

The OSSC does NOT make regular backups of your data; therefore, any file lost within the environment cannot be retrieved by the system administrator.


Set up the compute environment

As described above, the secure virtual environment is not available by default within the OSSC environment, but can be requested on demand by the users.

The user can reserve compute nodes (any type of compute node available on Snellius) for a minimum of 1 day up to a maximum of 5 days. A complete description of the types of compute nodes available on the Snellius system can be found here.
The reservation request must be submitted at least 5 business days before the start date, to allow the requested resources to be freed without impacting the work of other users on the system.

At the moment it is not possible to combine different node types in the same compute environment.


To request the reservation of computational resources, users need to fill in the form that can be downloaded below and send it to our Service Desk, selecting "ODISSEI Secure Supercomputer" as the type of service.

The cost for the whole duration of the reservation is deducted from the project allocation (see Obtain an allocation on the system), but the jobs run within the reservation do not count against the total budget. For example, a request for one compute node on Snellius (128 cores) for 5 days will cost 15,360 core-hours (128 cores × 24 hours × 5 days), independently of the number of jobs run.
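This arithmetic can be reproduced directly in the shell (a minimal sketch):

# Reservation cost in core-hours: cores * 24 hours * number of days
$ echo $((128 * 24 * 5))
15360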


Running jobs with SLURM

In the OSSC environment, you interact with the Snellius system through a command line interface (CLI). This is the only interface available, and it is important that you are familiar with essential Linux commands to be able to use the system. If you are not familiar with the Linux command line, we suggest you look into a UNIX tutorial. The UNIX Tutorial for Beginners is a good starting point; if you want to become proficient with bash, our default login shell, try the Advanced Bash-Scripting Guide. Within the OSSC you can use the SLURM scheduler to submit, manage, and control your batch jobs on the virtual cluster. All the information needed to get started with SLURM on Snellius can be found in the Snellius user's guide.

SURF organises training courses on how to use Snellius efficiently (see our webinar "Introduction Supercomputing"). We advise you to follow one of the regular courses if you have no experience with Linux or with working on an HPC system.

In the OSSC environment, just as on Snellius, you can use SLURM to check the available compute nodes. The sinfo command shows information about the partitions and the number of nodes available.

$ sinfo 
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
work_env*    up 5-00:00:00      1   idle ossc9999vm0
comp_env     up 5-00:00:00      2   idle [ossc9999vm1,ossc9999vm2]

As you can see from the example above, in the OSSC environment there are two accessible partitions:

  • the work_env
    partition, which includes only the secure work environment (4 CPUs) and is meant for testing small batch jobs (see the example after this list);

  • the comp_env
    partition, which contains all the compute nodes within the reservation (2 nodes in the example above).
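For example, a quick single-task test can be run on the work environment directly with srun (a minimal sketch; the node name shown follows the sinfo example above):

# Run one task interactively on the work environment partition
$ srun -p work_env -n 1 hostname
ossc9999vm0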

As in a regular batch system, you can run jobs on the compute nodes using a job script which contains the specifications of the job requirements (number of nodes, expected runtime, etc.) and the commands you want to execute on the nodes.

The example below shows a typical job script for a 1-hour job running 256 tasks on 2 nodes (on the "comp_env" partition). In the example, the SLURM srun command is used to execute the parallel application on the compute nodes.

#!/bin/bash
#SBATCH -N 2          # number of nodes
#SBATCH -n 256        # total number of tasks
#SBATCH -p comp_env   # partition: the secure virtual cluster
#SBATCH -t 1:00:00    # expected runtime (hh:mm:ss)
#SBATCH -J test       # job name

module purge
module load 2020
module load intel/2020a

cd $HOME/workdir

srun my-mpi-app.x

Once saved to a file, the job can then be submitted using the sbatch command:

$ sbatch jobscript
Submitted batch job 13
$

This will assign a unique "jobid" to the job (13 in the example above) and execute the commands specified in the script on the requested resources.

When you submit your job to the queue, please make sure your job will complete before the expiration time of the reservation. When the reservation ends, the comp_env partition will be detached from the OSSC environment and any running job will be terminated.
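To avoid hitting the end of the reservation, it can help to check the time limit of the compute partition before choosing the -t value of a job (a minimal sketch; the output follows the sinfo example above):

# Show the time limit of the compute partition
$ sinfo -p comp_env -o "%P %l"
PARTITION TIMELIMIT
comp_env 5-00:00:00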


You can then check the status of your jobs in the queue with the squeue command:

$ squeue
             JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
                13  comp_env jobscrip ossc9999  R       0:01      2 [ossc9999vm1,ossc9999vm2]

and if needed cancel the job with the scancel command and the jobid of the target run:

$ scancel 13

More information and additional functionality of the SLURM scheduler can be found in the online manual as well as in our Snellius user guide.

One difference between the OSSC and Snellius, which can impact how users run jobs on the system, is that the TMPDIR and SCRATCHDIR variables are not set by default. If needed, users can create their own user-managed temporary directory, e.g. in $HOME/tmp, and define TMPDIR in their .bashrc, as shown below.
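A minimal sketch of setting up such a user-managed temporary directory:

# Create a personal temporary directory once
$ mkdir -p $HOME/tmp

# Define TMPDIR for all future sessions via .bashrc
$ echo 'export TMPDIR=$HOME/tmp' >> $HOME/.bashrc
$ source $HOME/.bashrc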


How to use software and tools in OSSC

In the OSSC, just as on Snellius, software is managed using modules. Modules provide an easy mechanism for updating a user’s environment, notably the PATH, MANPATH, CPATH, and LD_LIBRARY_PATH environment variables. The advantage of the modules approach is that the user is no longer required to explicitly specify paths for different software versions, nor to keep the related environment variables coordinated. With the modules approach, users simply load and unload modules to control their environment.
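For example, a typical module workflow looks like this (a minimal sketch; the module names shown are the ones used in the job script example above, and what is available depends on your environment):

# List the software modules available on the system
$ module avail

# Start from a clean environment, then load a toolchain
$ module purge
$ module load 2020
$ module load intel/2020a

# Show which modules are currently loaded
$ module list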

For more information on the module environment on Snellius and the different software already installed on the system, consult our documentation on the software available on Snellius.

Because of the secure environment in which OSSC runs, some of the applications installed on Snellius may not work. If you have any request for additional software that is not available on the system, fill in the form below and send it to our Service Desk using "ODISSEI Secure Supercomputer" as service.



Project termination and removal of data

Two weeks before the end of the project, you will receive a reminder of the end date by email.

All data you wish to keep after the end of the project must be transferred from the OSSC to the folder “FromOSSC” in CBS RA using WinSCP.

At 23:59 on the end date of your project, your login and your virtual environment on the OSSC will be disabled.

After the end of the project, and after we receive confirmation that the data has been transferred to CBS RA in good order, your data in the OSSC environment will be deleted.
