Both Snellius and Lisa contain nodes that are equipped with high-end GPUs. While the main purpose of these GPUs is to perform compute tasks they can also be used for (interactive) visualization, animation rendering or using a virtual remote desktop with 2D/3D OpenGL rendering capabilities.
The technology used to provide a remote desktop on these nodes is called VNC (Virtual Network Computing). This guide describes how to set up a VNC virtual desktop and how to connect to it from your own workstation or laptop. It also provides some more general information on using the GPUs in Snellius and Lisa for visualization purposes.
Compute nodes that do not contain GPUs can still be used for running a remote desktop, or even for visualization. However, since the rendering will then be done fully in software instead of on GPU hardware, performance will be much lower, especially for large datasets. For running regular GUI applications, e.g. to set up simulation input or to make a quick plot of simulation output, such a non-GPU remote desktop will usually work fine.
Different VNC password on Snellius, compared to Cartesius
The password needed to log into the VNC server on Snellius has changed, compared to Cartesius. On both Snellius and Lisa the credentials used to log into the VNC server are your regular CUA username and password. There is no longer a separate VNC password.
A remote desktop with Paraview running on a Snellius GPU node
Overview of available GPU nodes for visualization
On Snellius 36 GPU nodes are available (up to gcn36). Each GPU node has 4 NVIDIA A100 GPUs. As with other compute nodes, these nodes are only available through a SLURM job or reservation.
GPU nodes and gpu_vis partition not accessible by default
The GPU nodes and the gpu_vis partition are not accessible by default. See below for details on how to get access.
Depending on the E-INFRA/NWO grant you received you currently might or might not have access to the Snellius GPU nodes:
- If your granted proposal includes access to the GPU nodes then you can use them for visualization purposes without any further action.
- However, if you did not request explicit access to GPU nodes in your granted proposal then you will not have access to them, not even for visualization. In this case, you can request access to the gpu_vis partition (see below) through our Servicedesk. This will provide access to a subset of the Snellius GPU nodes, specifically for visualization tasks.
Note that using a Snellius GPU node, either for computation or visualization, will be charged on your account, see here for more information. Using a GPU node is more expensive in terms of SBUs than a non-GPU node.
The gpu_vis partition
gpu_vis partition not accessible by default
As mentioned above, the GPU nodes and the gpu_vis partition are not accessible by default, and access needs to be specifically requested.
The regular SLURM partitions for GPU-based computing on Snellius are less suitable for interactive visualization, as the waiting time for jobs in those partitions can be long and unpredictable. For visualization you usually want to work interactively for a relatively short time (and preferably during office hours). For this, there is a separate SLURM partition on Snellius aimed at interactive remote visualization, called gpu_vis. The gpu_vis partition is aimed at performing relatively short (hours) interactive visualization tasks on data stored on Snellius, and tries to improve the availability of GPU nodes for interactive visualization by using a fairly low maximum walltime. The limitations of the gpu_vis partition are:
- Maximum walltime of 1 day
- Maximum number of nodes per user (over all jobs in gpu_vis) is 4
Besides gpu_vis, the other GPU-related partitions on Snellius and Lisa (e.g. gpu on Snellius, or gpu_titanrtx on Lisa) can also be used for remote visualization, but they might have a long waiting time before a job starts, or a maximum job run time that is too limiting.
On Lisa there are several different types of GPU nodes available, e.g. NVIDIA TITAN RTX, NVIDIA 1080 Ti, etc. See here for the most up-to-date description.
There currently are no special provisions in terms of queue setup for interactive visualization on Lisa.
In this section the high-level steps are described to start a remote visualization session on a Snellius GPU node. The workflow on Lisa is the same, except of course to use a Lisa login node to run these commands.
The sections below give more detail on each step. Please read the detailed steps when working with the remote visualization environment on Snellius/Lisa for the first time.
Step 1 - start a VNC server
First, start a VNC server on a Snellius GPU node using our vnc_desktop script, and note the GPU node to connect to later (gcn31 in this example):
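A sketch of the commands involved, assuming the 2021 module environment mentioned in the detailed steps below (the walltime argument is up to you):

```shell
# On a Snellius login node: load the module set that provides vnc_desktop
module load 2021
module load remotevis

# Reserve a GPU node for 4 hours and start the VNC server on it;
# the script reports the assigned node (e.g. gcn31) and connection instructions
vnc_desktop 4h
```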
Step 2 - connect with the VNC client
Now that a node has been reserved and a VNC server started on it, the next step is to use the VNC viewer on your local workstation/laptop to connect to the VNC server on the node. If you have a VNC viewer that supports the -via option (see below) then setting up the SSH tunnel and connecting to the VNC server on the reserved node can be done in one step:
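For example, with TurboVNC's vncviewer; the login host name and the node gcn31 are illustrative:

```shell
# One step: tunnel via the login node and connect to display :1 on the GPU node
vncviewer -via <user>@snellius.surfsara.nl gcn31:1
```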
Otherwise, first set up the SSH tunnel manually:
Then start the VNC viewer locally and connect through the local endpoint of the tunnel:
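A sketch of the two commands, using the node gcn31 from the example (login host name assumed):

```shell
# 1) Forward local port 5901 to the VNC server on the reserved node
ssh -L 5901:gcn31:5901 <user>@snellius.surfsara.nl

# 2) In another local terminal: connect through the tunnel endpoint
vncviewer localhost:1
```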
On Lisa SSH tunneling is disabled by default. You can request it to be enabled for your login through our Service Desk.
Step 3 - start your application
The final step is to start the visualization application of interest. For this, open a terminal in the remote desktop (i.e. inside the VNC viewer). On Lisa, use the top menu: Applications -> System Tools -> MATE Terminal. On Snellius, use the Terminal Emulator menu item.
When running an application inside the VNC desktop on a GPU node it is important to load the VirtualGL module and start your application with vglrun. The reasons for this are described in detail in a section below, but the short answer is that vglrun is needed to have the application use the GPU(s) in the node.
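In the terminal inside the remote desktop this amounts to something like the following (the exact VirtualGL module version to load may differ, see below):

```shell
module load VirtualGL
vglrun paraview
```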
At this point you should have a remote desktop similar to the one at the top of this page, i.e. with ParaView running inside the desktop.
When done with the VNC session close the VNC viewer window locally and cancel the SLURM job on Snellius:
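For example (the job ID is hypothetical and will be shown by squeue):

```shell
squeue -u $USER   # find the job ID of the VNC session
scancel <jobid>
```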
The steps above are similar for using a VNC desktop on Lisa, except that the SSH tunnel should be connected to <user>@lisa.surfsara.nl (including when using the -via option).
Preparation: hardware, software and network requirements
When using a VNC remote desktop the heavy graphics rendering work is done on the GPU(s) on the server side, i.e. on a Snellius or Lisa GPU node. The two things needed to make this work on the client side - i.e. your own workstation/laptop - are:
- Having a VNC client
- A network connection to Snellius/Lisa from your local machine, with enough bandwidth and (preferably) low latency.
The required network bandwidth depends on the resolution of the remote desktop and the content being displayed, as the image compression used is content-dependent. In general, the 100 Mbit/s or higher that most office networks provide should be good enough. When using the VNC desktop from home and/or on slower networks (or on connections having a higher latency, like WiFi) the interaction may be noticeably slower.
See the next step for information on the VNC client.
Preparation: locally install a VNC client (one time action)
To connect to a virtual desktop you need a VNC client on your local system. We suggest using TurboVNC, as it is freely available, has good performance and is available for Windows, macOS and Linux. Alternatives could be TigerVNC or TightVNC. In principle, any other VNC client should also work, but the built-in macOS viewer appears to work sub-optimally.
Step 1. Accessing the visualization/interactive node
Log into the interactive node of choice using SSH (here Snellius):
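For example (replace <user> with your own login name; the Snellius login host name is assumed here, following the same pattern as the Lisa host name mentioned below):

```shell
ssh <user>@snellius.surfsara.nl
```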
Step 2. Starting a VNC virtual desktop
For starting a VNC virtual desktop we provide a script that manages some of the steps involved, like job reservation and starting a VNC server. This script is called vnc_desktop and is available from the module remotevis, which is part of the 2021 (and 2020) set of software modules.
Starting a VNC session
If you load the required modules and call vnc_desktop without arguments you can see the supported options:
The script has a single mandatory option: the amount of time you want to reserve the GPU node for. The walltime value can either be in long form (hh:mm:ss) or shorthand (a number followed by h or m for hours or minutes, respectively). When run, the script will print the GPU node that was assigned, plus some additional VNC connection instructions. These instructions are needed in the following steps.
Here, we reserve a node for 4 hours using a resolution of 1920x1080 for the remote desktop:
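The command could look as follows (the exact option syntax is an assumption; run vnc_desktop without arguments to see the real options):

```shell
vnc_desktop -r 1920x1080 4h
```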
The default resolution of the remote desktop can be changed with the -r option, and the DPI of the desktop with -d. You can also set the corresponding environment variables (such as VNC_DESKTOP_DPI) in your shell startup script to override the defaults. This way you only have to provide the walltime to get the VNC session that you need. Also note that once the VNC viewer has been connected (step 3 below), resizing the VNC viewer window will also resize the remote desktop running on the compute node.
Step 3. Connecting the client to the VNC server
For security reasons the only way to connect to the VNC server running on a GPU node (or any other compute node on Snellius or Lisa) is through an encrypted SSH tunnel, when accessing from outside SURF.
On Lisa SSH tunneling is disabled by default. You can request it to be enabled for your login through our Service Desk.
There are two ways to handle the tunneling: manually as a two-step process, or automatically if your VNC client supports this option.
Option 1: Manual SSH tunneling and connecting
Tunnel setup (Linux/macOS):
To set up the SSH tunnel under macOS or Linux, simply open a terminal and run the command that was reported when you started the server with vnc_desktop (step 2 above). It should look similar to the next line:
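Using the example node gcn31, the reported command will resemble (login host name assumed):

```shell
ssh -L 5901:gcn31:5901 <user>@snellius.surfsara.nl
```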
This tunnel forwards port 5901 (which corresponds to VNC display :1) on your local machine to the GPU node gcn31, via the Snellius login node.
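The mapping between VNC display numbers and TCP ports is simply port = 5900 + display number, which you can verify in any shell:

```shell
# VNC display :N listens on TCP port 5900 + N
display=1
port=$((5900 + display))
echo "$port"   # prints 5901
```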
Tunnel setup (Windows):
Older versions of Microsoft Windows do not have built-in SSH capabilities (recent Windows 10 and 11 releases include an OpenSSH client). If ssh is not available, the best option is to first download the Plink tool (which is part of the PuTTY software). At http://www.chiark.greenend.org.uk/sgtatham/putty/download.html you can find the download link to plink.exe.
Next, start a command prompt (press the Windows key, type cmd, press Enter). In the command prompt, navigate to the location where you saved plink.exe and use the command line you got when you started the server, but with plink.exe instead of ssh as the command. It should look something like this:
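With the same example values as above, the plink variant would look like:

```shell
plink.exe -L 5901:gcn31:5901 <user>@snellius.surfsara.nl
```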
Logging in (all OS-es):
After setting up the SSH tunnel as above, the next step is to connect the VNC client to the server through the tunnel. A VNC client started from the command line will need to use the address localhost:1, as shown here:
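For example, with TurboVNC's or TigerVNC's vncviewer:

```shell
vncviewer localhost:1
```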
The same address needs to be used when making the connection from the VNC client's GUI.
When the connection to the VNC server has been made a GUI dialog will show in which you need to enter (again) your username and password:
The above VNC login dialog is an example from the TigerVNC client. As can be seen, the dialog contains the message "This connection is not secure". This is somewhat unfortunate, as the connection is in fact secure: it is being tunneled (and encrypted) using SSH. What the warning signifies is that there is no verification (for example, using certificates) that the VNC client is connected to the server it expects to connect to. See also https://github.com/TigerVNC/tigervnc/issues/913.
Option 2: Automatic SSH tunneling on Linux and macOS: the -via option
Some VNC clients, most notably TurboVNC and TigerVNC, offer a handy -via command-line option that performs the SSH tunneling step shown above automatically. The -via option is usually only available in the Linux and macOS versions of the VNC client. For the example shown above, the command to start a client using the -via option reduces to:
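Using the example values from the manual setup (login host name assumed):

```shell
vncviewer -via <user>@snellius.surfsara.nl gcn31:1
```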
The VNC client in this case will set up the appropriate SSH tunnel itself, followed by making a connection to the VNC server through the tunnel. You will be prompted to enter a password twice:
- For setting up the tunnel your regular Snellius/Lisa login password is needed, followed by
- The password for your VNC session, which on both Snellius and Lisa is your regular CUA password (entered a second time).
The VNC server will then ask for your username and password using the GUI dialog shown above.
Remote desktop ready
At this point the remote desktop should be visible in the VNC client window (note that currently the desktop environment used differs between Snellius and Lisa):
Step 4. Running a (visualization) application
Finally, within the VNC remote desktop open a terminal window to start your application. The specific command needed to start the application you want to run depends on a few factors.
You can run your application without any extra steps within the VNC desktop in these (and only these) cases:
- The VNC server is not running on a GPU node, or
- The application you want to run does not use OpenGL for rendering
When one (or both) of the above applies you can ignore the rest of this section as it is not relevant in that case.
If you want to run an OpenGL-based application - such as ParaView, VMD or Blender - and want to take advantage of the rendering power of a GPU node then the extra steps described below are needed.
In short: you need to load the appropriate VirtualGL module and start the application with vglrun <command> in order for the GPU to be used by the application running in the VNC desktop. See the end of this section for a more technical explanation of why this is needed, in case you're interested.
VirtualGL is available as a module on Snellius and Lisa, called VirtualGL, in a number of different versions. Depending on the software environment from which the (visualization) application is loaded, you need to load the matching version of VirtualGL.
After loading the required VirtualGL version you can run the application within the VNC session by prepending vglrun to your command. For example:
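A sketch, with the module version omitted (pick the one matching your software environment):

```shell
module load VirtualGL   # load the matching VirtualGL version
vglrun paraview         # or: vglrun vmd, vglrun blender, ...
```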
This should start and show the ParaView GUI in the VNC desktop. Note the use of the NVIDIA A100 GPU in Snellius, as listed in the About ParaView window, near the mouse cursor:
Some details on VirtualGL
As promised above, here is some technical background on the need for VirtualGL. Normally, an OpenGL-based visualization application running on your local (Linux) machine will issue OpenGL rendering commands to the GPU hardware and let it (together with the X server) handle the rendering and displaying of the output. Rendering on Snellius/Lisa is different, as the VNC server in which your application runs does not have access to the GPU by itself; it depends on the X server to provide this access. The tricky bit here is that the VNC server itself also acts as an X server, so there are actually two active: /usr/bin/X (which has access to the GPU hardware) and TurboVNC's Xvnc (which does not have GPU access). Furthermore, the rendered output from the real X server needs to be integrated into the VNC remote desktop.
Luckily, all this can be handled in a fairly user-friendly and transparent way by the VirtualGL package. VirtualGL provides a way to have OpenGL applications run in a VNC session, sending a small subset of OpenGL commands to the VNC X server, while the rest of the commands go directly to the GPU through the real X server. The rendered output of the application is read back at high performance and integrated into the VNC desktop view. OpenGL applications do not have to be modified in any way in this scheme, but the user does need to use an extra command when starting applications, as described above.
Slow rendering performance
If you run the same command in the VNC desktop without vglrun, e.g. just paraview instead of vglrun paraview, you might find that your visualization application is suddenly really slow. The reason is that slow software-based OpenGL rendering is then being used. So if your OpenGL-based application performs really badly in a VNC session, you probably forgot to prepend vglrun when you started it.
One way to check whether your application is using the GPUs for rendering is to see if it provides any information on the OpenGL vendor and renderer. As can be seen in the ParaView example above, the OpenGL vendor is "NVIDIA Corporation", while the OpenGL renderer is "NVIDIA A100-....". If we (incorrectly) run ParaView without vglrun, these values change to "Mesa/X.org" and "llvmpipe (LLVM 11.1.0, 256 bits)", respectively. This indicates that software-based OpenGL rendering is being used, leaving the GPU idle.
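If the application itself does not report this, the glxinfo tool (assuming it is available on the node) shows the same renderer strings as quoted above:

```shell
# With VirtualGL: should report the NVIDIA GPU as the renderer
vglrun glxinfo | grep "OpenGL renderer"

# Without VirtualGL: falls back to the software llvmpipe renderer
glxinfo | grep "OpenGL renderer"
```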
On nodes without a GPU the VirtualGL module should not be loaded and vglrun should not be used, as vglrun will try to run the application on a GPU, which is not available there. Instead, you can run the application as normal:
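That is, on a non-GPU node simply start the application directly:

```shell
paraview   # software (llvmpipe) rendering; no vglrun needed
```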
Step 5. Closing your desktop and SSH tunnel
When you are done working in your VNC session you can close the VNC client window running locally. Then use a regular scancel on Snellius/Lisa to end the job that was reserved on the node:
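For example (the job ID placeholder is to be replaced with the ID of your own VNC job):

```shell
scancel <jobid>   # <jobid> as shown by squeue -u $USER
```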
If you manually started an SSH tunnel on your local machine you can now close it.
Tips and notes
Remote desktop size
Resizing the VNC client window on your local system will also resize the remote desktop running on the compute node. This way you can somewhat control the amount of remote desktop pixel data that needs to be transferred to keep the local view of the desktop up-to-date.
Most VNC clients have an option to make the client window full-screen, hiding the other application windows you have running locally. In this way you can transparently work with the remote desktop only. With the TigerVNC viewer press F8 to get an option menu and pick Fullscreen.
Low rendering/visualization performance
In case you feel the visualization performance of your application in the VNC desktop is low, please see the notes on the use of VirtualGL and vglrun above: you might have missed starting the application with vglrun. If the problem persists, please contact our Servicedesk.
Currently, the desktop environments differ between Snellius (XFCE desktop environment) and Lisa (MATE desktop environment). These choices are currently fixed, but we are aiming to use the same desktop environment on both systems in the near future.
What about X forwarding?
X forwarding, as done with ssh -X ..., is a popular option for running an application remotely while displaying its GUI on a different system. For a lot of applications this works fine, and you can use it to run an application on Snellius or Lisa while displaying the GUI on your local system, as if it were running locally.
Unfortunately, X forwarding does not work with OpenGL-based applications (which most Linux visualization applications are) for two reasons:
- When using X forwarding the OpenGL rendering commands are not executed on the server but on the client. The commands, and all data involved (which can be substantial), are literally sent to your local system and get rendered on your local GPU, not on the GPU in the remote server.
- Modern versions of OpenGL, which many applications use, are not supported by the GLX protocol that X forwarding relies on.
For these reasons X forwarding is not recommended for OpenGL-based applications and we suggest you use the remote desktop approach described in this page instead.
If you have questions about using the GPU nodes for visualization, please contact us via the Servicedesk.