Both Snellius and Lisa contain nodes that are equipped with high-end GPUs. While the main purpose of these GPUs is to perform compute tasks, they can also be used for interactive visualization or animation rendering, using a virtual remote desktop with 2D/3D OpenGL rendering capabilities. The technology used to provide a remote desktop on these nodes is called VNC (Virtual Network Computing). This guide describes how to set up a VNC virtual desktop and how to connect to it from your own workstation or laptop. It also provides some more general information on using the GPUs in Snellius and Lisa for visualization purposes.
Note: The password needed to log into the VNC server on Snellius has changed compared to Cartesius. On both Snellius and Lisa the credentials used to log into the VNC server are your regular CUA username and password. There is no longer a separate VNC password.
Introduction
A remote desktop running on a Snellius GPU node, with ParaView as application
Overview of available GPU nodes for visualization
Snellius
On Snellius 36 GPU nodes are available, gcn1 up to gcn36. Each GPU node has 4 NVIDIA A100 GPUs. As with other compute nodes, these nodes are only available through a SLURM job or reservation.
Warning: As the GPU nodes are a specialized resource, access is not granted to them by default. The same holds for the gpu_vis partition. See below for details on how to get access.
Depending on the E-INFRA/NWO grant you received you currently might or might not have access to the Snellius GPU nodes:
- If your granted proposal includes access to the GPU nodes then you can use them for visualization purposes without any further action.
- However, if you did not request explicit access to GPU nodes in your granted proposal then you will not have access to them, not even for visualization. In this case, you can request access to the gpu_vis partition (see below) through our Servicedesk. This will provide access to a subset of the Snellius GPU nodes, specifically for visualization tasks.
Note that using a Snellius GPU node, either for computation or visualization, will be charged to your account; see here for more information. Using a GPU node is more expensive in terms of SBUs than a non-GPU node.
The gpu_vis partition
Warning: As mentioned above, the GPU nodes and the gpu_vis partition are not accessible by default. See below for details on how to get access.
The regular SLURM partitions for GPU-based computing on Snellius are less suitable for interactive visualization, as the waiting time for jobs in those partitions can be long and unpredictable. For visualization you usually want to work interactively for a relatively short time (and preferably during office hours). For this, there is a separate SLURM partition on Snellius aimed at doing interactive remote visualization, called gpu_vis.
The gpu_vis partition on Snellius is aimed at performing relatively short (hours) interactive visualization tasks of data stored on Snellius. The gpu_vis partition tries to improve the availability of GPU nodes for interactive visualization by using a fairly low maximum walltime. The limitations of the gpu_vis partition are:
- Maximum walltime of 1 day
- Maximum number of nodes per user (over all jobs in gpu_vis) is 4
Besides gpu_vis, the other GPU-related partitions on Snellius and Lisa (e.g. gpu on Snellius, or gpu_titanrtx on Lisa) can also be used for doing remote visualization, but they might have a long waiting time for a job to start, or might have a maximum job run time that is too limiting.
Lisa
On Lisa there are several different types of GPU nodes available, e.g. NVIDIA TITAN RTX, NVIDIA 1080 Ti, etc. See here for the most up-to-date description.
There currently are no special provisions in terms of queue setup for interactive visualization on Lisa.
Quick start
In this section the high-level steps are described to start a remote visualization session on a Snellius GPU node. The workflow on Lisa is the same, except that you use a Lisa login node to run these commands.
The sections below give more detail on each step. Please read the detailed steps when working with the remote visualization environment on Snellius/Lisa for the first time.
Step 1 - start a VNC server
First, start a VNC server on a Snellius GPU node using our vnc_desktop script, and note the GPU node to connect to later (gcn31 in this example):
```
# On local laptop/workstation
workstation$ ssh <user>@snellius.surf.nl
<enter password to login>

# On Snellius:
snellius paulm@int3 20:51 ~$ module load 2022
snellius paulm@int3 20:51 ~$ module load remotevis/git

# Reserve a node for 2 hours and start a VNC desktop on it
snellius paulm@int3 20:51 ~$ vnc_desktop 2h
Reserving a GPU node (partition "gpu_vis")
SLURM job ID is 33073
Waiting for job to start running on a node
We got assigned node gcn31
Waiting until VNC server is ready for connection...
VNC server gcn31:1 ready for use!
...
```
Step 2 - connect with the VNC client
Now that a node has been reserved and a VNC server started on it, the next step is to use the VNC viewer on your local workstation/laptop and connect to the VNC server on the node. If you have a VNC viewer that supports the -via option (see below), then setting up an SSH tunnel and connecting to the VNC server on the reserved node can be done in one step:
```
workstation$ vncviewer -via <user>@snellius.surf.nl gcn31:1
<enter SSH password to connect to Snellius>
<enter Snellius username & password to log into VNC server>
```
Otherwise, first set up the SSH tunnel manually:
```
# Linux/macOS
workstation$ ssh -L 5901:gcn31:5901 <user>@snellius.surf.nl

# Windows
C:\> plink.exe -L 5901:gcn31:5901 <user>@snellius.surf.nl
```
Then start the VNC viewer locally and connect through the local endpoint of the tunnel:
```
# Linux/macOS
workstation$ vncviewer localhost:1

# Windows
Start -> VNC viewer -> localhost:1
```
Note: On Lisa SSH tunneling is disabled by default. You can request it to be enabled for your login through our Service Desk.
Step 3 - start your application
The final step is to start the visualization application of interest. For this, open a terminal in the remote desktop (i.e. inside the VNC viewer). On Lisa use the top menu: Applications -> System Tools -> MATE Terminal. On Snellius use the Applications -> Terminal Emulator menu item.
When running an application inside the VNC desktop on a GPU node it is important to load the VirtualGL module and start your application with vglrun. The reasons for this are described in detail in a section below, but the short answer is that vglrun is needed to have the application use the GPU(s) in the node.
```
# To be used in a terminal window in the VNC remote desktop
VNC$ module load 2022
VNC$ module load VirtualGL/3.0.1-GCCcore-11.3.0
VNC$ module load ParaView/5.10.1-foss-2022a-mpi
VNC$ vglrun paraview
```
At this point you should have a remote desktop similar to the one at the top of this page, i.e. with ParaView running inside the desktop.
When done with the VNC session close the VNC viewer window locally and cancel the SLURM job on Snellius:
```
snellius paulm@int3 20:55 ~$ scancel 33073
```
Lisa
The steps above are similar for using a VNC desktop on Lisa, except that the SSH tunnel should be connected to <user>@lisa.surfsara.nl (including when using the -via option).
Detailed steps
Preparation: hardware, software and network requirements
When using a VNC remote desktop the heavy graphics rendering work is done on the GPU(s) on the server side, i.e. on a Snellius or Lisa GPU node. The two things needed to make this work on the client side - i.e. your own workstation/laptop - are:
- Having a VNC client
- A network connection to Snellius/Lisa from your local machine, with enough bandwidth and (preferably) low latency.
The required network bandwidth depends on the resolution of the remote desktop and the content being displayed, as the image compression used is content-dependent. In general, the 100 Mbit/s or higher that most office networks provide should be good enough. When using the VNC desktop from home and/or on slower networks (or on connections having a higher latency, like WiFi) the interaction may be noticeably slower.
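As a rough sanity check on these numbers, the required bandwidth can be estimated from the desktop resolution, frame rate, and compression ratio. The figures below are illustrative assumptions only, since VNC compression is content-dependent:

```shell
# Back-of-envelope estimate (assumption: ~1 bit per pixel after VNC's
# content-dependent compression) of the bandwidth needed for a 1920x1080
# remote desktop updating at 30 frames per second.
pixels=$((1920 * 1080))
fps=30
bits_per_pixel=1
mbit=$(( pixels * fps * bits_per_pixel / 1000000 ))
echo "${mbit} Mbit/s"   # 62 Mbit/s
```

Under these assumptions a full-HD desktop stays below the 100 Mbit/s that most office networks provide.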
See the next step for information on the VNC client.
Preparation: locally install a VNC client (one time action)
To connect to a virtual desktop you need a VNC client on your local system. We suggest using TurboVNC, as it is freely available, has good performance and is available for Windows, macOS and Linux. Alternatives could be TigerVNC or TightVNC. In principle, any other VNC client should also work, but the built-in macOS viewer appears to work sub-optimally.
Step 1. Accessing the visualization/interactive node
Log into the interactive node of choice using SSH (here Snellius):
```
workstation 21:20:~$ ssh paulm@snellius.surf.nl
(paulm@snellius.surf.nl) Password:
Last login: Tue Oct 26 09:14:27 2021 from 145.100.19.163
snellius paulm@int3 21:20 ~$
```
Step 2. Starting a VNC virtual desktop
For starting a VNC virtual desktop we provide a script that manages some of the steps involved, like job reservation and starting a VNC server. This script is called vnc_desktop and is available from the module remotevis, which is part of the 2022 (and 2021) set of software modules.
Starting a VNC session
If you load the required modules and call vnc_desktop without arguments you can see the supported options:
```
snellius paulm@int3 21:23 ~$ module load 2022
snellius paulm@int3 21:24 ~$ module load remotevis/git
snellius paulm@int3 21:24 ~$ vnc_desktop
Usage: vnc_desktop [options] <walltime>

Options:
  -h, --help            show this help message and exit
  -p PARTITION, --partition=PARTITION
                        Job partition (default: "gpu_vis")
  -r RESOLUTION, --resolution=RESOLUTION
                        Remote desktop resolution (default: "1240x900")
  -d DPI, --dpi=DPI     Remote desktop DPI value (default: "90")
  -v, --verbose         Be verbose
  -n, --dry-run         Don't actually submit a SLURM job, only output the
                        job script and then exit
  -l, --local           Don't use globally installed scripts (for debugging)
```
The script has a single mandatory argument: the amount of time you want to reserve the GPU node for. The walltime value can either be in long form (hh:mm:ss) or shorthand (a number followed by h/m for hours/minutes, respectively). When run, the script will report the GPU node that was assigned, plus some additional VNC connection instructions. These instructions are needed in the following steps.
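For illustration, the shorthand forms map onto SLURM's long form as sketched below. This helper is a hypothetical illustration of the conversion, not the actual code used by vnc_desktop:

```shell
# Hypothetical helper: convert vnc_desktop's shorthand walltime ("2h", "30m")
# to SLURM's hh:mm:ss long form; long-form input is passed through unchanged.
to_walltime() {
  case "$1" in
    *h) printf '%02d:00:00\n' "${1%h}" ;;   # hours shorthand
    *m) printf '00:%02d:00\n' "${1%m}" ;;   # minutes shorthand
    *)  printf '%s\n' "$1" ;;               # assume long form already
  esac
}

to_walltime 2h        # 02:00:00
to_walltime 30m       # 00:30:00
to_walltime 01:30:00  # 01:30:00
```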
Here, we reserve a node for 4 hours using a resolution of 1920x1080 for the remote desktop:
```
snellius paulm@int1 21:24 ~$ vnc_desktop -r 1920x1080 4h
Reserving a GPU node (partition "gpu_vis")
SLURM job ID is 33095
Waiting for job to start running on a node
We got assigned node gcn31
Waiting until VNC server is ready for connection...
VNC server gcn31:1 ready for use!
...
snellius paulm@int1 21:25 ~$
```
The default resolution of the remote desktop can be changed with the -r option, as well as the DPI of the desktop with -d. You can set the environment variables VNC_DESKTOP_RESOLUTION and VNC_DESKTOP_DPI, respectively, in your shell startup script to override the defaults. This way you only have to provide the walltime to get the VNC session that you need. Also note that once the VNC viewer has been connected (step 3 below), resizing the VNC viewer window will also resize the VNC server desktop running on the compute node.
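For example, to make a larger desktop your personal default without typing options each time, the variables could be set in your shell startup script. The exact values below are just an example:

```shell
# Example lines for e.g. ~/.bashrc on Snellius/Lisa: personal defaults for
# vnc_desktop, so only the walltime has to be given on the command line.
export VNC_DESKTOP_RESOLUTION=1920x1080
export VNC_DESKTOP_DPI=110
```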
Note: If, in this step, you get a message such as ...
Step 3. Connecting the client to the VNC server
For security reasons, when accessing from outside SURF, the only way to connect to the VNC server running on a GPU node (or any other compute node on Snellius or Lisa) is through an encrypted SSH tunnel.
Note: On Lisa SSH tunneling is disabled by default. You can request it to be enabled for your login through our Service Desk.
There are two ways to handle the tunneling: manually as a two-step process, or automatically if your VNC client supports this option.
Option 1: Manual SSH tunneling and connecting
Tunnel setup (Linux/macOS):
To set up the SSH tunnel under macOS or Linux, simply open a terminal and run the command that was reported when you started the server with vnc_desktop (step 2 above). It should look similar to the next line:
```
workstation$ ssh -L 5901:gcn31:5901 paulm@snellius.surf.nl
```
This tunnel forwards port 5901 (which corresponds to VNC display :1) on your local machine to the GPU node gcn31, via the login node snellius.surf.nl.
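The port numbers follow the standard VNC convention: display :N listens on TCP port 5900 + N. For example:

```shell
# VNC display :N corresponds to TCP port 5900 + N,
# so display :1 (as reported by vnc_desktop) maps to port 5901.
display=1
port=$((5900 + display))
echo "$port"   # 5901
```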
Tunnel setup (Windows):
If your version of Microsoft Windows does not have a built-in SSH client, the best option is to first download the Plink tool (which is part of the PuTTY software). At http://www.chiark.greenend.org.uk/sgtatham/putty/download.html you can find the download link to plink.exe.
Next, start a command prompt (Windows-button + r, type cmd, press enter). In the command prompt, navigate to the location where you saved plink.exe, and use the command line you got when you started the server, but with plink.exe instead of ssh as the command. It should look something like this:
```
plink.exe -L 5901:gcn31.snellius.surf.nl:5901 paulm@snellius.surf.nl
```
Logging in (all OS-es):
After setting up the SSH tunnel as above, the next step is to connect the VNC client to the server through the tunnel. For a VNC client started from the command line, use localhost:1 as the address, as shown here:
```
# Run the VNC viewer on your local machine
workstation$ vncviewer localhost:1
<in the VNC login box enter Snellius/Lisa username + password>
```
The same address needs to be used when making the connection from the VNC client's GUI. When the connection to the VNC server has been made, a GUI dialog will appear in which you need to enter (again) your username and password:
Info: The above VNC login dialog is an example from the TigerVNC client. As can be seen, the dialog contains the message "This connection is not secure". This is somewhat unfortunate wording, as the connection is not actually insecure: it is tunneled (and encrypted) using SSH. What the warning signifies is that there is no verification that the VNC client is connected to the server it expects to connect to (for example, using certificates). See also https://github.com/TigerVNC/tigervnc/issues/913.
Option 2: Automatic SSH tunneling on Linux and macOS: the -via option
Some VNC clients, most notably TurboVNC and TigerVNC, offer a handy -via command-line option that performs the SSH tunneling step shown above automatically. The -via option is usually only available in the Linux and macOS versions of the VNC client. In case of the example shown above, the command to start a client using the -via option would reduce to:
```
workstation$ vncviewer -via paulm@snellius.surf.nl gcn31:1
```
The VNC client in this case will set up the appropriate SSH tunnel itself, followed by making a connection to the VNC server through the tunnel. You will be prompted to enter a password twice:
- For setting up the tunnel your regular Snellius/Lisa login password is needed, followed by
- The password for your VNC session. On both Snellius and Lisa this is your regular CUA password, entered a second time.
For example:
```
workstation$ vncviewer -via paulm@snellius.surf.nl gcn31:1

TigerVNC Viewer 64-bit v1.11.0
Built on: 2020-11-24 19:41
Copyright (C) 1999-2020 TigerVNC Team and many others (see README.rst)
See https://www.tigervnc.org for information on TigerVNC.

(paulm@snellius.surf.nl) Password: <enter CUA password to access Snellius/Lisa>

Tue Oct 26 21:40:03 2021
 DecodeManager: Detected 4 CPU core(s)
 DecodeManager: Creating 4 decoder thread(s)
 CConn: Connected to host localhost port 59013
 CConnection: Server supports RFB protocol version 3.8
 CConnection: Using RFB protocol version 3.8
 CConnection: Choosing security type VeNCrypt(19)
 CVeNCrypt: Choosing security type TLSVnc (258)

Tue Oct 26 21:40:05 2021
 CConn: Using pixel format depth 24 (32bpp) little-endian rgb888
 CConnection: Enabling continuous updates
```
The VNC server will then ask for your username and password using the GUI dialog shown above.
Remote desktop ready
At this point the remote desktop should be visible in the VNC client window (note that currently the desktop environment used differs between Snellius and Lisa):
Step 4. Running a (visualization) application
Finally, within the VNC remote desktop open a terminal window to start your application. The specific command needed to start the application you want to run depends on a few factors.
Note: You can run your application without any extra steps within the VNC desktop in these (and only these) cases: ... When one (or both) of the above applies you can ignore the rest of this section, as it is not relevant in that case.
If you want to run an OpenGL-based application - such as ParaView, VMD or Blender - and want to take advantage of the rendering power of a GPU node then the extra steps described below are needed.
In short: you need to load the appropriate VirtualGL module and start the application with vglrun <command>, in order for the GPU to be used by the application running in the VNC desktop. See the end of this section for a more technical explanation of why this is needed, in case you're interested.
VirtualGL is available as a module on Snellius and Lisa, called VirtualGL, in a number of different versions. Depending on the software environment from which the (visualization) application to be used is loaded, you need to load the correct version of VirtualGL:
```
Environment   Application toolchain   VirtualGL module version
-----------   ---------------------   ------------------------------
2022          (all)                   VirtualGL/3.0.1-GCCcore-11.3.0
2021          (all)                   VirtualGL/2.6.5-GCCcore-10.3.0
2020          (all)                   VirtualGL/2.6.4-GCCcore-9.3.0
2019          (all)                   VirtualGL/2.6.4-GCCcore-8.3.0
```
After loading the required VirtualGL version you can run the application within the VNC session by prepending vglrun to your command. For example:
```
# Note that you need to enter these commands in a terminal running *inside* the VNC desktop
VNC$ module load 2022
VNC$ module load VirtualGL/3.0.1-GCCcore-11.3.0
VNC$ module load ParaView/5.10.1-foss-2022a-mpi
VNC$ vglrun paraview
```
This should start and show the ParaView GUI in the VNC desktop. Note the use of the NVIDIA A100 GPU in Snellius, as listed in the About ParaView window, near the mouse cursor:
Some details on VirtualGL
As promised above, here is some technical background on the need for VirtualGL. Normally, an OpenGL-based visualization application running on your local (Linux) machine will issue OpenGL rendering commands to the GPU hardware and let it (together with the X server) handle the rendering and displaying of the output. Rendering on Snellius/Lisa is different, as the VNC server in which your application runs does not have access to the GPU by itself; it is dependent on the X server to provide this access. The tricky bit here is that the VNC server itself also acts as an X server, so there are actually two active: /usr/bin/X (which has access to the GPU hardware) and TurboVNC's Xvnc (which does not have GPU access). Furthermore, the rendered output from the real X server needs to be integrated in the VNC remote desktop.
Luckily, all this can be handled in a fairly user-friendly and transparent way by using the package VirtualGL. VirtualGL provides a way to have OpenGL applications run in a VNC session, sending a small subset of OpenGL commands to the VNC X server, while the rest of the commands go directly to the GPU through the real X server. The rendered output of the application is read back at high performance and integrated in the VNC desktop view. OpenGL applications do not have to be modified in any way in this scheme, but the user does need to use an extra command when starting applications, as described above.
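A practical consequence is that the reported OpenGL renderer string tells you whether rendering went through the GPU or fell back to a software renderer (on the node you would obtain this string with something like vglrun glxinfo). The classification below is a sketch; llvmpipe and softpipe are the usual Mesa software renderer names:

```shell
# Sketch: classify a glxinfo-style "OpenGL renderer string" line as GPU or
# software rendering. llvmpipe/softpipe are Mesa's software rasterizers.
check_renderer() {
  case "$1" in
    *llvmpipe*|*softpipe*|*"Software Rasterizer"*) echo "software" ;;
    *) echo "gpu" ;;
  esac
}

check_renderer "OpenGL renderer string: NVIDIA A100-SXM4-40GB/PCIe/SSE2"   # gpu
check_renderer "OpenGL renderer string: llvmpipe (LLVM 11.0.0, 256 bits)"  # software
```

If the reported renderer is a software one, the application is most likely not being started with vglrun.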
Tip: One way to check if your application is using the GPUs correctly for rendering is to see if it provides any information on the OpenGL vendor and renderer. As can be seen in the ParaView example above, the OpenGL Vendor is "NVIDIA Corporation", while the OpenGL Renderer is "NVIDIA A100-....". If we run ParaView (incorrectly) without using vglrun, a software renderer would be reported instead.
Warning: On nodes without a GPU the ...
Step 5. Closing your desktop and SSH tunnel
When you are done working in your VNC session you can close the VNC client window running locally. Then use regular scancel on Snellius/Lisa to end the job that was reserved on the node:
```
# End the VNC job
snellius paulm@int3 21:56 ~$ scancel 33073
```
If you manually started an SSH tunnel on your local machine you can now close it.
Tips and notes
Remote desktop size
Resizing the VNC client window on your local system will also resize the remote desktop running on the compute node. This way you can somewhat control the amount of remote desktop pixel data that needs to be transferred to keep the local version of the desktop view up-to-date.
Full-screen desktop
Most VNC clients have an option to make the client window full-screen, hiding the other application windows you have running locally. In this way you can transparently work with the remote desktop only. With the TigerVNC viewer press F8 to get an option menu and pick Fullscreen.
Low rendering/visualization performance
In case you feel the visualization performance of your application in the VNC desktop is low, then please see the notes on the use of VirtualGL and vglrun above. You might have missed the step of starting the application with vglrun. In case the problem persists, please contact our Servicedesk.
Desktop environment
Currently, the desktop environments differ between Snellius (XFCE desktop environment) and Lisa (MATE desktop environment). These choices are currently fixed, but we are aiming to use the same desktop environment on both systems in the near future.
What about X forwarding?
X forwarding, as done with ssh -X ..., is a popular option to run an application remotely while displaying the GUI of the application on a different system. For a lot of applications this works fine and you can use it to run an application on Snellius or Lisa, while displaying the GUI on your local system, as if it were running locally.
Unfortunately, X forwarding does not work with OpenGL-based applications (which most Linux visualization applications are) for two reasons:
- When using X forwarding the OpenGL rendering commands are not executed on the server but on the client. The commands, and all data involved (which can be substantial), are literally sent to your local system and will get rendered on your local GPU, and not on the GPU in the remote server.
- Modern versions of OpenGL, which many applications use, are not supported by the GLX protocol that X forwarding uses.
For these reasons X forwarding is not recommended for OpenGL-based applications, and we suggest you use the remote desktop approach described on this page instead.
Support
If you have questions about using the GPU nodes for visualization, please contact us via the Servicedesk.