Notice
The examples and use cases described here are intended to show the different ways SURF Research Access Management can be used and connected to applications. These examples and use cases are not always validated by SURF.
SURF Research Access Management can make authentication and authorization of web-based applications easy, as long as the application supports SAML or OIDC. The challenge often lies in configuring or even adapting the application to translate the user's collaboration and group memberships in SRAM into roles and permissions in the application. Before this translation can be made, metadata has to be exchanged, which in some cases is not instantaneous.
If the application is started on demand for a user, it needs to be accessible right away.
Jupyter Notebook
Jupyter Notebook is an application that is started on demand when a user logs in, and hence requires instant information about the roles and permissions of that user. In a portal, applications are started at the push of a button and are available to the user right away. To cater for these users, an authentication application that is already connected to SRAM can be placed in front of the application. Exchanging metadata and configuring Jupyter Notebook as a SAML Service Provider or OIDC Relying Party is then not necessary. JupyterHub, a multi-user version of Jupyter Notebook, can be configured to automatically trust all traffic from this authentication application and to recognise the user name.
Concept
When a user opens the application, the authentication application (left) will require authentication with SRAM. As soon as the user is authenticated and the freshly spawned virtual machine is accessed at its URL, a Jupyter user is automatically created with the right permissions and the user can get to work.
Architecture
Apache Reverse Proxy
The authenticating application described above can be Apache with mod_auth_openidc (often available as a standard package). This Apache module plugs into Apache's Require authorization mechanism and performs the OIDC heavy lifting. Once the user is authenticated, the user name is available in the REMOTE_USER environment variable and can be passed on to the backend server by the reverse proxy using an HTTP header.
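A minimal sketch of such an Apache configuration is shown below, assuming mod_auth_openidc is installed and SRAM acts as the OIDC provider; the metadata URL, client ID, secret, hostname and redirect URI are placeholders that must match the registration of the application in SRAM.

    # OIDC connection to SRAM (placeholder values)
    OIDCProviderMetadataURL https://proxy.sram.surf.nl/.well-known/openid-configuration
    OIDCClientID         <client-id-from-sram>
    OIDCClientSecret     <client-secret-from-sram>
    OIDCRedirectURI      https://demo.example.org/oidc/redirect_uri
    OIDCCryptoPassphrase <random-passphrase>

    # Require an authenticated SRAM user for everything behind the proxy;
    # mod_auth_openidc then sets REMOTE_USER for each request
    <Location "/">
        AuthType openid-connect
        Require  valid-user
    </Location>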
JupyterHub and Jupyter Notebook
Jupyter Notebooks are standalone Python processes that provide a session to one user only. JupyterHub expands this capability to support multiple users and sessions on one server. It uses an Authenticator class to authenticate users and then starts a Notebook by means of a Spawner class. These two classes can be chosen such that the user is logged in automatically, based on the HTTP header that Apache has added to the request, and a Notebook is then started.
The JupyterHub REMOTE_USER Authenticator is such a class, and the Spawner that is capable of starting a Notebook for (internally) unknown users is the SimpleLocalProcessSpawner (integrated into JupyterHub core).
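In jupyterhub_config.py this combination boils down to roughly the following sketch; the import path of the REMOTE_USER authenticator is that of the separate jhub_remote_user_authenticator package, which is assumed to be installed.

    # jupyterhub_config.py
    c = get_config()  # noqa: provided by JupyterHub when it loads this file

    # Trust the user name that the reverse proxy passes in the REMOTE_USER header
    c.JupyterHub.authenticator_class = \
        'jhub_remote_user_authenticator.remote_user_auth.RemoteUserLocalAuthenticator'

    # Start Notebooks without requiring a matching system account;
    # for testing only, not safe for production use
    c.JupyterHub.spawner_class = 'jupyterhub.spawner.SimpleLocalProcessSpawner'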
Installation and configuration
A demo was available in which an Apache reverse proxy is created by a docker-compose recipe that also starts 2 JupyterHub and 2 EtherPad applications on 4 backend machines; a sketch of such a recipe follows below. Via the Apache reverse proxy the applications can be reached transparently on the URL paths /hub3, /hub4, /ep5 and /ep6.
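A docker-compose file for this kind of setup could look roughly as follows; service names, images and the subnet are illustrative (the EtherPad backends are omitted for brevity), and the fixed addresses correspond to the hub3 and hub4 naming used below.

    version: "3.7"
    services:
      reverseproxy:
        build: ./apache          # Apache with mod_auth_openidc and reverseproxy.conf
        ports:
          - "443:443"
        networks:
          - backend
      jupyterhub1:
        build: ./jupyterhub      # configured by the jupyterhub1 file
        networks:
          backend:
            ipv4_address: 172.21.11.3
      jupyterhub2:
        build: ./jupyterhub      # configured by the jupyterhub2 file
        networks:
          backend:
            ipv4_address: 172.21.11.4
    networks:
      backend:
        ipam:
          config:
            - subnet: 172.21.11.0/24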
In the Apache reverseproxy.conf file, the lines beginning with OIDC* manage the OIDC connection and the two lines under # Authentication Header make sure the right HTTP headers are sent to the backend server.
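For one of the hubs, the proxy and header part of reverseproxy.conf could look like the sketch below; the header name must match what the backend authenticator expects (REMOTE_USER is the default of the jhub_remote_user_authenticator package), and JupyterHub is assumed to listen on its default port 8000 with a matching base_url of /hub3.

    # Authentication Header: pass the authenticated user on to the backend,
    # and strip any value a client may have tried to inject
    RequestHeader unset REMOTE_USER
    RequestHeader set   REMOTE_USER "expr=%{REMOTE_USER}"

    # Transparent proxying of the first JupyterHub on /hub3
    # (websocket upgrade handling is omitted from this sketch)
    ProxyPass        /hub3 http://172.21.11.3:8000/hub3
    ProxyPassReverse /hub3 http://172.21.11.3:8000/hub3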
The jupyterhub1 and jupyterhub2 files configure the first and second JupyterHub servers (172.21.11.3 and 172.21.11.4, hence named hub3 and hub4). The authenticator_class is configured to RemoteUserLocalAuthenticator and the Spawner is SimpleLocalProcessSpawner. This Spawner is unsafe in a normal JupyterHub environment and is only provided for test purposes, but it is able to start Notebooks for users who are unknown on the server on which JupyterHub is running, which is exactly the situation in this setup.
The demo can be used by putting the right domain of a test server in the .env file, configuring the Apache reverseproxy.conf file (for OIDC or just Basic Auth) and then starting the server using docker-compose up.
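Assuming the demo repository has been checked out on the test server, that workflow amounts to something like the following; the variable name in .env is illustrative.

    # set the public hostname of the test server
    echo "DOMAIN=demo.example.org" > .env

    # edit reverseproxy.conf for OIDC (or Basic Auth), then build and start everything
    docker-compose up --build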