# singularity-devbio-napari
Run Jupyter kernels inside custom Singularity containers on an HPC cluster that runs JupyterLab.
## Quick start
- Start a JupyterLab session on Taurus. Make sure to select at least one GPU.
- In JupyterLab, open a terminal and run the following commands. **Important:** wait until the script reports that it is done; otherwise you may end up with a broken partial Singularity image in your `~/.singularity/cache` directory.

  ```shell
  git clone https://gitlab.mn.tu-dresden.de/bia-pol/singularity-devbio-napari.git
  cd singularity-devbio-napari
  ./install.sh <version>
  ```

  Replace `<version>` with the version you want to install, for example `v0.1.5`.
- You should now see an additional button named *devbio napari* on the JupyterLab home screen. Note: you may need to reload the page first.
- Click on that button to start a Jupyter notebook inside the Singularity container. Note that the first command execution will take a while because of the additional time it takes to start the Singularity container.
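If the install script was interrupted, the warning above suggests a broken partial image may be left behind. A minimal recovery sketch before re-running `./install.sh` (the cache path is the one named in the warning):

```shell
# Remove the Singularity cache so no corrupted partial image remains,
# then re-run the installer from the repository directory.
rm -rf ~/.singularity/cache
```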
## Reading and saving files on the cluster
The following directories on the cluster are accessible from within the container in a Jupyter session:

- your home directory (e.g. `/home/h4/<username>`)
- `/projects` for access to project spaces like `/projects/p_bioimage`
- `/scratch/`: this is where workspaces are created by default by `ws_allocate` (see the ZIH documentation). Workspace directories look like `/scratch/ws/0/<username>-my_data`
- `/beegfs/`: another location for workspaces
- `/tmp` for temporary files. This one is special: `/tmp` is a local SSD on each node, which means files stored there can be accessed very fast, but cannot be shared between nodes. Also, files stored there are deleted when your Jupyter session ends! We recommend using the `tempfile` library to create a unique temporary directory (e.g. `/tmp/mwe2t0`). For example:

  ```python
  import os
  from tempfile import mkdtemp

  torch_home = mkdtemp()
  os.environ['TORCH_HOME'] = torch_home
  ```
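A variant of the example above that also removes the directory when you are done with it, using `tempfile.TemporaryDirectory` from the Python standard library (the `TORCH_HOME` cache is just the example used above; any scratch data works the same way):

```python
import os
import tempfile

# Create a unique directory under the system temp location
# (/tmp on the cluster nodes, i.e. fast node-local SSD storage).
with tempfile.TemporaryDirectory() as torch_home:
    os.environ['TORCH_HOME'] = torch_home
    # Inside the block the directory exists and can hold cached files.
    assert os.path.isdir(torch_home)
# On leaving the block the directory and its contents are deleted,
# mirroring what happens to /tmp when the Jupyter session ends.
```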
## ToDo
I would try the following approach:
- get Fabian's singularity container to work for my user
- create my own singularity container with vanilla jupyter and get that to work
- get custom python modules to work within the vanilla jupyter container
- get tensorflow to work with GPU support
- get py-clesperanto to work with GPU support
- write scripts to automate adapting to different users
- set up continuous integration so that new singularity containers are created automatically when a new version tag is set
- set up a storage for singularity images
- set up remote VNC via browser, similar to Volker Hilsenstein's Docker solution