Singularity on BioHPC - Guide

Last Updated: 2019-03-20 DCT

Background

Containers - What and Why?

Containers are a method of isolating software so that it runs in an environment separate from, and possibly different from, the operating system of the computer you use. For example, BioHPC compute nodes and clients currently run Red Hat Enterprise Linux 7. You may want to try out the very latest version of TensorFlow, but most install instructions concentrate on Ubuntu Linux. In fact, the TensorFlow developers provide ready-made packages for Ubuntu, and even a container with those packages pre-installed. Using containerization you can run TensorFlow in an Ubuntu container, on top of the BioHPC node's Red Hat Linux.

Containers package together a program and all of its dependencies, so that if you have the container you can use it on any Linux system with the container system software installed. It doesn't matter whether the underlying system runs Ubuntu, Red Hat, or CentOS Linux - if the container system is available, the program runs identically on each, inside its container. This is great for distributing complex software with many dependencies, and for ensuring you can reproduce experiments exactly: as long as you keep the container, you know you can reproduce your work.

Containers vs Virtual Machines

Containers are different from virtual machines. A VM emulates an entire computer, and runs a completely separate operating system from its host. If you run a VM on a BioHPC system (Oracle VirtualBox allows you to do this on workstations and thin clients), you have full control in the VM and are its administrator. This can be convenient, but a process inside a VM runs inside the emulated computer, which makes things slower and complicates access to your BioHPC storage space.

Containers use the host system's kernel, and can access its hardware more directly. When you run a process in a container it runs on the host system, directly visible as a process on that system.

Singularity

Singularity is a container system specifically targeted at HPC users. It differs in many ways from Docker, which is the most common container system used elsewhere.

When you run jobs on the BioHPC Nucleus cluster you do not have administrator privileges, usually run jobs under a batch system (SLURM), and access large shared filesystems. Docker is not well suited to this environment and raises particular security concerns. Singularity is designed so that you can use it within SLURM jobs without violating the security constraints of the cluster. Singularity can also use containers created and distributed for Docker, giving access to an extremely wide range of software on Docker Hub.
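
For example, a Singularity container can be used inside an ordinary sbatch job. A minimal sketch is shown below - the partition, image file, and script name are illustrative placeholders, so substitute your own:

#!/bin/bash
#SBATCH --job-name=singularity_test
#SBATCH --partition=super                 # illustrative partition name - use one you have access to
#SBATCH --nodes=1
#SBATCH --time=0-02:00:00
#SBATCH --output=singularity_test_%j.out

module add singularity
# my_container.simg and my_script.py are placeholders for your own image and script
singularity exec my_container.simg python my_script.py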

Singularity is a very actively developed project that originated at Berkeley Lab, has been adopted by many HPC centers, and is now led by the startup Sylabs Inc. The website, with documentation and downloads, is at: https://www.sylabs.io/singularity/.

Using Singularity - Basics with Docker Containers

You can use Singularity to run Docker containers directly. A vast range of software is now available in containerized form on Docker Hub (http://hub.docker.com) and other registries, and most of it can be run using Singularity.

Load the singularity module, and access the command help with:

$ module add singularity
$ singularity -h
USAGE: singularity [global options...] <command> [command options...] ...

GLOBAL OPTIONS:
    -d --debug    Print debugging information
    -h --help     Display usage summary
    -q --quiet    Only print errors
...

Run from Docker Hub

For this example we will use the latest TensorFlow release, which is made available as a Docker image on Docker Hub. We will try out the CPU version here - usually you would want to use the GPU version on a GPU node in the cluster for better performance (see below).

On Docker Hub, TensorFlow is available as the container tensorflow/tensorflow:latest, and we can run it directly with Singularity:

$ singularity run docker://tensorflow/tensorflow:latest
Docker image path: index.docker.io/tensorflow/tensorflow:latest
Cache folder set to /home2/dtrudgian/.singularity/docker
[7/7] |===================================| 100.0% 
Creating container runtime...
[I 21:44:54.232 NotebookApp] Serving notebooks from local directory: /home2/dtrudgian
[I 21:44:54.233 NotebookApp] 0 active kernels
[I 21:44:54.233 NotebookApp] The Jupyter Notebook is running at:
[I 21:44:54.233 NotebookApp] http://localhost:8888/?token=4409aebca2838008ae955a9cf5123bf8fc05fc21278f4359
[I 21:44:54.233 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 21:44:54.235 NotebookApp] No web browser found: could not locate runnable browser.
[C 21:44:54.236 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=4409aebca2838008ae955a9cf5123bf8fc05fc21278f4359

The TensorFlow container starts a Jupyter notebook by default, and we can connect to it by running a browser on the same machine and going to the URL printed out. To stop the container, use <Ctrl+C>, just as you would to interrupt a normal process. Singularity runs things in the container as processes on the host system, which makes it easy to use in SLURM jobs etc.
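
If you are not sitting at the machine running the container, one option (an illustrative sketch, not BioHPC-specific instructions) is to forward port 8888 over SSH from your own computer and then open the printed URL in your local browser:

# Run on your local machine - replace the node name with the one running the container
$ ssh -L 8888:localhost:8888 username@Nucleus006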

Pull from Docker Hub

When you use singularity run with a docker:// URI, Singularity downloads the Docker layers and creates an image file, storing everything under $HOME/.singularity or /tmp. Recreating the Singularity image every time you run something is inefficient. More usually we pull the Docker container into a Singularity image file once, then run that image file:

$ singularity pull docker://tensorflow/tensorflow:latest
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
Docker image path: index.docker.io/tensorflow/tensorflow:latest
Cache folder set to /home2/dtrudgian/.singularity/docker
Importing: base Singularity environment
WARNING: Building container as an unprivileged user. If you run this container as root
WARNING: it may be missing some functionality.
Building Singularity image...
Singularity container built: ./tensorflow-latest.simg
Cleaning up...
Done. Container is at: ./tensorflow-latest.simg

# Now we can run it!

dtrudgian@Nucleus006:~
03:49 PM $ singularity run tensorflow-latest.simg 
[I 21:49:57.414 NotebookApp] Serving notebooks from local directory: /home2/dtrudgian
[I 21:49:57.414 NotebookApp] 0 active kernels
[I 21:49:57.414 NotebookApp] The Jupyter Notebook is running at:
[I 21:49:57.414 NotebookApp] http://localhost:8888/?token=cff06fd2085f88e9d3d36de7e29509df599f7b8b2e89ffc1
[I 21:49:57.414 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 21:49:57.417 NotebookApp] No web browser found: could not locate runnable browser.
[C 21:49:57.417 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=cff06fd2085f88e9d3d36de7e29509df599f7b8b2e89ffc1
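
The Docker layers downloaded during a pull are cached under $HOME/.singularity. If space in your home directory is limited, recent Singularity versions let you point the cache elsewhere with the SINGULARITY_CACHEDIR variable before pulling - the path below is purely illustrative, and you should check the documentation for the module version you load:

$ export SINGULARITY_CACHEDIR=/work/department/username/singularity_cache
$ mkdir -p $SINGULARITY_CACHEDIR
$ singularity pull docker://tensorflow/tensorflow:latest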

Use a GPU inside the container

Singularity can bind the NVIDIA drivers and base CUDA libraries into a container, so that software inside the container can use the GPUs on our GPU nodes. Use the --nv option to accomplish this, making sure you are running a container that supports GPUs. Most Docker containers which indicate they should be run with nvidia-docker will work with the Singularity --nv option.

$ singularity pull docker://tensorflow/tensorflow:latest-gpu
WARNING: pull for Docker Hub is not guaranteed to produce the
WARNING: same image on repeated pull. Use Singularity Registry
WARNING: (shub://) to pull exactly equivalent images.
Docker image path: index.docker.io/tensorflow/tensorflow:latest-gpu
Cache folder set to /home2/dtrudgian/.singularity/docker
WARNING Not injecting bind path /etc/localtime into container - not a directory on host
WARNING Not injecting bind path /etc/hosts into container - not a directory on host
Importing: base Singularity environment
WARNING: Building container as an unprivileged user. If you run this container as root
WARNING: it may be missing some functionality.
Building Singularity image...
Singularity container built: ./tensorflow-latest-gpu.simg
Cleaning up...
Done. Container is at: ./tensorflow-latest-gpu.simg

# Run it, with a GPU available

dtrudgian@Nucleus006:~
03:53 PM $ singularity run --nv tensorflow-latest-gpu.simg 
[I 21:53:26.613 NotebookApp] Serving notebooks from local directory: /home2/dtrudgian
[I 21:53:26.613 NotebookApp] 0 active kernels
[I 21:53:26.613 NotebookApp] The Jupyter Notebook is running at:
[I 21:53:26.613 NotebookApp] http://localhost:8888/?token=0cf2e15efb5e388ef310f25eb6afaf7588ebf487f9092c05
[I 21:53:26.613 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 21:53:26.615 NotebookApp] No web browser found: could not locate runnable browser.
[C 21:53:26.615 NotebookApp] 

    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://localhost:8888/?token=0cf2e15efb5e388ef310f25eb6afaf7588ebf487f9092c05
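
Before starting a long job it is worth confirming that the GPU really is visible inside the container. The --nv option also binds the host's nvidia-smi tool into the container, so a quick check is:

$ singularity exec --nv tensorflow-latest-gpu.simg nvidia-smi

This should list the GPU(s) available on the node you are using.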

Executing Alternative Commands

singularity run will start the default entrypoint of a container. Each Docker container will have a CMD or ENTRYPOINT in its Dockerfile, defining what to run by default. However, you can run alternative commands in the container using singularity exec.

For example, let's check the version of Ubuntu used in the TensorFlow GPU container:

$ singularity exec --nv tensorflow-latest-gpu.simg cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"

We can also demonstrate that the container can see the BioHPC filesystems:

$ singularity exec --nv tensorflow-latest-gpu.simg ls /project
BICF  GCRB              PHG   TDC    apps_database  backup_services  biophysics   flash_ftp    ngs_web     radiology    urology
CAND  InternalMedicine  SBL   TIBIR  apps_new       biohpcadmin      cellbiology  greencenter  pathology   shared
CRI   MCHGD             SCCC  apps   backup_home2   bioinformatics   cryoem       immunology   psychiatry  thunder_ftp

Or start a command-line IPython session, and check that TensorFlow really does see the GPU:

$ singularity exec --nv tensorflow-latest-gpu.simg ipython
Python 2.7.12 (default, Nov 20 2017, 18:23:56) 
Type "copyright", "credits" or "license" for more information.

IPython 5.5.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

In [1]: import tensorflow as tf

In [2]: from tensorflow.python.client import device_lib

In [3]: device_lib.list_local_devices()
2018-01-25 21:57:37.623798: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2018-01-25 21:57:38.279601: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:04:00.0
totalMemory: 7.92GiB freeMemory: 7.79GiB
2018-01-25 21:57:38.279707: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0, compute capability: 6.1)
Out[3]: 
[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 12410214554711682085, name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 7943340032
 locality {
   bus_id: 1
 }
 incarnation: 10225500224477761104
 physical_device_desc: "device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0, compute capability: 6.1"]

Yes - here on our test node we see the GTX 1080 card. On Nucleus nodes you will see Tesla K20/K40/K80/P100 card(s), depending on the node used.

We can run a TensorFlow-based script in the container too. Here we run a script that lives in my home directory; because Singularity mounts $HOME into the container, the script is at the same path inside the container as well.

$ singularity exec --nv tensorflow-latest-gpu.simg python ~/Git/convnet-benchmarks/tensorflow/benchmark_alexnet.py
2018-01-25 22:00:45.567169: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
2018-01-25 22:00:46.088585: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce GTX 1080 major: 6 minor: 1 memoryClockRate(GHz): 1.7335
pciBusID: 0000:04:00.0
totalMemory: 7.92GiB freeMemory: 7.79GiB
2018-01-25 22:00:46.088681: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0, compute capability: 6.1)
2018-01-25 22:00:58.113794: step 10, duration = 0.023
2018-01-25 22:00:58.343796: step 20, duration = 0.023
2018-01-25 22:00:58.573784: step 30, duration = 0.023
2018-01-25 22:00:58.803694: step 40, duration = 0.023
2018-01-25 22:00:59.033756: step 50, duration = 0.023
2018-01-25 22:00:59.263883: step 60, duration = 0.023
2018-01-25 22:00:59.494038: step 70, duration = 0.023
2018-01-25 22:00:59.724000: step 80, duration = 0.023
2018-01-25 22:00:59.953918: step 90, duration = 0.023
2018-01-25 22:01:00.161138: Forward across 100 steps, 0.023 +/- 0.002 sec / batch
2018-01-25 22:01:02.112278: step 10, duration = 0.064
2018-01-25 22:01:02.755545: step 20, duration = 0.064
2018-01-25 22:01:03.400403: step 30, duration = 0.067
2018-01-25 22:01:04.042851: step 40, duration = 0.064
2018-01-25 22:01:04.684407: step 50, duration = 0.064
2018-01-25 22:01:05.327876: step 60, duration = 0.064
2018-01-25 22:01:05.970039: step 70, duration = 0.064
2018-01-25 22:01:06.612295: step 80, duration = 0.064
2018-01-25 22:01:07.254190: step 90, duration = 0.064
2018-01-25 22:01:07.831818: Forward-backward across 100 steps, 0.064 +/- 0.006 sec / batch
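
$HOME and the main BioHPC filesystems are mounted inside the container for you (as the ls /project example above showed). If you need an additional host directory inside the container, you can bind it explicitly with the -B option - the paths and script name below are illustrative:

$ singularity exec --nv -B /project/department/mylab/data:/data tensorflow-latest-gpu.simg python /data/train_model.py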

Shell inside a container

To get a shell inside a container, use the singularity shell command (here with an Ubuntu image pulled from docker://ubuntu:latest):

$ singularity shell ubuntu-latest.simg 
Singularity: Invoking an interactive shell within container...

Singularity ubuntu-latest.simg:~> date
Thu Jan 25 22:03:30 UTC 2018
Singularity ubuntu-latest.simg:~> cat /etc/debian_version 
stretch/sid
Singularity ubuntu-latest.simg:~>

Note that Singularity container images are read-only...

Singularity ubuntu-latest.simg:~> touch /test
touch: cannot touch '/test': Read-only file system
Singularity ubuntu-latest.simg:~>

Also, when I run a container as the user dtrudgian I am dtrudgian inside the container too, and cannot, for example, become root to run administrative commands.
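
You can see this mapping directly by comparing your identity on the host and inside the container (you will, of course, see your own username):

$ whoami
dtrudgian
$ singularity exec ubuntu-latest.simg whoami
dtrudgian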

If I am root when I run Singularity I will be root in the container, with root access to any mounted filesystems etc. - so I need to be very careful!

Sandbox containers

If you really need write access to a container you can use a writable sandbox. Sandbox mode pulls the container into a folder, instead of a single image file.

WARNING This is rarely appropriate. One of the advantages of containers is that they can be built from a definition file (see below), which is a recipe that makes container creation automated and reproducible. Use sandboxes only to test things out, and build production containers from definition files where possible.

$ singularity build --sandbox sandbox docker://ubuntu:latest

Now, inside the sandbox/ directory on the host, I can see my container's filesystem:

$ ls sandbox
archive  boot  environment  home   lib    media  opt   project  run   singularity  sys  usr  work
bin      dev   etc          home2  lib64  mnt    proc  root     sbin  srv          tmp  var

I could modify the sandbox content directly on the host, or run the container from the sandbox directory with the --writable option:

$ singularity exec --writable sandbox/ touch /bob
$ singularity exec --writable sandbox/ ls /bob
/bob

Note here that I could create a file in the container. Also, I wasn't root, but could write into / in the container. On normal systems / is usually owned by root with 755 permissions. Because I created my container as a user, this isn't the case:

$ ls -lah sandbox/
total 36K
drwxr-xr-x  26 dtrudgian biohpc_admin 4.0K Jan 25 16:09 .
drwx------ 172 dtrudgian biohpc_admin  12K Jan 25 16:07 ..
drwxr-xr-x   2 dtrudgian biohpc_admin   38 Jan 25 16:07 archive
drwxr-xr-x   2 dtrudgian biohpc_admin 4.0K Jan 23 16:49 bin
-rw-r--r--   1 dtrudgian biohpc_admin    0 Jan 25 16:09 bob
...

Everything is owned by my user account - there are no root owned files in my container.
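
If you have been experimenting in a sandbox and want to freeze the result, you can build it back into an image file - though the better long-term approach is to capture the same changes in a definition file (next section) so the build is reproducible. Depending on your Singularity version you may need to run this as root for the resulting image to be complete:

$ singularity build production.simg sandbox/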

Building from Docker Containers Using Definition Files

When you want to create a container for production use on the cluster, you should build a container image from a definition file. Unfortunately, building containers from a definition file requires you to be a system administrator (root) on the machine you use for building. Because BioHPC nodes and workstations are shared systems you do not have root access on them. You will need to build Singularity containers on a machine that you control.

Installing Singularity to Build Containers

Singularity itself runs only on Linux, but there are easy options for building containers on Windows and Mac, such as running Singularity inside a Linux virtual machine - see the Vagrant-based approach later in this guide, and the installation instructions at https://www.sylabs.io/.

Definition Files

The definition file format for Singularity is explained in the online documentation: https://www.sylabs.io/guides/3.0/user-guide/definition_files.html

Here is an example, to work through:

bootstrap: docker
From: ubuntu:latest

%runscript
    exec echo "The runscript is the container's default runtime command!"

%files
   /home2/dtrudgian/test1        # copied to root of container
   /home2/dtrudgian/test2     /opt/test2

%environment
    VARIABLE=MEATBALLVALUE
    export VARIABLE

%labels
   AUTHOR david.trudgian@utsouthwestern.edu

%post
    apt-get update && apt-get -y install python3 git wget
    echo "The post section is where you can install, and configure your container."

bootstrap: docker tells Singularity that the image we are going to create will use an existing Docker Hub container as its base.

From: ubuntu:latest says that the base container is ubuntu:latest from Docker Hub.

%runscript sets the command(s) that will run by default inside the container when we call it with singularity run.

%files lists any files from the host that should be copied to the container image when we build it.

%environment lists any environment variable setup that should be set when running the container. It is not used when the container is built.

%labels sets metadata on the container, such as the AUTHOR, which can be searched if a container is pushed to Singularity Hub or a Singularity Registry.

%post contains shell commands to be run when the container is built, after we have imported everything from the base docker container. This is where we can install packages in the container, etc.

Note that in %post here we are:

  • Updating the ubuntu package cache, then installing python3, git, and wget
  • Printing a message to the screen

Building an image from a definition file

You must build a definition file into a container image as root. You can use sudo, but if you are on the UTSW campus you will need to pass the https_proxy variable and the full path to the singularity binary through sudo. To build the example definition from a file called example.def:

sudo https_proxy=$https_proxy $(which singularity) build example.simg example.def

You will see a lot of output on screen as Singularity imports the base Docker container, and then runs the commands in %post.

You need to chmod the container image to allow non-root users to run it:

sudo chmod 755 example.simg

Run the resulting container and it will execute the %runscript:

$ singularity run example.simg
The runscript is the container's default runtime command!

The .simg file can be copied/uploaded to BioHPC, and run directly on the Nucleus cluster, a workstation, or thin-client using the BioHPC Singularity module.
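
For example, assuming you built example.simg on your own machine (the hostname and paths below are illustrative - use your usual transfer method and storage location):

# On your build machine
$ scp example.simg username@nucleus.biohpc.swmed.edu:/home2/username/

# On a BioHPC node
$ module add singularity
$ singularity run ~/example.simg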

Using BioHPC Workstations to Build a Singularity Image

To build a Singularity image you must have root privileges, which are not granted to users in an HPC environment. However, you can work around this by using a virtual machine, inside which you do have root access, and building your Singularity images there. Rather than manually installing a virtual machine and then installing Singularity inside it, we can use Vagrant with the Sylabs.io Vagrant box that already has Singularity installed. To do this, use the method below on your BioHPC workstation.

 

$ mkdir singularity_build 
$ cd singularity_build
      # Copy/create your Singularity definition file here (this example uses julia.def)
      # Set VAGRANT_HOME so Vagrant box/image data does not fill your home directory
$ mkdir /work/biohpcadmin/s178722/.vagrant 
$ export VAGRANT_HOME=/work/biohpcadmin/s178722/.vagrant 
$ vagrant init singularity-3.5-ubuntu-bionic64 https://vagrantcloud.com/sylabs/boxes/singularity-3.5-ubuntu-bionic64/versions/20191206.0.0/providers/virtualbox.box 
$ vagrant up
$ vagrant ssh 
$ sudo sh -c "echo 'export http_proxy=http://proxy.swmed.edu:3128' > /etc/profile.d/proxy.sh"
$ sudo sh -c "echo 'export https_proxy=http://proxy.swmed.edu:3128' >> /etc/profile.d/proxy.sh"
       # Become root inside the VM - the root login shell picks up the proxy settings we just wrote
$ sudo su -
       # Build the container
$ cd /vagrant
$ singularity build julia.sif julia.def
       # Leave the VM and stop it (exit twice)
$ exit
$ exit
$ vagrant halt
$ module add singularity/3.5.3
$ cd singularity_build
$ singularity run julia.sif hello-world.jl 
Hello world!
For full tutorial, visit: https://github.com/sylabs/examples/lang/julia

 

Singularity Hub

Singularity Hub is the equivalent of Docker Hub, but for native Singularity containers. The community of Singularity users can publish their definition files there, and the hub will build containers from them.

The web site is at https://singularity-hub.org

Using Singularity Hub Containers

You can pull a Singularity Hub container in the same way as a Docker container:

$ singularity pull shub://GodloveD/lolcow:latest
Progress |===================================| 100.0% 
Done. Container is at: /home2/dtrudgian/GodloveD-lolcow-master-latest.simg

 

$ singularity run GodloveD-lolcow-master-latest.simg

 ______________________________________
/ Don't tell any big lies today. Small \
\ ones can be just as effective.       /
 --------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

 

Building using Singularity Hub containers

You can create a definition file based on a Singularity Hub container, just as for a Docker Hub container, but using the 'shub' bootstrap.

bootstrap: shub
From: GodloveD/lolcow

You can then build the container, on a machine or VM configured for building containers (see above):

$ sudo https_proxy=$https_proxy $(which singularity) build example_shub_lolcow.simg example.def 

Using container recipe deffile: example.def
Sanitizing environment
Adding base Singularity environment to container
Progress |===================================| 100.0% 
Exporting contents of shub://GodloveD/lolcow to /tmp/.singularity-build.M2UoLa
Running post scriptlet
Found an existing definition file
Adding a bootstrap_history directory
Finalizing Singularity container
Calculating final size for metadata...
Skipping checks
Building Singularity image...
Singularity container built: example_shub_lolcow.simg
Cleaning up...

$ singularity run example_shub_lolcow.simg 
 _____________________________________
/ For courage mounteth with occasion. \
|                                     |
\ -- William Shakespeare, "King John" /
 -------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

The .simg file can now be copied/uploaded to BioHPC and run on the cluster using the BioHPC Singularity module.

Publishing containers to Singularity Hub

If you want to publish Singularity containers to external users you may consider pushing them to the Singularity Hub. The Singularity Hub works similarly to Docker Hub. Rather than uploading your containers, you provide your definition file and Singularity Hub will build them into a container, which can then be pulled and used by others.

Publishing containers to Singularity Hub is outside the scope of this guide, but is well documented online:

https://github.com/singularityhub/singularityhub.github.io/wiki

Limitations and Known Issues

  • Some Neurodebian based containers may not work as expected, e.g. the Nipype Tutorial. These expect you to be the neuro user inside the container, but with Singularity your user inside the container matches your user outside. For the nipype-tutorial container we can run it manually by shelling into the container, activating the conda environment the container provides, and then starting the Jupyter notebook:

# Shell into the container
singularity shell -c nipype_tutorial.simg

# Activate the neuro conda environment manually
source activate neuro

# Start jupyter
python -m jupyter notebook