This version is obsolete but still contains some material on the rationale of containers in scientific computing.
To learn how to set up and use containers in the UIBK HPC environment, please go to the updated Singularity 3.8+ description.
Version Info
This description is about Singularity 2.4 or later. For information on previous versions, please see the description of 2.3 and earlier. Users are strongly encouraged to install or keep their installations at the latest 2.x release.
Please note that containers created by Singularity 3.x cannot be run under version 2.x. Support for 3.x on our clusters is under consideration as of January 2019.
Links
- Singularity Home Page
(Note: Singularity development has been spun off to a company. We shall adjust the links to their new web site once their documentation structure appears to be reasonably stable.)
- Singularity releases on Github
- Official Docker Repositories
Preliminaries
What are Containers?
Containers, in the present context, are a technique to run software in a user-defined environment on a host system. They are a lightweight alternative to virtual machines. A container consists of an image of a Linux root file system, including programs, libraries, and necessary static data, but not the kernel.
When a container is executed on a host, programs running in the container (guest) will see the container's root file system instead of that of the host. This allows you to e.g. run an Ubuntu container on a CentOS host provided that the host's kernel is sufficiently recent to support the container's system calls.
Unlike a virtual machine, the guest will not boot its own kernel, but programs will directly communicate with the host's kernel. So there is no performance penalty for running HPC programs inside a container.
Technically, the replacement of the host's file system by that of the guest is achieved by manipulating Linux name spaces. Typical container solutions also isolate other namespaces such as process IDs or networks. These features are neither necessary nor beneficial in HPC and will not be discussed here.
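To make this distinction concrete, here is a minimal check (myubuntu.simg stands for any container image, such as the one built in the workflow below): the distribution visible inside the container differs from the host's, while the kernel is the same.
cat /etc/os-release                                  # host distribution, e.g. CentOS
uname -r                                             # host kernel
singularity exec myubuntu.simg cat /etc/os-release   # container distribution, e.g. Ubuntu
singularity exec myubuntu.simg uname -r              # identical to the host kernel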
What differences exist between popular container systems?
LXC and Docker are widespread container systems for Linux. Both have an execution model that resembles that of a virtual machine: starting a container will initiate independent server processes (including, in the case of LXC, a simplified boot process), and users then can attach to or terminate running containers using frontend commands.
Docker has become a de-facto standard for containerization thanks to its ease of use on personal workstations and the availability of a multitude of pre-built containers for many application areas. However, end users need root privileges to use it, it is not easy to operate on large cluster systems, and, due to its client-server execution model it is practically impossible to integrate with batch systems. For these and other reasons, Docker is not a suitable solution for running containers in an HPC environment.
Singularity
Singularity is a container runtime environment developed by the Lawrence Berkeley Labs specifically for use on HPC clusters. In its standard configuration, it only virtualizes the root file system but allows full access to the host's other resources, including your HOME and SCRATCH directories, and network infrastructure. Singularity starts container processes as children of the calling shell. So, integration into batch systems and MPI is straightforward.
In contrast to Docker, where the command lines of programs to be run at container startup must be specified when the container is created, Singularity allows existing containers to be started with arbitrary commands, making multiple invocations of one container, e.g. for parameter studies, as simple as multiple runs of a normal UNIX command.
Singularity is integrated with the Docker infrastructure. You may download and modify any Docker container using Singularity on your personal Linux computer (or virtual machine) and ship the resulting container to the HPC system for execution. Alternatively, you may build containers from scratch using any of the popular Linux distributions. You need root privileges locally (i.e. on your workstation) to build and change containers.
Aside from a Singularity installation provided by the system administrator, running Singularity containers on an HPC system requires no special user privileges. Your Singularity containers will run as normal processes under your user ID.
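As a simple illustration (again assuming a container image named myubuntu.simg), processes started via Singularity keep your unprivileged user and group IDs:
id                                  # your UID/GID on the host
singularity exec myubuntu.simg id   # the same UID/GID inside the container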
Why Containers in Scientific Computing?
There are several limitations with the traditional ways of software installation:
- In the interest of stability, HPC installations are typically conservative in their choice of OS distributions and implementation of new features.
- Many users need very recent software features, and there is an increasing number of software products with very diverse, specific, and often contradictory prerequisite requirements (Dependency Hell), which is an issue not only with large multiuser systems but even with personal workstations.
- Migrating to another HPC system requires reinstallation and testing of all necessary software. Pre-installed software may be at different levels from the previously used system, increasing the porting effort.
- Differing software versions raise questions of reproducibility of scientific results, contributing to a phenomenon known as the Replication Crisis.
Software containers address all these problems. They can be used to bundle software with all needed prerequisites into an isolated environment, which can be executed on arbitrary (reasonably recent) Linux machines independently of locally installed software. Porting your software to a new machine then simply means copying your container to that new machine (and persuading their admin staff to install Singularity).
Singularity Workflow
The following is a simplified example to help you begin using Singularity containers. For more details, visit the Singularity documentation pages and in particular the section on best practices, Singularity Flow.
To create and customize a container, you need root privileges. You need a personal Linux PC to do this. If you have a Windows PC or OS-X machine, you need a Linux virtual machine on your PC (the Windows 10 Ubuntu subsystem does not work for this purpose). We strongly recommend using a virtual machine also for workstations running Linux to protect against errors in container setup scripts. We recommend Virtualbox unless you already have another virtualization installed.
Start by setting up and testing your container on your workstation. Once you are satisfied with the container, copy it to an HPC-cluster and run your programs.
In more detail:
If you have a previous version of Singularity installed, change to its source directory and issue
sudo make uninstall
before installing the new version or deleting the old installation directory.
One-time preparation:
Install prerequisite software
- DEB systems such as Debian and Ubuntu:
sudo apt-get install python gcc make libarchive-dev squashfs-tools
- RPM systems such as CentOS, RedHat:
sudo yum install python gcc make libarchive-devel squashfs-tools
Download and install the latest release of Singularity on your Linux PC (or virtual machine).
VERSION=2.6.1
wget https://github.com/singularityware/singularity/releases/download/$VERSION/singularity-$VERSION.tar.gz
tar xvf singularity-$VERSION.tar.gz
cd singularity-$VERSION
./configure --prefix=/usr/local
make
sudo make install
Find more details here. Do not use the --sysconfdir option in the configure stage unless you have special reasons to do so. Keep the source directory so you can later cleanly uninstall an old version.
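A quick sanity check after installation (a suggestion, not part of the official instructions):
singularity --version
singularity help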
On your PC, download and create a writable copy of a container in a sandbox directory hierarchy. Example: get latest version of Ubuntu.
sudo singularity build --sandbox myubuntu docker://ubuntu:latest
This will create a directory hierarchy in your working directory, containing a complete copy of the Docker image. For available Docker images, see the Official Docker Repositories.
Modify your container: first start a shell within the container
sudo singularity shell --writable myubuntu
and then customize your installation, recording all steps for later automation: install software in the container using the guest's package manager, download and install software from sources, and copy necessary static data into the guest.
Typically your first actions in a new Ubuntu container should be:
apt-get update
apt-get -y upgrade
apt-get -y install vim less gcc make
apt-get -y autoremove
Use apt-get instead of apt because you will later want to automate these steps.
For installation of user data and software, use a directory which is unlikely to collide with other directories, e.g. /data.
mkdir /data
Note that there currently seems to be no established structure or naming convention for user-defined contents in containers.
For optimal integration with the UIBK installation (and many other HPC sites), create a directory /scratch which will serve as a mount point of the HPC $SCRATCH directory.
mkdir /scratch
Note that the host's $HOME directory is mounted as $HOME in the guest, so any modifications to $HOME will happen outside the guest, which is usually undesirable.
Note also that omitting the --writable option will cause all changes to be silently lost after you terminate the shell.
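If you want to shield your host home directory during such interactive tests, you can point the container's $HOME to a throwaway directory with the -H option (a minimal sketch; the directory name is arbitrary and option details may vary slightly between Singularity versions):
mkdir -p /tmp/fakehome
sudo singularity shell --writable -H /tmp/fakehome myubuntu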
Repeat and test until you are satisfied.
Convert the sandbox into an immutable container image
sudo singularity build myubuntu.simg myubuntu
Copy this image to an HPC cluster
scp myubuntu.simg user@host.domain:/scratch/user
Log in to the cluster and use the container
ssh user@host.domain
cd /scratch/user
module load singularity/2.x
singularity exec myubuntu.simg command [arg ...]
Note that, in contrast to other software products, we do not support older versions of Singularity on our systems.
After testing, use your notes to prepare a build file and completely automate the build process. Note that software distributions may change, so keeping the build file gives you a good starting point for new, similar containers. Conversely, the immutable container image provides the vehicle for portability and reproducibility.
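As an illustration, a minimal build file for the interactive Ubuntu example above might look as follows (a sketch that merely replays the steps recorded earlier; the build file in the MPI case study below is more complete):
Bootstrap: docker
From: ubuntu:latest

%post
    # steps recorded during interactive customization
    apt-get update
    apt-get -y upgrade
    apt-get -y install vim less gcc make
    apt-get -y autoremove
    mkdir -p /data /scratch

%environment
    export LANG=C
Assuming this is saved as myubuntu.build (the file name is arbitrary), the container can then be rebuilt in one step with sudo singularity build myubuntu.simg myubuntu.build.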
Singularity Usage Notes
The following information is specific to the installation on our HPC clusters.
- Before using Singularity, issue
module load singularity/2.x
We aim to keep Singularity current and will disable older versions as soon as the newest version is installed.
- Our setup includes automatic mounting of $HOME (a Singularity default) and $SCRATCH (local setup). For the latter to work, your container must contain the directory /scratch, which will be used as the mount point.
- Since your HOME directory is mapped into the container's $HOME, your container will pick up settings you made in $HOME (such as a .bashrc that runs commands intended for the host, which may not even exist in the container's context) and might inadvertently modify files in $HOME. If you do not want this, run your container with the -H option set to a different directory.
- Likewise, make sure that software which you want to install into your container will not go to $HOME, or the container will not function correctly on other systems.
- You can run MPI code using Singularity, provided that the MPI versions inside the container and on the host match exactly (a quick way to verify this is sketched after this list).
- Ubuntu Bionic requires a more recent kernel than is installed on Leo3 (which is still running CentOS 6).
- Native OS distributions (e.g. ubuntu:latest) from Docker Hub are intentionally kept very lightweight, missing many features of fully configured desktop OS installations. You will typically need to install more prerequisites than described in software installation instructions.
- Warning: Make sure that you do not overwrite an existing Squashfs image on the HPC system while there are still active jobs accessing that image. Otherwise, your jobs will malfunction, and our sysadmins will be flooded with error messages.
- Warning: Carefully inspect build files from external sources for unintended effects. In particular, the %setup section of the build file is run as root on your system with no protection.
- As of January 2019, we are studying potential benefits of using Singularity Instances for MPI jobs and job arrays. Instances may be useful to avoid buffer cache contention when multiple Singularity executions of large, data-intensive containers are to be run simultaneously on a single host. However, the required setup is non-trivial, involves careful management of limited OS resources (loopback devices), and thus is currently not recommended for general use.
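To verify the MPI version match mentioned above (a minimal sketch; mycontainer.simg stands for your own image and the module names are only examples):
module load openmpi/2.1.1 singularity/2.x
mpirun --version                                    # OpenMPI version on the host
singularity exec mycontainer.simg mpirun --version  # must report exactly the same version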
Case Study: Using MPI with Infiniband on the HPC clusters LEO3, LEO3e, LEO4 (LeoX)
Version info: The following procedure has been adapted for Singularity 2.4 and later.
Singularity containers have access to /dev files, so MPI programs running in containers can use the Infiniband transport layer of OpenMPI. To run MPI programs in Singularity containers, we need the following prerequisites:
- OpenMPI versions used in the host and the container must match exactly
- Infiniband utilities and drivers must be installed in the container
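Once the container described below has been built (mpitest.simg in this example), the second prerequisite can be checked on a compute node by running one of the Infiniband utilities installed in the container; it should list the host's HCA (a suggested check, not a required step):
singularity exec mpitest.simg ibv_devinfo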
What follows are, as an example, the steps to get the Ohio State University MPI benchmark to run on LeoX in an Ubuntu container. Modify this workflow according to your needs. This also serves as an example of automating the setup of a slightly nontrivial container using the bootstrap method.
On your Linux workstation
- Download and install Singularity. See above.
- Prepare the following build file mpitest.build
Bootstrap: docker
From: ubuntu:bionic

%setup
    # run in host as root after bootstrap
    mkdir $SINGULARITY_ROOTFS/scratch
    mkdir $SINGULARITY_ROOTFS/data
    # copy data into container here
    # cp -a mpitest-data/* $SINGULARITY_ROOTFS/data

%post
    # this section is executed within the container
    # add all commands needed to setup software
    set -ex
    apt-get update
    apt-get -y upgrade
    ## install some general purpose packages and compiler environment
    apt-get -y install apt-utils apt-file vim less gcc g++ make wget openssh-client
    ## install openmpi and support utilities and drivers for infiniband
    ## as of September 2018, openmpi 2.1.1 is installed for Ubuntu 18.04 (bionic)
    ## since you will be using the host-side mpirun, you need the same version on the host
    apt-get -y install openmpi-bin libopenmpi-dev infiniband-diags ibverbs-utils libibverbs-dev
    apt-get -y install libcxgb3-1 libipathverbs1 libmlx4-1 libmlx5-1 libmthca1 libnes1
    # apt-file update
    ## install Ohio State University MPI benchmarks
    cd /data
    wget http://mvapich.cse.ohio-state.edu/download/mvapich/osu-micro-benchmarks-5.3.2.tar.gz
    tar xf osu-micro-benchmarks-5.3.2.tar.gz
    cd osu-micro-benchmarks-5.3.2
    ./configure --prefix=/data CC=$(which mpicc) CXX=$(which mpicxx)
    make
    make install
    # note: using /data as prefix for demonstration purposes; normally
    # the installation prefix is the standard /usr/local

%test
    echo "Define any test commands that should be executed after container has been"
    echo "built. This scriptlet will be executed from within the running container"
    echo "as the root user. Pay attention to the exit/return value of this scriptlet"
    echo "as any non-zero exit code will be assumed as failure"
    exit 0

%runscript
    echo "Define actions for the container to be executed with the run command or"
    echo "when container is executed."

%startscript
    echo "Define actions for container to perform when started as an instance."

%labels
    # HELLO MOTO
    # KEY VALUE

%files
    # this section is executed too late to be useful
    # use the %setup section instead
    # /path/on/host/file.txt /path/on/container/file.txt
    # relative_file.txt /path/on/container/relative_file.txt

%environment
    # define environment variables to be set at container shell / exec / run
    ## export NAME=value
    # international support is not installed by default
    export LANG=C
    # If you used configure --prefix=/data and the software was
    # installed to (prefix)/bin and (prefix)/lib (which is not the case
    # with the OSU benchmark), you would need to extend the search paths.
    ## export PATH=/data/bin:$PATH
    ## export LD_LIBRARY_PATH=/data/lib
This build file will pull a basic Ubuntu 18.04 image from the Docker hub and will install some utilities, the GNU compiler, and MPI support. It will download (from the OSU web server) and install the OSU Benchmark in the container. Note that the OS running on LeoX is CentOS, but the container OS used in this example is Ubuntu.
Note: singularity help build will output a template to stdout which can be used to create your own build file. Note that the percent signs of the section delimiters must be in column one.
- Still on your Linux workstation, build the container into a sandbox subdirectory named mpitest using the above build file. You need to be root to do this. Depending on your network connection, the build process will take several minutes. Watch the output. If errors occur, remove or rename the directory, correct the build file, and start again.
buildlog=build-$(date '+%Y%m%d-%H%M%S').log
sudo singularity build --sandbox mpitest mpitest.build > $buildlog 2>&1 &
less $buildlog
- After successful build, convert the sandbox directory into an immutable squashfs image:
sudo singularity build mpitest.simg mpitest
- Copy the container image to your LeoX scratch directory:
scp mpitest.simg cXXXyyyy@leoX:/scratch/cXXXyyyy/mpitest.simg
Note: This process allows full flexibility for building and converting containers between various formats. E.g. between steps 3 and 4 above you may modify the container using
sudo singularity shell -w mpitest
Conversely, after all changes have been integrated into the build file and you are satisfied with the results, you may build directly into a squashfs image:
sudo singularity build mpitest.simg mpitest.build
On LeoX
- Log on to LeoX and cd to your $SCRATCH directory.
- Issue
module load singularity/2.x
- Check for your container image:
ls -l mpitest.simg
- First functional test:
Test local shared-memory execution of the MPI benchmark. As of September 2018, Ubuntu 18.04 LTS (bionic) comes with OpenMPI 2.1.1, so this is what we need to load in this test.
module load openmpi/2.1.1 singularity/2.x
mpirun -np 2 singularity exec mpitest.simg /data/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw
You should get bandwidths up to approx. 4 GB/s.
- Second functional test:
Test multihost execution under the batch system. Prepare an SGE batch job:
#!/bin/bash
#$ -N singtest
#$ -pe openmpi-1perhost 2
#$ -cwd
module load openmpi/2.1.1 singularity/2.x
mpirun -np 2 singularity exec mpitest.simg /data/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_bw
Note that the singularity commands are placed on the execution nodes using the host's mpirun command. In the output file, you should see bandwidths up to approx. 6 GB/s.
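The same job script can be reused for other tests from the OSU suite installed alongside osu_bw, e.g. the point-to-point latency benchmark (the path follows the installation prefix used in the build file above):
mpirun -np 2 singularity exec mpitest.simg /data/libexec/osu-micro-benchmarks/mpi/pt2pt/osu_latency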
Sample output for OSU MPI benchmark
$ cat singtest.o592416
# OSU MPI Bandwidth Test v5.3.2
# Size      Bandwidth (MB/s)
1                       3.70
2                       7.28
4                      14.65
8                      28.99
[...]
262144               6197.92
524288               5913.92
1048576              6021.13
2097152              5964.13
4194304              6098.53
This should provide you with the information necessary to implement your own workflow.
Subject to driver compatibility (unfortunately, CentOS 7.4 broke backwards compatibility with some older OpenMPI implementations), additional OpenMPI versions can be installed upon request.