
Linux Compute Cluster 3 (LCC3)

Applying for user accounts

This system is used for teaching only. Please contact zid-cluster-admin@uibk.ac.at for further information on obtaining an account.

Cluster usage

All HPC clusters at the University of Innsbruck hosted at the ZID comply with a common set of usage regulations, which are summarized in the following sub-sections. Please take the time to read the guidelines carefully in order to make optimal use of our HPC systems.

First time instructions

See this quick start tutorial to clear the first hurdles after your account has been activated; a brief command-line sketch follows the list:

  • Login to the cluster
  • Change your password
  • Copy files from and to the clusters
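
For a first session, the steps could look like the following minimal sketch (the username c123456 and the hostname lcc3.uibk.ac.at are placeholders; use the account name and login host from your activation notice):

  # Log in to the cluster via SSH
  ssh c123456@lcc3.uibk.ac.at

  # Change your initial password right after the first login
  passwd

  # Copy a file from your local machine to the cluster (run this locally)
  scp input.dat c123456@lcc3.uibk.ac.at:~/

  # Copy a file back from the cluster to your local machine (run this locally)
  scp c123456@lcc3.uibk.ac.at:~/results.dat .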

Setting up the software (modules) environment

There are a variety of application and software development packages available on our cluster systems. In order to utilize these efficiently and to avoid inter-package conflicts, we employ the Environment Modules package on all of our cluster systems.

See the modules environment tutorial to learn how to customize your personal software configuration.
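
As an illustration, a typical session might look like this (the package name and version are examples only; run module avail to see what is actually installed):

  # List all software packages provided through the modules environment
  module avail

  # Load a package into your environment
  module load gcc/8.2.0

  # Show which modules are currently loaded
  module list

  # Remove a single module, or reset the environment entirely
  module unload gcc/8.2.0
  module purge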

Submitting jobs to the cluster

On LCC3, the distribution of jobs is handled by the Slurm workload manager.

See the Slurm usage tutorial to find out how to submit your jobs properly to the batch scheduler, i.e. the queuing system.
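
As a minimal sketch, a serial job script could look like this (job name, output file, and resource limits are illustrative; consult the Slurm usage tutorial for settings appropriate to LCC3):

  #!/bin/bash
  # example_job.slurm - minimal single-core batch job
  #SBATCH --job-name=my_job
  #SBATCH --output=my_job.out
  #SBATCH --ntasks=1
  #SBATCH --time=01:00:00

  # Load the required software via the modules environment
  module load gcc/8.2.0

  # Run the actual computation
  ./my_program

You would then submit the script with sbatch example_job.slurm, monitor it with squeue -u $USER, and cancel it with scancel <jobid> if necessary.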

Storing your data

Every time you log in to the cluster, all storage areas available to you, i.e. the corresponding directories, along with their usage percentages, are listed before the current important messages. In general, the first two list items are of major importance:

  1. Your home directory (also available via the environment variable $HOME) provides you with a small but highly secure storage area, which is backed up every day. This is the place to store your important data, such as source code, valuable input files, etc.
  2. The second listed storage area is the cluster's scratch space, accessible via the softlink /scratch. This area provides you with enough space for large data sets. Use this storage for writing the output of your calculations and for large input data.
    Please note that the scratch space is designed for size and speed and is therefore not a high-availability storage. Make sure to secure important files regularly, as total data loss, though improbable, cannot be excluded.

Further listed storage areas are mostly intended for data exchange. Please contact the ZID cluster administration if you are unsure about storage usage.
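
A common pattern is to run calculations in the scratch space and copy the results worth keeping to the backed-up home directory afterwards. A sketch, assuming per-user subdirectories under /scratch (the exact layout is shown at login):

  # Create and enter a working directory in the scratch space
  mkdir -p /scratch/$USER/myproject
  cd /scratch/$USER/myproject

  # ... run your calculations here, writing their output to this directory ...

  # Afterwards, secure the important results in your backed-up home directory
  cp results.dat $HOME/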

Available software packages

On each of our clusters we provide a broad variety of software packages, such as compilers, parallel environments, numerical libraries, scientific applications, etc.

Have a look at the available packages on lcc3 (University of Innsbruck network only; use the command module avail otherwise) and please contact the ZID cluster administration if your required software is not on the list.
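
To check for a specific package from the command line, you can filter the module list, e.g.:

  # Search the available modules for a package, case-insensitively
  # (module avail traditionally writes to stderr, hence the redirection)
  module avail 2>&1 | grep -i python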

Hardware components

This cluster consists of 15 compute nodes (with a total of 180 cores), one storage server, and one login node. All nodes are connected through an InfiniBand network. The storage system offers 16 TB of capacity.

Contact

General e-mail address: zid-cluster-admin@uibk.ac.at

HPC staff            Phone number
Gregor Anich         0512/507-23305
Michael Fink         0512/507-23211
Helmut Pedit         0512/507-23219
Martin Pöll          0512/507-23223
Hermann Schwärzler   0512/507-23347
Kashif Siddique      0512/507-23314
Martin Thaler        0512/507-23257

Statement of Service

Maintenance Status and Recommendations for ZID HPC Systems
