SNIC SUPR
SNIC Medium Compute 2020

This Round is Open for Proposals

More information about this round is available at https://snic.se/allocations/compute/medium-allocations/.

This round is open for proposals until 2021-01-01 00:00.

Resources

Resource  Centre  Upper Limit  Available  Unit  Note
Hebbe C3SE 100 1 100 x 1000 core-h/month Please note that Hebbe will be decommissioned (retired from SNIC use) at the end of 2020; active projects will most likely be moved to another SNIC resource at that time.

The Hebbe cluster is built on Intel Xeon E5-2650v3 (code-named "Haswell") CPUs. The system has a total of 323 compute nodes (6480 cores in total) with 27 TiB of RAM and 6 GPUs. More specifically:
  • 260 x 64 GB of RAM (249 of these available for SNIC users)
  • 46 x 128 GB of RAM (31 of these available for SNIC users)
  • 7 x 256 GB of RAM (not available for SNIC users)
  • 3 x 512 GB of RAM (1 of these available for SNIC users)
  • 1 x 1024 GB of RAM
  • 4 x 64 GB of RAM and NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
  • 2 x 256 GB of RAM and NVIDIA k4200 for remote graphics
Each node has 2 CPUs with 10 cores each. A 10 Gigabit Ethernet network is used for logins, alongside a dedicated management network and an InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs.
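As a rough illustration of the units in the table above (an assumed reading of the upper limit, not official SNIC guidance): Hebbe's upper limit of 100 x 1000 core-h/month corresponds to 5000 full-node hours per month on its 20-core nodes.

```shell
# Hebbe upper limit: 100 x 1000 core-hours per month (from the table above).
core_hours=$((100 * 1000))
# Each Hebbe node has 2 CPUs x 10 cores = 20 cores.
cores_per_node=20
# Equivalent number of full-node hours per month.
node_hours=$((core_hours / cores_per_node))
echo "$node_hours"   # prints 5000
```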
Abisko HPC2N 2 000 2 000 x 1000 core-h/month
Abisko is technically decommissioned, but it is still up and running and can be used for the SNIC 2020/5-176 project for a couple of weeks.

The cluster has 15744 cores with a peak performance of over 150 Tflop/s. For high parallel performance, the system is equipped with a high-bandwidth, low-latency QDR InfiniBand interconnect with full bisection bandwidth. All nodes have at least 2 GB/core and some nodes have over 8 GB/core. For more information about the system and available software, see the HPC2N web pages.
Kebnekaise HPC2N 200 2 000 x 1000 core-h/month Proposals will be evaluated at the end of each month.
Kebnekaise is a heterogeneous computing resource consisting of:

Notes:

  1. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  2. Requests for GPU and KNL nodes should be explicitly specified in the user's proposal.
  3. GPU nodes and KNL nodes are charged differently from ordinary compute nodes.
Kebnekaise Large Memory HPC2N 10 50 x 1000 core-h/month Proposals will be evaluated at the end of each month.
This resource is for access to the 'Large Memory nodes' in Kebnekaise. For standard, GPU and KNL nodes see resource 'Kebnekaise'.

Aurora LUNARC 100 1 100 x 1000 core-h/month
Note that the Aurora resource will not be available as a SNIC resource after 2020-12-31.

Aurora was opened for test usage at the end of January 2016.
Tetralith NSC 200 11 500 x 1000 core-h/month Applications are normally evaluated during the last week each month.
Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment. Use the workload manager Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are selected using "module". All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GB per node). All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage.
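A minimal Slurm batch script for a full Tetralith node might look like the sketch below; the project ID, module name, and program are placeholders, not values from this page.

```shell
#!/bin/bash
# Hypothetical Tetralith job script (all identifiers are placeholders).
#SBATCH -A snic2020-X-YY        # replace with your SNIC project ID
#SBATCH -n 32                   # one full Tetralith node (32 cores)
#SBATCH -t 01:00:00             # wall-clock time limit

# Load software via the module system, then run the program.
module load mymodule/1.0        # placeholder module
srun ./my_program               # placeholder executable
```

Submit with `sbatch script.sh` and check the queue with `squeue -u $USER`.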
Beskow PDC 200 7 800 x 1000 core-h/month A small allocation on Tegner for pre/post-processing will be appended for allocations that are granted on Beskow.
Tegner PDC 140 x 1000 core-h/month
Tegner is the pre/post-processing cluster for Beskow.
Rackham UPPMAX 100 2 000 x 1000 core-h/month
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand.
Snowy UPPMAX 100 1 000 x 1000 core-h/month
