SNIC SUPR
SNIC Large Compute Fall 2019

This Round is Open for Proposals

More information about this round is available at https://snic.se/allocations/compute/large-allocations/.

The deadline for submitting proposals is 2019-10-11 15:00.

Resources

Resource (Centre): available allocation
Hebbe (C3SE): 600 x 1000 core-h/month
The Hebbe cluster is built on Intel Xeon E5-2650v3 (code-named "Haswell") CPUs. The system has a total of 323 compute nodes (6480 cores in total) with 27 TiB of RAM and 6 GPUs. More specifically:
  • 260 x 64 GB of RAM (249 of these available for SNIC users)
  • 46 x 128 GB of RAM (31 of these available for SNIC users)
  • 7 x 256 GB of RAM (not available for SNIC users)
  • 3 x 512 GB of RAM (1 of these available for SNIC users)
  • 1 x 1024 GB of RAM
  • 4 x 64 GB of RAM and NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
  • 2 x 256 GB of RAM and NVIDIA K4200 for remote graphics
Each node has 2 CPUs with 10 cores each. A 10 Gigabit Ethernet network is used for logins, alongside a dedicated management network and an InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs. A sketch of a GPU job request on Hebbe follows below.
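As a minimal sketch (not C3SE's documented recipe), a Slurm batch script targeting one of Hebbe's Tesla K40 nodes might look as follows; the project ID is a placeholder, and the --gres name and module name are assumptions to verify against C3SE's documentation.

    #!/bin/bash
    #SBATCH -A SNIC2019-X-XX       # placeholder project ID
    #SBATCH -N 1                   # one full node: 2 CPUs x 10 cores
    #SBATCH -n 20
    #SBATCH --gres=gpu:1           # assumed gres name for the K40 nodes
    #SBATCH -t 04:00:00

    module load CUDA               # module name is an assumption
    ./my_gpu_program               # hypothetical application binary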
Kebnekaise (HPC2N): 3 200 x 1000 core-h/month
Kebnekaise is a heterogeneous computing resource consisting of standard compute nodes, large memory nodes, GPU nodes, and KNL (Knights Landing) nodes.

Notes:

  1. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  2. Requests for GPU and KNL nodes should be explicitly specified in the user's proposal (see the sketch after this list).
  3. GPU nodes and KNL nodes are charged differently than ordinary compute nodes.
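To illustrate note 2, below is a minimal sketch of a batch script that explicitly requests GPU nodes on a Slurm system such as Kebnekaise. The project ID is a placeholder, and the exact --gres string and module name are assumptions to be checked against HPC2N's documentation.

    #!/bin/bash
    #SBATCH -A SNIC2019-X-XX       # placeholder project ID
    #SBATCH -N 1                   # one GPU node
    #SBATCH --gres=gpu:k80:2       # assumed gres syntax for two K80 GPUs
    #SBATCH -t 02:00:00

    module load CUDA               # module name is an assumption
    ./my_gpu_program               # hypothetical application binary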
Kebnekaise Large Memory (HPC2N): 450 x 1000 core-h/month
This resource is for access to the 'Large Memory nodes' in Kebnekaise. For standard, GPU, and KNL nodes, see the 'Kebnekaise' resource.

Aurora (Lunarc): 500 x 1000 core-h/month
Aurora was opened for test usage at the end of January 2016.
Tetralith (NSC): 16 000 x 1000 core-h/month
Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment. Use the workload manager Slurm (e.g. sbatch, interactive, ...) to submit your jobs; a minimal example follows below. ThinLinc is available on the login nodes. Applications are selected using "module". All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GB per node). All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage.
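A minimal sketch of a full-node Tetralith batch script, with a placeholder project ID and an assumed module name:

    #!/bin/bash
    #SBATCH -A SNIC2019-X-XX       # placeholder project ID
    #SBATCH -n 32                  # one full Tetralith node (32 cores)
    #SBATCH -t 01:00:00

    module load buildenv-intel     # module name is an assumption; see "module avail"
    mpprun ./my_mpi_app            # NSC's MPI launcher; hypothetical binary

Submit with "sbatch jobscript.sh"; the "interactive" command instead starts a shell on a compute node.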
Beskow (PDC): 11 200 x 1000 core-h/month
Tegner (PDC): 210 x 1000 core-h/month
Tegner is the pre/post-processing cluster for Beskow.
Crex 1 (UPPMAX): 1 000 000 GiB of storage attached to Rackham and Snowy at UPPMAX
Storage resource only available to projects using Rackham or Snowy at UPPMAX.

Active data storage for SNIC UPPMAX projects.
Rackham (UPPMAX): 1 000 x 1000 core-h/month
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. Four fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand. A sketch of requesting a fat node follows below.
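A sketch of how a fat node might be requested on Rackham; the partition and feature names follow UPPMAX conventions but are assumptions here, and the project ID is a placeholder.

    #!/bin/bash
    #SBATCH -A SNIC2019-X-XX       # placeholder project ID
    #SBATCH -p node                # whole-node partition; assumed name
    #SBATCH -C mem256GB            # assumed feature tag for the 256 GB fat nodes
    #SBATCH -n 20                  # two 10-core Xeon V4 CPUs per node
    #SBATCH -t 08:00:00

    ./memory_hungry_app            # hypothetical application binary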
