SNIC SUPR
SNIC Medium Compute 2022

This Round is Open for Proposals

Monthly evaluation of proposals during the year. To apply, you must be a scientist in Swedish academia, at least at the level of assistant professor.

More information about this round is available at https://snic.se/allocations/compute/medium-allocations/.

This round is open for proposals until 2023-01-01 00:00.

Resources

Alvis (C3SE) — upper limit: 20 000 GPU-h/month; available: 390 000 GPU-h/month

Alvis is a GPU focused cluster for AI/ML research.

Phase 1 (in production since summer 2020) consists of:
  • 1 login node with 4 x Tesla T4 GPU with 16GB RAM, 2 x 16 core Intel Xeon Gold 6226R CPU @ 2.90GHz, 768GB RAM
  • 12 nodes with 2 x Tesla V100 SXM2 GPU with 32GB RAM, 2 x 8 core Intel Xeon Gold 6244 CPU @ 3.60GHz, 768GB RAM
  • 5 nodes with 4 x Tesla V100 SXM2 GPU with 32GB RAM, 2 x 16 core Intel Xeon Gold 6226R CPU @ 2.90GHz, 768GB RAM
  • 20 nodes with 8 x Tesla T4 GPU with 16GB RAM, 2 x 16 core Intel Xeon Gold 6226R CPU @ 2.90GHz, 576GB RAM (1 node with 1536GB)
Phase 2 (in production since fall 2021) consists of:
  • 1 data transfer node with 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 256GB RAM
  • 85 nodes with 4 x Tesla A40 GPU with 48GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 256GB RAM
  • 56 nodes with 4 x Tesla A100 HGX GPU with 40GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 256GB RAM
  • 20 nodes with 4 x Tesla A100 HGX GPU with 40GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 512GB RAM
  • 8 nodes with 4 x Tesla A100 HGX GPU with 80GB RAM, 2 x 32 core Intel Xeon Gold 6338 CPU @ 2GHz, 1024GB RAM
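On a Slurm-managed GPU cluster such as Alvis, jobs typically request GPUs by type and count. The script below is a minimal sketch only, not an official template: the project ID, the GPU-type flag syntax, and the application name are assumptions, not taken from Alvis documentation.

```shell
#!/bin/bash
# Sketch of a GPU batch job for a Slurm cluster such as Alvis.
# Project ID, GPU request syntax, and application are hypothetical placeholders.
#SBATCH -A SNIC2022-X-YY          # hypothetical SNIC project ID
#SBATCH --gpus-per-node=T4:1      # request one T4 GPU; A40/A100 nodes also exist
#SBATCH -t 00:30:00               # 30-minute wall-time limit

python train_model.py             # placeholder application
```

Submit with `sbatch jobscript.sh`; accounting on Alvis is in GPU-hours, so the GPU type and count requested determine how the allocation is charged.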
Kebnekaise (HPC2N) — upper limit: 200, available: 2 000 (× 1000 core-h/month). Proposals will be evaluated at the end of each month.
Kebnekaise is a heterogeneous computing resource consisting of standard compute, large-memory, GPU, and KNL nodes.

Notes:

  1. Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
  2. Requests for GPU and KNL nodes should be explicitly specified in the user's proposal.
  3. GPU nodes and the KNL nodes are charged differently than ordinary computing nodes.
Kebnekaise Large Memory (HPC2N) — upper limit: 10, available: 50 (× 1000 core-h/month). Proposals will be evaluated at the end of each month.
To get access to the Kebnekaise Large Memory resource, the proposal must clearly show a need for it, including the expected memory size required and the reason why the normal nodes are not suitable.

This resource is for access to the 'Large Memory nodes' in Kebnekaise. For standard, GPU and KNL nodes see resource 'Kebnekaise'.

Tetralith (NSC) — upper limit: 200, available: 11 500 (× 1000 core-h/month). Applications are normally evaluated during the last week of each month.
Submit your proposal at least one week before the end of a month to be considered for an allocation starting on the first of the following month. Received proposals are evaluated against each other, and time that becomes available as projects end at the end of a month is allocated to the proposed projects accordingly.

Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment. Use the workload manager Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are selected using "module".

All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk where applications can store temporary files (approximately 200 GiB per thin node, 900 GiB per fat node). All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect to the existing storage.

There are 170 nodes in Tetralith equipped with one NVIDIA Tesla T4 GPU each, as well as an updated, high-performance 2 TB NVMe SSD scratch disk. These are regular Tetralith thin nodes that have been retrofitted with the GPUs and disks, and they are accessible to all of Tetralith's users.
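The Slurm and module workflow described above can be sketched as a batch script. This is a hedged illustration only: the project ID, module name, and application binary are hypothetical placeholders, not actual Tetralith defaults.

```shell
#!/bin/bash
# Sketch of a Tetralith batch job using Slurm and the module system.
# Project ID, module name, and binary are hypothetical placeholders.
#SBATCH -A snic2022-x-yy       # hypothetical SNIC project allocation
#SBATCH -n 32                  # one full node: all Tetralith nodes have 32 cores
#SBATCH -t 01:00:00            # one-hour wall-time limit

module load buildenv-gcc       # hypothetical module; software is selected via "module"
srun ./my_application          # placeholder MPI application
```

Requesting a multiple of 32 tasks keeps jobs aligned with whole nodes, which matches how core-hours are accounted on the resource.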
Dardel (PDC) — upper limit: 200, available: 9 000 (× 1000 core-h/month)
Rackham (UPPMAX) — upper limit: 200, available: 3 500 (× 1000 core-h/month). SNIC Life Science system at UPPMAX; mounts the Crex filesystem.

Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. Of these, 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB. The interconnect is InfiniBand.
