This round has been closed as all proposals have been handled.
| Resource | Centre | Total Requested | Upper Limit | Available | Unit | Note |
| --- | --- | --- | --- | --- | --- | --- |
| Hebbe | C3SE | 1 959 | 100 | 1 100 | x 1000 core-h/month | |
| Abisko | HPC2N | 6 398 | 200 | 1 200 | x 1000 core-h/month | Proposals will be evaluated at the end of each month. |
| Kebnekaise | HPC2N | 10 589 | 200 | 2 000 | x 1000 core-h/month | Proposals will be evaluated at the end of each month. |
| Kebnekaise Large Memory | HPC2N | 220 | 20 | 50 | x 1000 core-h/month | Proposals will be evaluated at the end of each month. |
| Aurora | Lunarc | 4 352 | 100 | 1 100 | x 1000 core-h/month | |
| Tetralith | NSC | 6 770 | 200 | 14 500 | x 1000 core-h/month | Applications are normally evaluated during the last week of each month. |
| Triolith | NSC | 5 159 | 120 | 3 600 | x 1000 core-h/month | Triolith has been replaced by Tetralith. |
| Beskow | PDC | 16 889 | 200 | 7 800 | x 1000 core-h/month | |
| Tegner | PDC | 2 | — | 140 | x 1000 core-h/month | |
| Rackham | UPPMAX | 4 137 | 100 | 2 000 | x 1000 core-h/month | Projects with a large storage requirement are prioritised on Rackham. |

Hebbe (C3SE)
The Hebbe cluster is built on Intel Xeon E5-2650v3 ("Haswell") CPUs. The system has a total of 323 compute nodes (6480 cores in total) with 27 TiB of RAM and 6 GPUs. More specifically:
- 260 x 64 GB of RAM (249 of these available for SNIC users)
- 46 x 128 GB of RAM (31 of these available for SNIC users)
- 7 x 256 GB of RAM (not available for SNIC users)
- 3 x 512 GB of RAM (1 of these available for SNIC users)
- 1 x 1024 GB of RAM
- 4 x 64 GB of RAM and an NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
- 2 x 256 GB of RAM and an NVIDIA Quadro K4200 for remote graphics
Each node has 2 CPUs with 10 cores each.
A 10 Gigabit Ethernet network is used for logins, alongside a dedicated management network and an InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs.
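As an illustration, a minimal Slurm batch sketch for targeting one of the larger-memory Hebbe nodes. The project ID and the `MEM512` constraint name are assumptions for illustration only; C3SE's documentation lists the actual partition and constraint names.

```bash
#!/usr/bin/env bash
# Hypothetical job script for Hebbe. The project ID and the memory
# constraint name are assumptions -- check C3SE's documentation.
#SBATCH -A SNIC2018-X-YY     # placeholder SNIC project ID
#SBATCH -N 1                 # one node: 2 CPUs x 10 cores = 20 cores
#SBATCH -n 20
#SBATCH -C MEM512            # assumed constraint for the 512 GB node
#SBATCH -t 02:00:00

srun ./my_application
```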

Abisko (HPC2N)
The cluster has 15744 cores with a peak performance of over 150 Tflop/s. For high parallel performance, the system is equipped with a high-bandwidth, low-latency QDR InfiniBand interconnect with full bisection bandwidth. All nodes have at least 2 GB/core and some nodes have over 8 GB/core. For more information about the system and available software, see the HPC2N web pages.
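For instance, a job that needs the higher-memory cores could request memory per core explicitly with Slurm's standard `--mem-per-cpu` option; the project ID and the exact memory figure below are placeholders.

```bash
#!/usr/bin/env bash
# Hypothetical Abisko job asking for more than the 2 GB/core
# baseline, so it is placed on one of the larger-memory nodes.
#SBATCH -A SNIC2018-X-YY       # placeholder project ID
#SBATCH -n 48                  # 48 MPI ranks
#SBATCH --mem-per-cpu=8000     # ~8 GB per core (illustrative figure)
#SBATCH -t 12:00:00

srun ./my_mpi_application
```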

Kebnekaise (HPC2N)
Kebnekaise is a heterogeneous computing resource consisting of:
- 432 compute nodes - Intel Xeon E5-2690v4 (Broadwell), 2x14 cores, 128 GB/node
- 52 compute nodes - Intel Xeon Gold 6132 (Skylake), 2x14 cores, 192 GB/node
- 36 GPU nodes - Intel Xeon E5-2690v4 (Broadwell), 2x14 cores, 128 GB/node
- 10 GPU nodes - Intel Xeon Gold 6132 (Skylake), 2x14 cores, 192 GB/node
- 36 KNL nodes - Intel Xeon Phi 7250 (Knights Landing), 68 cores, 16 GB MCDRAM/node, 192 GB/node
- 20 Large Memory nodes - Intel Xeon E7-8860v4 (Broadwell), 4x18 cores, 3072 GB/node
Notes:
- Access to the Large Memory nodes is handled through the separate 'Kebnekaise Large Memory' resource.
- Requests for GPU and KNL nodes should be specified explicitly in the user's proposal (see the job sketch after these notes).
- GPU and KNL nodes are charged differently from ordinary compute nodes.
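A minimal sketch of what an explicit GPU-node request could look like under Slurm; the GPU type string and the module name are assumptions, so consult HPC2N's Kebnekaise documentation for the exact `--gres` syntax and charging details.

```bash
#!/usr/bin/env bash
# Hypothetical GPU job for Kebnekaise. The GPU type ("k80") and
# the module name are assumptions -- check HPC2N's documentation.
#SBATCH -A SNIC2018-X-YY       # placeholder project ID
#SBATCH -n 28                  # one GPU node: 2x14 cores
#SBATCH --gres=gpu:k80:2       # assumed GPU type and count
#SBATCH -t 01:00:00

module load CUDA               # placeholder module name
srun ./my_gpu_application
```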

Kebnekaise Large Memory (HPC2N)
This resource provides access to the 'Large Memory nodes' in Kebnekaise: 20 nodes with Intel Xeon E7-8860v4 (Broadwell) processors, 4x18 cores and 3072 GB of memory per node. For the standard, GPU, and KNL nodes, see the 'Kebnekaise' resource above.

Aurora (Lunarc)
Aurora was opened for test usage at the end of January 2016.

Tetralith (NSC)
Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment.
Use the workload manager Slurm (e.g. sbatch, interactive, ...) to submit your jobs. ThinLinc is available on the login nodes. Applications are selected using "module".
All Tetralith compute nodes have 32 CPU cores. There are 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD where applications can store temporary files (approximately 200 GB per node).
All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage.
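Putting the pieces above together (sbatch, "module", and the node-local SSD), a minimal job script might look like the sketch below. The project ID and module name are placeholders, and the `$SNIC_TMP` variable for the node-local scratch area is an assumption — see NSC's documentation for the exact environment.

```bash
#!/usr/bin/env bash
#SBATCH -A snic2018-x-yy       # placeholder project ID
#SBATCH -n 32                  # one full thin node: 32 cores
#SBATCH -t 04:00:00

# Select software with "module", as described above;
# the module name/version is a placeholder.
module load Python/3.6.3

# Stage data on the node-local SSD (~200 GB); $SNIC_TMP is an
# assumed scratch variable -- check NSC's documentation.
cp input.dat "$SNIC_TMP/"
cd "$SNIC_TMP"
srun ./my_application input.dat
cp results.dat "$SLURM_SUBMIT_DIR/"
```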

Triolith (NSC)
Triolith has been replaced with a new system, Tetralith, which runs a CentOS 7 version of the NSC Cluster Software Environment and was made available to users on August 23, 2018. Awarded Triolith allocations have been transferred to Tetralith and users have been migrated.
The Tetralith installation takes place in two stages. The first stage, available from August 23, 2018, already exceeds the computing capacity of Triolith. The second stage will be installed after Triolith is decommissioned and dismounted; NSC plans to have the entire Tetralith in operation by November. Existing centre storage remains and is connected to Tetralith.
Triolith (triolith.nsc.liu.se) was a capability cluster with a total of 24320 cores and a peak performance of 428 Tflop/s. However, Triolith was shrunk by 576 nodes on April 3rd, 2017, as a result of a delay in funding a replacement system, and now has a peak performance of 260 Tflop/s and 16,368 compute cores. It is equipped with a fast interconnect for high performance for parallel applications. The operating system is CentOS 6.x x86_64. Each of the 1520 (now 944) HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz "Sandy Bridge") processors with 8 cores each (i.e. 16 cores per compute server). 56 of the compute servers have 128 GiB of memory each and the remaining 888 have 32 GiB each. The fast interconnect is Mellanox InfiniBand (FDR IB, 56 Gb/s) in a 2:1 blocking configuration.
NSC currently plans to keep Triolith in operation and available to users until September 21st, 2018, after which it will be permanently shut down and decommissioned.

Tegner (PDC)
Tegner is the pre- and post-processing cluster for Beskow.

Rackham (UPPMAX)
Rackham provides 9720 cores in the form of 486 nodes with two 10-core Intel Xeon V4 CPUs each. 4 fat nodes have 1 TB of memory, 32 fat nodes have 256 GB, and the rest have 128 GB.
The interconnect is InfiniBand.
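To land on one of the fat nodes, a job would typically request a whole node with a memory feature constraint; the partition and constraint names below (`node`, `mem256GB`) are assumptions for illustration — the UPPMAX documentation has the actual names.

```bash
#!/usr/bin/env bash
# Hypothetical fat-node job for Rackham. Partition and constraint
# names are assumptions -- check the UPPMAX documentation.
#SBATCH -A snic2018-x-yy     # placeholder project ID
#SBATCH -p node              # assumed whole-node partition
#SBATCH -C mem256GB          # assumed constraint for 256 GB nodes
#SBATCH -n 20                # one node: 2x10 cores
#SBATCH -t 06:00:00

srun ./my_application
```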