SNIC SUPR
DCS 2018

Closed for New Proposals

This round is closed for new proposals. Submitted proposals will be handled.

More information about this round is available at https://www.nsc.liu.se/storage/snic-centrestorage/cs2014-allocations/#dcs.

Resources

Resource: Centre Storage (NSC)
Available capacity: 2 000 000 GiB
Note: Project storage for SNAC (Small, Medium and Large), LiU Local and DCS projects.

Resource: DCS (NSC)
Available capacity: 2 000 TiB
Note: Applications for large storage allocations will be evaluated at least twice per year, usually to coincide with the processing of SNAC large compute allocations.

NSC offers large (>50 TiB) storage allocations on our new high-performance Centre Storage/DCS system. Importantly, these large storage (DCS) allocations are for projects requiring active storage, NOT archiving. Alternative archiving resources are available through SNIC (see e.g. http://docs.snic.se/wiki/SweStore). DCS applications should demonstrate how data stored on the new Centre Storage will be used, e.g. for data processing/reduction, data mining, visualization, analytics, etc. Proposals will be evaluated at least twice per year, usually to coincide with the processing of SNAC large compute allocations.

Resource: Tetralith (NSC)
Available capacity: 35 x 1000 core-h/month
Note: This is core time for the special analysis nodes on Tetralith.

Tetralith, tetralith.nsc.liu.se, runs a CentOS 7 version of the NSC Cluster Software Environment, so most things will be familiar to Triolith users. You still use Slurm (e.g. sbatch, interactive, ...) to submit your jobs, ThinLinc is available on the login nodes, and applications are selected using "module".

All Tetralith compute nodes have 32 CPU cores. There will be 1832 "thin" nodes with 96 GiB of primary memory (RAM) and 60 "fat" nodes with 384 GiB. Each compute node has a local SSD disk (approximately 200 GB per node) where applications can store temporary files. All Tetralith nodes are interconnected with a 100 Gbps Intel Omni-Path network, which is also used to connect the existing storage. The Omni-Path network works in a similar way to the FDR Infiniband network in Triolith (e.g. with a fat-tree topology).

The Tetralith installation takes place in two phases. The first phase consists of 644 nodes and has a capacity that exceeds the computing capacity of Triolith; it was made available to users on August 23, 2018. Triolith was turned off on September 21, 2018, after which the second phase of the Tetralith installation will begin. NSC plans to have the entire Tetralith in operation no later than December 31, 2018 (i.e. for the next round of SNAC Large projects).
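
As an illustration of the job-submission workflow mentioned above, a minimal Slurm batch script for Tetralith might look like the sketch below. The project ID, module name, scratch variable and program name are placeholders/assumptions for illustration, not taken from NSC documentation.

    #!/bin/bash
    #SBATCH -A snic2018-x-yy        # project/account ID (placeholder)
    #SBATCH -n 32                   # 32 cores, i.e. one full Tetralith compute node
    #SBATCH -t 01:00:00             # requested wall time

    # Applications are selected using the module system
    module load someapp/1.0         # placeholder module name

    # Temporary files can be placed on the node-local SSD; the variable and
    # fallback path below are assumptions, check NSC documentation for the actual location
    export SCRATCH=${SNIC_TMP:-/tmp}

    # Launch the program on the allocated cores with standard Slurm
    srun ./my_program input.dat

Such a script would be submitted with sbatch (e.g. sbatch jobscript.sh), just as on Triolith, while interactive sessions use the "interactive" command mentioned above.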

Resource: Triolith (NSC)
Available capacity: 35 x 1000 core-h/month
Note: Triolith has been replaced by Tetralith.

Triolith (triolith.nsc.liu.se) was a capability cluster with a total of 24,320 cores and a peak performance of 428 Tflop/s. However, Triolith was shrunk by 576 nodes on April 3, 2017, as a result of a delay in funding a replacement system, and now has a peak performance of 260 Tflop/s and 16,368 compute cores. It is equipped with a fast interconnect for high performance in parallel applications. The operating system is CentOS 6.x x86_64.

Each of the 1520 (now 944) HP SL230s compute servers is equipped with two Intel E5-2660 (2.2 GHz Sandy Bridge) processors with 8 cores each, i.e. 16 cores per compute server. 56 of the compute servers have 128 GiB of memory each and the remaining 888 have 32 GiB each. The fast interconnect is Infiniband from Mellanox (FDR IB, 56 Gb/s) in a 2:1 blocking configuration.

Triolith has been replaced by a new system, Tetralith, which was made available to users on August 23, 2018. NSC plans to keep Triolith in operation and available to users until September 21, 2018; after that, Triolith will be permanently shut down and decommissioned.
