SNIC SUPR
C3SE MStud 2018

Decided

This round has been closed as all proposals have been handled.

Note! This round is only available to teachers in the M program at Chalmers who have been told by Mikael Enelund to apply (i.e., if you are uncertain, this round is probably not for you).

More information about this round is available at https://www.c3se.chalmers.se/about/Hebbe/#mstud.

Resources

Resource  Centre  Requested  Available  Unit                 Note
Hebbe     C3SE    236        58         x 1000 core-h/month  Limited to the mstud partition, consisting of 4 nodes.
The Hebbe cluster is built on Intel Xeon E5-2650v3 (code-named "Haswell") CPUs. The system has a total of 323 compute nodes (6480 cores in total) with 27 TiB of RAM and 6 GPUs. More specifically:
  • 260 x 64 GB of RAM (249 of these available for SNIC users)
  • 46 x 128 GB of RAM (31 of these available for SNIC users)
  • 7 x 256 GB of RAM (not available for SNIC users)
  • 3 x 512 GB of RAM (1 of these available for SNIC users)
  • 1 x 1024 GB of RAM
  • 4 x 64 GB of RAM and an NVIDIA Tesla K40 GPU (2 of these available for SNIC users)
  • 2 x 256 GB of RAM and an NVIDIA K4200 for remote graphics
Each node has 2 CPUs with 10 cores each. There is a 10 Gigabit Ethernet network used for logins, a dedicated management network, and an InfiniBand high-speed/low-latency network for parallel computations and filesystem access. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs.
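A job on the round's dedicated mstud partition would be submitted with a Slurm batch script along these lines (a minimal sketch: the project account C3SE2018-X-YZ, the module name, and the program name are placeholders, and the core and walltime requests are examples, not limits set by this round):

```shell
#!/bin/bash
#SBATCH -A C3SE2018-X-YZ   # project/account from the allocation (placeholder)
#SBATCH -p mstud           # the dedicated 4-node mstud partition on Hebbe
#SBATCH -n 20              # 20 cores = one full Hebbe node (2 CPUs x 10 cores)
#SBATCH -t 0-01:00:00      # requested walltime: 1 hour

module load foss           # example toolchain module (site-dependent)
srun ./my_program          # launch the (hypothetical) parallel program
```

The script is submitted with `sbatch jobscript.sh`; requesting a multiple of 20 cores keeps allocations aligned with whole nodes.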
Vera      C3SE    0          90         x 1000 core-h/month
The Vera cluster is built on Intel Xeon Gold 6130 (code-named "Skylake") CPUs, with 32 cores and 64 threads per node. In total there are 236 compute nodes (7552 cores in total) with 27 TiB of RAM and 4 GPUs. More specifically:
  • 205 x 92 GB of RAM
  • 19 x 192 GB of RAM
  • 6 x 384 GB of RAM
  • 2 x 768 GB of RAM
  • 2 x 384 GB of RAM and 2 NVIDIA Tesla V100 GPUs each
  • 2 login nodes with 192 GB of RAM and an NVIDIA P2000 for remote graphics
There is a 25 Gigabit Ethernet network used for logins and storage, a dedicated management network, and an InfiniBand high-speed/low-latency network for parallel computations. The nodes are equipped with Mellanox ConnectX-3 FDR InfiniBand 56 Gbps HCAs.
