The cluster has 15744 cores with a peak performance of over 150 Tflop/s. For high parallel performance, the system is equipped with a high-bandwidth, low-latency QDR InfiniBand interconnect with full bisection bandwidth. All nodes have at least 2 GB/core, and some nodes have over 8 GB/core. For more information about the system and available software, see the HPC2N web pages.
Access to the Large Memory nodes is handled through the 'Kebnekaise Large Memory' resource.
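As a rough sketch of what a batch job targeting these nodes might look like, assuming the cluster uses Slurm and that the Large Memory nodes are exposed as a partition named "largemem" (the partition name and the project ID placeholder are assumptions, not taken from this page; check the HPC2N web pages for the actual names):

    #!/bin/bash
    # Hypothetical Slurm batch script for the Large Memory nodes.
    # "largemem" is an assumed partition name; SNICXXXX-YY-ZZ is a
    # placeholder for your project/account ID.
    #SBATCH -A SNICXXXX-YY-ZZ   # project/account from your allocation
    #SBATCH -p largemem         # assumed partition for Large Memory nodes
    #SBATCH -N 1                # one node
    #SBATCH -t 01:00:00         # one hour wall time

    srun ./my_memory_hungry_app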
Note: Requests for GPU nodes and KNL nodes must be explicitly specified in the user's proposal. Note also that the GPU nodes and the KNL nodes are charged differently than ordinary compute nodes.
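The requirement above concerns the allocation proposal itself. At the job level, a request for GPU resources might look like the following sketch, assuming Slurm's generic-resource (GRES) mechanism; the GPU type string "k80" and the counts are assumptions, so consult the HPC2N web pages for the actual resource names:

    #!/bin/bash
    # Hypothetical Slurm batch script requesting GPUs via GRES.
    # The GPU type "k80" is an assumption; SNICXXXX-YY-ZZ is a
    # placeholder for your project/account ID.
    #SBATCH -A SNICXXXX-YY-ZZ   # project/account from your allocation
    #SBATCH -N 1                # one node
    #SBATCH --gres=gpu:k80:2    # request 2 GPUs of assumed type "k80"
    #SBATCH -t 00:30:00         # thirty minutes wall time

    srun ./my_gpu_app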