Continued development of the Chunks and Tasks framework

Dnr:

SNIC 2017/1-216

Type:

SNAC Medium

Principal Investigator:

Elias Rudberg

Affiliation:

Uppsala universitet

Start Date:

2017-05-01

End Date:

2018-05-01

Primary Classification:

10205: Software Engineering

Secondary Classification:

10407: Theoretical Chemistry

Tertiary Classification:

10105: Computational Mathematics

Webpage:

http://chunks-and-tasks.org/

Abstract

The goal of this project is to continue our work on the Chunks and Tasks programming model; see our article "Chunks and Tasks: a programming model for parallelization of dynamic algorithms" (http://dx.doi.org/10.1016/j.parco.2013.09.006), published in Parallel Computing, and our latest paper "Locality-aware parallel block-sparse matrix-matrix multiplication using the Chunks and Tasks programming model" (http://dx.doi.org/10.1016/j.parco.2016.06.005).

The project concerns the development of the Chunks and Tasks programming model for parallel implementation of methods that require dynamic distribution of both work and data. Such methods are difficult to implement using standard languages or libraries such as MPI, which leave it to the user to distribute both work and data. In an application program that uses the Chunks and Tasks programming model, the user instead expresses the algorithm in terms of chunks and tasks, without specifying where the work should be performed or how the data should be distributed. Our pilot C++ Chunks and Tasks runtime library implementation uses MPI and pthreads to distribute the work and data of Chunks and Tasks application programs on clusters of multicore machines.

This allocation will be used for development and evaluation of the Chunks and Tasks model and library implementations, as well as for starting the work on distributed-memory parallelization of the Ergo quantum chemistry code (http://ergoscf.org) using Chunks and Tasks. We have other medium projects on Triolith and Beskow, but access to Rackham would also be very useful to us, since we want to make sure our parallelization framework works well on several different systems.