Simulations of cortical spiking neural network models and development of brain-like computing network architectures for applications in data analytics
Dnr: SNIC 2017/1-429
Type: SNAC Medium
Principal Investigator: Pawel Herman
Affiliation: Kungliga Tekniska högskolan
Start Date: 2017-11-01
End Date: 2018-11-01
Primary Classification: 10201: Computer Sciences
Secondary Classification: 10203: Bioinformatics (Computational Biology) (applications to be 10610)
Tertiary Classification: 20699: Other Medical Engineering
Webpage:

Abstract

This proposal is a continuation of our previous project aimed at developing large-scale neural network models. As before, we intend to utilise supercomputing infrastructure for the development and application of two major families of models: spiking neural networks for brain simulations, and non-spiking network architectures for complex pattern recognition and spatio-temporal quantification problems.

With regard to simulations of biologically detailed cortical systems, the main focus is on memory models within the framework of modular attractor memory networks. We have developed large-scale models in the parallel NEURON and NEST simulators and have validated alternative implementations. The main emphasis is on studying the dynamics of working memory models that correlate with information maintenance phenomena. This analysis can now be performed in close relationship with mesoscopic biological recordings obtained from our experimental collaborators. Recently we have also delved into the problem of memory encoding and have developed a Hebbian plasticity rule to phenomenologically account for synaptic learning processes. We are currently testing its functionality and studying its effects on the memory network dynamics. In addition, incorporating synaptic learning into our models gives us an opportunity to build larger cognitive architectures that also account for long-term memory storage. This will pave the way for more comprehensive future investigations into the functional and dynamical aspects of the brain's holistic cortical memory system.

The major difference between this application and the previous one lies in the more applied nature of our developments in the realm of non-spiking abstract neural network architectures. We intend to tailor our methods to concrete applications, with the main emphasis on evaluating them on specific instances of pattern recognition and temporal prediction problems. Firstly, building on our earlier exploratory analyses of static neuroimaging data, such as PET or fMRI, we are going to advance our efforts to employ our extended neural network architectures for extracting patterns in the spatio-temporal domain (quantification of multivariate stochastic processes) of EEG and MEG data. Secondly, we continue our collaborative work with industrial partners on wind power prediction, anomaly detection, and medical diagnostics problems. The validation of our brain-like connectionist approach involves a comparative analysis with classical machine learning approaches. Thirdly, we intend to test our sequence learning networks on canonical text prediction.

Most simulation cases require a large number of cores with a relatively low memory load per core. In selected instances, simulations may take up to 6-7 hours. For the applied work, we will rely on data storage for training the models. We also plan to test the deployment of our methods and of more traditional machine learning approaches on GPU clusters for benchmarking purposes. Unlike in the previous project (SNIC 2017/1-167), we already have at our disposal specific reference algorithms, written using the TensorFlow library, to run on GPUs, and we would therefore like to request some, even moderate, access to Tegner (3-5k core-hours per month).
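To make the spiking-simulation workload concrete, the following is a minimal PyNEST sketch of the kind of simulation described above: a small, recurrently connected population of leaky integrate-and-fire neurons driven by Poisson background noise. The model, population size, and all parameters are illustrative assumptions rather than the project's actual attractor memory network, and the spike-recording device name follows NEST 2.x.

    import nest

    nest.ResetKernel()

    # Illustrative stand-in for one module of an attractor network:
    # 100 leaky integrate-and-fire neurons (assumed size).
    pop = nest.Create("iaf_psc_alpha", 100)

    # Background drive and a spike-recording device (NEST 2.x name).
    noise = nest.Create("poisson_generator", params={"rate": 15000.0})
    detector = nest.Create("spike_detector")

    # Sparse recurrent excitation: each neuron draws 10 random inputs.
    nest.Connect(pop, pop,
                 conn_spec={"rule": "fixed_indegree", "indegree": 10},
                 syn_spec={"weight": 20.0, "delay": 1.5})
    nest.Connect(noise, pop, syn_spec={"weight": 10.0, "delay": 1.0})
    nest.Connect(pop, detector)

    # One second of biological time; when launched under MPI,
    # NEST distributes neurons across ranks automatically.
    nest.Simulate(1000.0)

    print("spikes recorded:", nest.GetStatus(detector, "n_events")[0])

Run under MPI (e.g. mpirun -np N python script.py), a script of this kind scales out over many cores with a low memory load per rank, which matches the resource profile requested above.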
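The abstract does not specify the form of the Hebbian plasticity rule, so the sketch below only illustrates the generic principle, namely that connections between co-active units are strengthened, using a NumPy outer-product update in an attractor-style weight matrix. The learning rate, bound, pattern sparsity, and repetition count are all assumptions made for illustration.

    import numpy as np

    def hebbian_update(w, pre, post, eta=0.01, w_max=1.0):
        # Generic Hebbian step: potentiate weights where pre- and
        # postsynaptic activity coincide, with a hard upper bound.
        return np.minimum(w + eta * np.outer(post, pre), w_max)

    rng = np.random.default_rng(0)
    n = 50
    w = np.zeros((n, n))

    # Store a few sparse random binary patterns by repeated updates.
    patterns = (rng.random((3, n)) < 0.2).astype(float)
    for p in patterns:
        for _ in range(100):
            w = hebbian_update(w, pre=p, post=p)

    np.fill_diagonal(w, 0.0)  # no self-connections
    mask = patterns[0] > 0
    print("mean weight within first pattern:", w[np.ix_(mask, mask)].mean())

After repeated presentations, the units belonging to a stored pattern end up strongly interconnected, which is the substrate that allows an attractor network to maintain and complete memorised patterns.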
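For the GPU benchmarking mentioned in connection with Tegner, a minimal TensorFlow sketch in the 1.x-style API (current at the time of writing) is given below; it simply places a large matrix product on a GPU and times it. The matrix size and device string are assumptions, and none of the project's actual TensorFlow code is reproduced here.

    import time
    import tensorflow as tf  # TensorFlow 1.x-style API

    # Pin a large dense matrix product to the first GPU.
    with tf.device("/gpu:0"):
        a = tf.random_normal([4096, 4096])
        b = tf.random_normal([4096, 4096])
        c = tf.matmul(a, b)

    # allow_soft_placement falls back to CPU if no GPU is present.
    config = tf.ConfigProto(allow_soft_placement=True,
                            log_device_placement=True)
    with tf.Session(config=config) as sess:
        t0 = time.time()
        sess.run(c)
        print("matmul took %.3f s" % (time.time() - t0))

The same script run against a CPU-only TensorFlow build provides a simple baseline for the comparative GPU benchmarking described above.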