SNIC SUPR
Deep learning in medical image analysis
Dnr: SNIC 2018/3-406
Type: SNAC Medium
Principal Investigator: Tommy Löfstedt
Affiliation: Umeå universitet
Start Date: 2018-10-01
End Date: 2019-10-01
Primary Classification: 30199: Other Basic Medicine
Secondary Classification: 10207: Computer Vision and Robotics (Autonomous Systems)
Webpage:

Allocation

Abstract

Cancer treatment has been identified by the World Health Organization as a priority in its strategic development goals for the period 2016-2020. During 2012, 14.1 million new cases of cancer were reported worldwide, and 8.2 million people died from the disease. In Sweden, one in three people will suffer from cancer at some point during their lives. An ageing population will lead to more cancer cases at the same time as fewer citizens are working, a combination that requires the health care system to become more resource-efficient.

Deep learning offers new perspectives in this regard. Deep convolutional networks can be used to automate routine and time-consuming parts of the radiotherapy workflow, including automatic segmentation of tumors and organs at risk, synthetic CT generation for dose planning, and image registration for optical-flow adjustments. Deep learning can make these workflows significantly more time-efficient by automating steps that would otherwise take a long time and tie up human resources, such as oncologists and radiation nurses. For instance, manually segmenting a patient with head and neck cancer may take up to six hours, while an automatic segmentation may take less than a second. Similarly, a CT scan can take up to 30 minutes, while a synthetic CT image can be generated from an MR image in less than a second.

The last few years have seen a breakthrough in deep learning usage and methodology, and current methods have been shown to hold up to clinical requirements. In this project, we will utilise and evaluate such methods for use in radiotherapy applications. More specifically, we will work on segmentation of tumors and organs at risk, synthetic CT generation, and registration adjustments. Such methods take only an instant to apply, but training and adapting them to the available clinical data may take weeks on a desktop computer with a graphics processing unit. We would therefore like to use the parallel infrastructure at HPC2N in order to scale the training and hyper-parameter searches to larger training data and more complex models with more parameters.
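
As an illustration of the kind of model the abstract refers to, the sketch below shows a minimal convolutional encoder-decoder for per-voxel tumor segmentation, written with tf.keras. It is an assumption made for illustration only: the project's actual architectures, input sizes, and loss functions are not specified in the abstract, and the 256x256 single-channel input shape is a placeholder.

# Minimal sketch (not the project's actual model) of a convolutional
# encoder-decoder for binary tumor segmentation, using tf.keras.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_segmentation_net(input_shape=(256, 256, 1)):
    """Small U-Net-style network: downsample, upsample, skip connection."""
    inputs = layers.Input(shape=input_shape)
    # Encoder: two convolutional stages with one downsampling step
    c1 = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D(2)(c1)
    c2 = layers.Conv2D(64, 3, activation="relu", padding="same")(p1)
    # Decoder: upsample and concatenate a skip connection from the encoder
    u1 = layers.UpSampling2D(2)(c2)
    m1 = layers.concatenate([u1, c1])
    c3 = layers.Conv2D(32, 3, activation="relu", padding="same")(m1)
    # Per-pixel probability that the pixel belongs to the tumor
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c3)
    return models.Model(inputs, outputs)

model = build_segmentation_net()
model.compile(optimizer="adam", loss="binary_crossentropy")

Once trained, applying such a network to a new image is a single forward pass, which is why inference takes well under a second even though training may take weeks.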
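
The request for parallel infrastructure corresponds to data-parallel training (and repeated training runs for hyper-parameter search) across multiple GPUs. The following is a minimal sketch of that pattern, assuming TensorFlow's tf.distribute.MirroredStrategy; the arrays train_x and train_y are random placeholders standing in for MR slices and binary masks, not the project's clinical data.

# Hedged sketch of multi-GPU data-parallel training with TensorFlow's
# MirroredStrategy; all data below are random placeholders.
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # one model replica per visible GPU
with strategy.scope():
    model = build_segmentation_net()  # from the previous sketch
    model.compile(optimizer="adam", loss="binary_crossentropy")

# Placeholder inputs and binary masks (shapes only; not real data).
train_x = np.random.rand(8, 256, 256, 1).astype("float32")
train_y = (np.random.rand(8, 256, 256, 1) > 0.5).astype("float32")

# The global batch is split across replicas and gradients are averaged,
# so adding GPUs shortens wall-clock time per epoch.
model.fit(train_x, train_y, batch_size=4, epochs=1)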