3D medical image segmentation with labeled and unlabeled data using autoencoders at the example of liver segmentation in CT images

03/17/2020
by Cheryl Sital, et al.

Automatic segmentation of anatomical structures with convolutional neural networks (CNNs) constitutes a large portion of research in medical image analysis. The majority of CNN-based methods rely on an abundance of labeled data for proper training. Labeled medical data is often scarce, but unlabeled data is more widely available. This necessitates approaches that go beyond traditional supervised learning and leverage unlabeled data for segmentation tasks. This work investigates the potential of autoencoder-extracted features to improve segmentation with a CNN. Two strategies were considered. First, transfer learning, where pretrained autoencoder features were used to initialize the convolutional layers of the segmentation network. Second, multi-task learning, where the tasks of segmentation and feature extraction, by means of input reconstruction, were learned and optimized simultaneously. A convolutional autoencoder was used to extract features from unlabeled data, and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images. For both strategies, experiments were conducted with varying amounts of labeled and unlabeled training data. The proposed learning strategies improved results in 75% of the experiments compared to training from scratch and increased the Dice score by up to 0.040 and 0.024 for ratios of unlabeled to labeled training data of about 32:1 and 12.5:1, respectively. The results indicate that both training strategies are more effective with a large ratio of unlabeled to labeled training data.
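To make the two strategies concrete, the following is a minimal PyTorch sketch (not the authors' code): a small 3D convolutional autoencoder is assumed to have been pretrained on unlabeled CT volumes, its encoder weights are copied into a segmentation network (transfer learning), and a joint segmentation-plus-reconstruction loss illustrates the multi-task variant. The architecture, layer sizes, class count, and loss weight are illustrative assumptions, not those of the paper.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small 3D convolutional encoder shared by both strategies (illustrative)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.features(x)

class ConvAutoencoder(nn.Module):
    """Autoencoder pretrained on unlabeled CT patches via input reconstruction."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = nn.Conv3d(32, 1, 3, padding=1)  # reconstructs the input
    def forward(self, x):
        return self.decoder(self.encoder(x))

class SegNet(nn.Module):
    """Segmentation network whose encoder can be initialized from the autoencoder."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.seg_head = nn.Conv3d(32, 2, 1)  # background / liver logits
        self.rec_head = nn.Conv3d(32, 1, 1)  # reconstruction head for multi-task learning
    def forward(self, x):
        f = self.encoder(x)
        return self.seg_head(f), self.rec_head(f)

# --- Strategy 1: transfer learning ------------------------------------------
ae = ConvAutoencoder()   # assume this was pretrained on unlabeled data
seg = SegNet()
seg.encoder.load_state_dict(ae.encoder.state_dict())  # reuse pretrained encoder weights

# --- Strategy 2: multi-task learning ----------------------------------------
x = torch.randn(1, 1, 32, 64, 64)          # dummy CT patch (batch, channel, D, H, W)
y = torch.randint(0, 2, (1, 32, 64, 64))   # dummy liver mask with class indices
logits, recon = seg(x)
# Joint objective: segmentation loss plus a weighted reconstruction term
loss = nn.CrossEntropyLoss()(logits, y) + 0.1 * nn.MSELoss()(recon, x)
loss.backward()
```

In practice the pretrained autoencoder weights would be loaded from a checkpoint rather than created in place, and the reconstruction weight (0.1 here) would be tuned; the sketch only shows how the two learning strategies plug into a standard training loop.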
