Deep learning is increasingly used in medical imaging segmentation tasks, but detection of lesions in eye fundus images (EFI) poses many difficult challenges related to small lesion sizes, similarity to other lesions and structures, low contrast, and varying conformations. During training, the loss function directs backpropagation learning in the deep convolutional neural networks (DCNN) that are used, and it is therefore fundamental to the optimization procedure. Alternative formulations exist, such as cross entropy, Jaccard and Dice. But does the choice of loss influence quality decisively in the difficult context of EFI lesions? And what about the network architecture? As part of our effort to improve these approaches, we evaluate alternative loss functions as well as alternative architectures. We show that the choice of a suitable architecture and loss function can double detection quality for some of the small, hard-to-detect lesions, but we also show that further research is still required to improve the results.
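For reference, the loss formulations named above (cross entropy, Jaccard, Dice) can be written as simple per-pixel soft losses. The sketch below is only illustrative (NumPy, binary lesion masks) and is not the exact formulation used in the evaluation.

    # Illustrative soft losses for a binary lesion mask (not the paper's code).
    # `pred` holds per-pixel probabilities, `target` the ground-truth mask in {0, 1}.
    import numpy as np

    def cross_entropy_loss(pred, target, eps=1e-7):
        pred = np.clip(pred, eps, 1.0 - eps)
        return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

    def dice_loss(pred, target, eps=1e-7):
        intersection = np.sum(pred * target)
        return 1.0 - (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

    def jaccard_loss(pred, target, eps=1e-7):
        intersection = np.sum(pred * target)
        union = np.sum(pred) + np.sum(target) - intersection
        return 1.0 - (intersection + eps) / (union + eps)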
Deep learning outperforms prior art in medical imaging tasks. It has been applied to segmentation of Magnetic Resonance Imaging (MRI) scans, where consecutive slices capture the body structures relevant for visualization and diagnosis of a medical condition. In this work we investigate experimentally the factors that improve segmentation performance on MRI sequences of abdominal organs, including network architecture, pre-training, data augmentation and improvements to the loss function. After comparing segmentation network architectures, we choose the best performing one and experiment with improvements (data augmentation, training choices). Because the evaluation metrics are fundamental, and the Intersection over Union (IoU) of each organ in particular, we also change the loss function to an IoU-based loss and evaluate the resulting quality. We show that DeepLabV3 outperforms its competitors by 20 percentage points (pp) or more (depending on the competitor), that data augmentation and further enhancements improve the performance of DeepLabV3 by 12 pp on average, and that the loss function improves performance by up to 13 pp as well. Finally, we discuss challenges and further work.
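As a concrete illustration of the per-organ metric mentioned above, a per-class IoU can be computed as sketched below (assumed NumPy code, not the paper's evaluation script); `pred` and `target` are integer label maps with 0 as background.

    # Illustrative per-class Intersection over Union for a multi-organ label map.
    import numpy as np

    def per_class_iou(pred, target, num_classes):
        ious = {}
        for c in range(1, num_classes):          # skip background class 0
            p, t = (pred == c), (target == c)
            union = np.logical_or(p, t).sum()
            if union == 0:
                continue                         # organ absent from both masks
            ious[c] = np.logical_and(p, t).sum() / union
        return ious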
Segmentation of lesions in eye fundus images (EFI) is a difficult problem, due to the small sizes, varying morphologies, similarity between lesion types, and lack of contrast. Today, deep learning segmentation architectures are state-of-the-art in most segmentation tasks. But metrics need to be interpreted adequately to avoid wrong conclusions; for example, we show that a 90% global accuracy of the Fully Convolutional Network (FCN) does not mean it segments lesions well. In this work we test and compare deep segmentation networks applied to finding lesions in eye fundus images, focusing on the comparison and on how metrics should really be interpreted to avoid mistakes, and why. In light of this analysis, we conclude by discussing further challenges that lie ahead.
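The point about global accuracy can be seen with a small back-of-the-envelope example (hypothetical numbers, not taken from the paper): if lesions cover only a small fraction of the pixels, a model that predicts background everywhere already reaches a high global accuracy while its lesion IoU is zero.

    # Hypothetical figures: lesions cover ~2% of the image.
    lesion_fraction = 0.02
    global_accuracy = 1.0 - lesion_fraction   # all-background prediction: 98% accuracy
    lesion_iou = 0.0                          # yet no lesion pixel is segmented
    print(f"global accuracy = {global_accuracy:.0%}, lesion IoU = {lesion_iou:.0%}")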
Histologic grading from images has become widely accepted as a powerful indicator of prognosis in breast cancer. Automated grading can assist the doctor in diagnosing the medical condition, but algorithms still lag behind human experts in this task, as human experts excel at identifying parts, detecting characteristics and relating concepts and semantics. This can be improved by making algorithms distinguish and characterize the most relevant types of objects in the image, and characterize the image based on them. We propose a three-stage automated approach named OBI (Object-based Identification) with the following steps: 1. Object-based identification, which identifies the “type of object” of each region and characterizes it; 2. Learn about the image, which characterizes the distribution of characteristics of those object types in the image; 3. Determination of the degree of malignancy, which assigns a degree of malignancy based on a classifier over the object type characteristics (the statistical distribution of characteristics of structures) in the image. Our proof-of-concept prototype uses the publicly available Mytos-Atypia dataset [19] to compare accuracy against alternatives. Results summary: human expert (medical doctor) 84%, classic machine learning 74%, convolutional neural networks (CNN) 78%, our approach (OBI) 86%. As future work, we expect to generalize our results to other datasets and problems, explore mimicking knowledge of human concepts further, merge the object-based approach with CNN techniques, and adapt it to other medical imaging contexts.
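The three-stage structure can be sketched as follows; the object identification, the per-type statistics and the classifier below are hypothetical placeholders intended only to show the data flow, not the paper's implementation.

    # Hypothetical sketch of the OBI data flow (placeholders, not the actual method).
    import numpy as np

    def identify_objects(image):
        # Stage 1 (placeholder): one (object_type, feature_vector) per detected region.
        return [("nucleus", np.random.rand(4)) for _ in range(10)]

    def summarize_object_stats(objects, object_types=("nucleus",)):
        # Stage 2: distribution of characteristics per object type (mean and std here).
        descriptor = []
        for t in object_types:
            feats = np.array([f for typ, f in objects if typ == t])
            descriptor.extend(np.concatenate([feats.mean(axis=0), feats.std(axis=0)]))
        return np.array(descriptor)

    def grade_image(image, grade_classifier):
        # Stage 3: degree of malignancy predicted from the image-level descriptor.
        descriptor = summarize_object_stats(identify_objects(image))
        return grade_classifier.predict([descriptor])[0]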
Automated analysis of histological images helps diagnose and further classify breast cancer. Fully automated approaches can be used to pinpoint images for further analysis by the medical doctor. But tissue images are especially challenging for both manual and automated approaches, due to mixed patterns and textures, where malignant regions are sometimes difficult to detect unless they are at very advanced stages. Some of the major challenges are related to irregular and very diffuse patterns, as well as the difficulty of defining winning features and classifier models. Although the diffuse nature also makes it hard to segment the image correctly into regions, it is still crucial to extract low-level features over individual regions instead of the whole image, and to select the features with the best outcomes. In this paper we report on our experiments building a region classifier with a simple subspace division and a feature selection model that improves results over image-wide and/or limited feature sets. Experimental results show modest accuracy for a set of classifiers applied over the whole image, while the combination of image division, per-region extraction of low-level features and feature selection, together with a neural network classifier, achieved the best accuracy for the dataset and settings used in our experiments. Future work involves deep learning techniques, adding structure semantics, and embedding the approach as a tumor-finding helper in a practical medical imaging application.
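A minimal sketch of that pipeline, assuming scikit-learn and very simple per-region statistics as the low-level features (illustrative only; not the feature set or classifier configuration used in the paper):

    # Illustrative pipeline: grid subdivision, per-region low-level features,
    # feature selection, and a neural network classifier.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def region_features(image, grid=4):
        # Divide the image into grid x grid regions and take mean/std per region.
        h, w = image.shape[:2]
        feats = []
        for i in range(grid):
            for j in range(grid):
                patch = image[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
                feats.extend([patch.mean(), patch.std()])
        return np.array(feats)

    # images, labels = ...  (dataset loading omitted)
    # X = np.stack([region_features(img) for img in images])
    # model = make_pipeline(SelectKBest(f_classif, k=16), MLPClassifier(max_iter=500))
    # model.fit(X, labels)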