2018
Authors
Costa, P; Galdran, A; Meyer, MI; Niemeijer, M; Abramoff, M; Mendonca, AM; Campilho, A;
Publication
IEEE TRANSACTIONS ON MEDICAL IMAGING
Abstract
In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model that attempts to classify its output as real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a generative adversarial network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying reasonable visual quality.
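The two-stage pipeline the abstract describes lends itself to a compact sketch. Below is a minimal illustration (not the authors' implementation), assuming PyTorch, toy fully connected networks, and an assumed 64x64 resolution: an adversarial autoencoder whose encoded vessel-tree codes are pushed toward N(0, I) by a latent discriminator, plus a conditional generator that translates a vessel map into a color image. The pix2pix-style pair discriminator is an assumption; the abstract only states that a GAN performs the translation. Sampling a new image then amounts to drawing z ~ N(0, I) and running it through both stages.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H = W = 64      # assumed vessel-map resolution
LATENT = 32     # assumed latent dimensionality

# Stage 1: adversarial autoencoder over binary vessel maps.
enc = nn.Sequential(nn.Flatten(), nn.Linear(H * W, 256), nn.ReLU(),
                    nn.Linear(256, LATENT))
dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                    nn.Linear(256, H * W), nn.Sigmoid())
d_latent = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(),
                         nn.Linear(64, 1))   # N(0, I) samples vs. codes

# Stage 2: conditional generator mapping a vessel map to a 3-channel image,
# with a discriminator that judges (vessel map, image) pairs.
gen = nn.Sequential(nn.Flatten(), nn.Linear(H * W, 512), nn.ReLU(),
                    nn.Linear(512, 3 * H * W), nn.Tanh())
d_image = nn.Sequential(nn.Flatten(), nn.Linear(4 * H * W, 256), nn.ReLU(),
                        nn.Linear(256, 1))

def aae_losses(vessels):
    """Losses for one adversarial-autoencoder step on vessel maps in [0, 1]."""
    z = enc(vessels)
    recon = dec(z).view_as(vessels)
    rec_loss = F.binary_cross_entropy(recon, vessels)
    # Adversarial regularization: encoded codes should look like N(0, I).
    ones, zeros = torch.ones(len(z), 1), torch.zeros(len(z), 1)
    d_loss = (F.binary_cross_entropy_with_logits(d_latent(torch.randn_like(z)), ones)
              + F.binary_cross_entropy_with_logits(d_latent(z.detach()), zeros))
    g_loss = F.binary_cross_entropy_with_logits(d_latent(z), ones)
    return rec_loss, d_loss, g_loss

def cgan_losses(vessels, images):
    """Losses for the vessel-to-image translation stage."""
    fake = gen(vessels).view_as(images)
    real_pair = torch.cat([vessels, images], dim=1)
    fake_pair = torch.cat([vessels, fake], dim=1)
    ones, zeros = torch.ones(len(images), 1), torch.zeros(len(images), 1)
    d_loss = (F.binary_cross_entropy_with_logits(d_image(real_pair), ones)
              + F.binary_cross_entropy_with_logits(d_image(fake_pair.detach()), zeros))
    g_loss = F.binary_cross_entropy_with_logits(d_image(fake_pair), ones)
    return d_loss, g_loss

def sample_pair(n=1):
    """End-to-end sampling: latent code -> vessel tree -> color retinal image."""
    z = torch.randn(n, LATENT)
    vessels = dec(z).view(n, 1, H, W)
    color = gen(vessels).view(n, 3, H, W)
    return vessels, color
```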
2017
Authors
Savelli, B; Bria, A; Galdran, A; Marrocco, C; Molinara, M; Campilho, A; Tortorella, F;
Publication
2017 IEEE 30TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)
Abstract
Assessment of retinal vessels is fundamental for the diagnosis of many disorders such as heart disease, diabetes, and hypertension. Imaging of the retina with advanced fundus cameras has become standard in the computer-assisted diagnosis of ophthalmic disorders. Modern cameras produce high-quality color digital images, but during the acquisition process the light reflected by the retinal surface generates luminosity and contrast variations. Irregular illumination can introduce severe distortions in the resulting images, decreasing the visibility of anatomical structures and consequently degrading the performance of automated segmentation of these structures. In this paper, a novel approach for illumination correction of color fundus images is proposed and applied as a preprocessing step for retinal vessel segmentation. Our method builds on the connection between two different phenomena, shadows and haze, and works by removing the haze from the image in the inverted intensity domain. This is shown to be equivalent to correcting the nonuniform illumination in the original intensity domain. We tested the proposed method as the preprocessing stage of two vessel segmentation methods, one unsupervised, based on mathematical morphology, and one supervised, based on deep convolutional neural networks (CNNs). Experiments were performed on the publicly available retinal image database DRIVE. Statistically significantly better vessel segmentation performance was achieved in both cases when illumination correction was applied.
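A minimal sketch of the idea, not the paper's implementation: invert the image, remove the haze, and invert back. The dehazing step below uses a simplified dark-channel-prior method (He et al.) as an assumed stand-in, since the abstract does not fix a particular dehazing algorithm; the helper names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Simplified dark-channel-prior dehazing on an RGB image in [0, 1]."""
    dark = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    idx = dark.ravel().argsort()[-max(1, dark.size // 1000):]
    A = np.maximum(img.reshape(-1, 3)[idx].mean(axis=0), 1e-6)
    t = 1.0 - omega * dark_channel(img / A, patch)   # transmission estimate
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

def correct_illumination(img):
    """Illumination correction via dehazing in the inverted intensity domain:
    poorly lit regions of img behave like haze in 1 - img."""
    return 1.0 - dehaze(1.0 - img)
```

For a fundus image loaded as an 8-bit RGB array, `correct_illumination(rgb / 255.0)` returns the illumination-corrected image to feed into the segmentation stage.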
2013
Authors
Kamel, M; Campilho, A;
Publication
Lecture Notes in Computer Science
Abstract
2016
Authors
Campilho, A; Karray, F;
Publication
ICIAR
Abstract
2014
Authors
Campilho, A; Kamel, M;
Publication
Lecture Notes in Computer Science
Abstract
2015
Authors
Kamel, M; Campilho, A;
Publication
ICIAR
Abstract