
Publications by Aurélio Campilho

2020

DR|GRADUATE: Uncertainty-aware deep learning-based diabetic retinopathy grading in eye fundus images

Authors
Araújo, T; Aresta, G; Mendonça, L; Penas, S; Maia, C; Carneiro, A; Mendonça, AM; Campilho, A;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Diabetic retinopathy (DR) grading is crucial in determining the adequate treatment and follow-up of patients, but the screening process can be tiresome and prone to errors. Deep learning approaches have shown promising performance as computer-aided diagnosis (CAD) systems, but their black-box behaviour hinders clinical application. We propose DR|GRADUATE, a novel deep learning-based DR grading CAD system that supports its decision by providing a medically interpretable explanation and an estimation of how uncertain that prediction is, allowing the ophthalmologist to measure how much that decision should be trusted. We designed DR|GRADUATE taking into account the ordinal nature of the DR grading problem. A novel Gaussian-sampling approach built upon a Multiple Instance Learning framework allows DR|GRADUATE to infer an image grade associated with an explanation map and a prediction uncertainty while being trained only with image-wise labels. DR|GRADUATE was trained on the Kaggle DR detection training set and evaluated across multiple datasets. In DR grading, a quadratic-weighted Cohen's kappa (κ) between 0.71 and 0.84 was achieved on five different datasets. We show that high κ values occur for images with low prediction uncertainty, indicating that this uncertainty is a valid measure of the predictions' quality. Further, bad-quality images are generally associated with higher uncertainties, showing that images not suitable for diagnosis indeed lead to less trustworthy predictions. Additionally, tests on unfamiliar medical image data types suggest that DR|GRADUATE allows outlier detection. The attention maps generally highlight regions of interest for diagnosis. These results show the great potential of DR|GRADUATE as a second-opinion system in DR severity grading.
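The Gaussian-sampling uncertainty idea described in the abstract can be illustrated with a toy Monte Carlo sketch. This is not the paper's architecture: `logit_mean` and `logit_std` are hypothetical per-image outputs standing in for the model's learned Gaussian parameters, and the 0–4 range reflects the standard DR grading scale.

```python
import numpy as np

def grade_with_uncertainty(logit_mean, logit_std, n_samples=1000, rng=None):
    """Toy Gaussian-sampling sketch (illustrative only): draw samples
    from a Gaussian over the model's ordinal output, report the modal
    grade and the sample spread as an uncertainty score."""
    rng = np.random.default_rng(rng)
    samples = rng.normal(logit_mean, logit_std, size=n_samples)
    # Round continuous samples onto the ordinal DR scale (grades 0-4)
    grades = np.clip(np.rint(samples), 0, 4).astype(int)
    mean_grade = int(np.bincount(grades, minlength=5).argmax())
    uncertainty = float(samples.std())
    return mean_grade, uncertainty

# A tight predictive distribution yields low uncertainty...
g1, u1 = grade_with_uncertainty(2.0, 0.1, rng=0)
# ...while a wide one yields high uncertainty on the same grade.
g2, u2 = grade_with_uncertainty(2.0, 1.5, rng=0)
```

The point mirrors the abstract's finding: predictions whose sampled grades disagree widely should be trusted less.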

2019

Image Analysis and Recognition - 16th International Conference, ICIAR 2019, Waterloo, ON, Canada, August 27-29, 2019, Proceedings, Part I

Authors
Karray, F; Campilho, A; Yu, ACH;

Publication
ICIAR

Abstract

2019

Image Analysis and Recognition - 16th International Conference, ICIAR 2019, Waterloo, ON, Canada, August 27-29, 2019, Proceedings, Part II

Authors
Karray, F; Campilho, A; Yu, ACH;

Publication
ICIAR

Abstract

2020

O-MedAL: Online active deep learning for medical image analysis

Authors
Smailagic, A; Costa, P; Gaudio, A; Khandelwal, K; Mirshekari, M; Fagert, J; Walawalkar, D; Xu, SS; Galdran, A; Zhang, P; Campilho, A; Noh, HY;

Publication
WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY

Abstract
Active learning (AL) methods create an optimized labeled training set from unlabeled data. We introduce a novel online active deep learning method for medical image analysis, extending our MedAL AL framework with new results in this paper. A novel sampling method queries the unlabeled examples that maximize the average distance to all training set examples. Our online method enhances the performance of its underlying baseline deep network. These novelties yield significant performance improvements: improving the underlying deep network's accuracy by 6.30%, using only 25% of the labeled dataset to reach baseline accuracy, reducing the number of backpropagated images during training by as much as 67%, and demonstrating robustness to class imbalance in binary and multiclass tasks. This article is categorized under: Technologies > Machine Learning; Technologies > Classification; Application Areas > Health Care.
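The sampling rule stated in the abstract — query the unlabeled example with the largest average distance to all training set examples — can be sketched in a few lines. Feature extraction is abstracted away here: rows are assumed to be fixed-length feature vectors, and plain Euclidean distance stands in for whatever embedding distance the framework actually uses.

```python
import numpy as np

def query_most_distant(unlabeled, labeled):
    """Return the index of the unlabeled example whose average
    Euclidean distance to all labeled examples is largest."""
    # Pairwise distances, shape (n_unlabeled, n_labeled)
    d = np.linalg.norm(unlabeled[:, None, :] - labeled[None, :, :], axis=-1)
    return int(d.mean(axis=1).argmax())

labeled = np.array([[0.0, 0.0], [1.0, 0.0]])
unlabeled = np.array([[0.5, 0.1], [5.0, 5.0], [1.0, 1.0]])
idx = query_most_distant(unlabeled, labeled)  # the outlying point wins
```

Intuitively, the rule favours examples unlike anything already labeled, which is what drives the label-efficiency gains reported above.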

2020

Optic Disc and Fovea Detection in Color Eye Fundus Images

Authors
Mendonça, AM; Melo, T; Araújo, T; Campilho, A;

Publication
Image Analysis and Recognition - 17th International Conference, ICIAR 2020, Póvoa de Varzim, Portugal, June 24-26, 2020, Proceedings, Part II

Abstract
The optic disc (OD) and the fovea are relevant landmarks in fundus images. Their localization and segmentation can facilitate the detection of some retinal lesions and the assessment of their importance to the severity and progression of several eye disorders. Distinct methodologies have been developed for detecting these structures, mainly based on color and vascular information. The methodology herein described combines the entropy of the vessel directions with the image intensities for finding the OD center and uses a sliding band filter for segmenting the OD. The fovea center corresponds to the darkest point inside a region defined from the OD position and radius. Both the Messidor and the IDRiD datasets are used for evaluating the performance of the developed methods. On the first one, success rates of 99.56% and 100.00% are achieved for OD and fovea localization, respectively. Regarding the OD segmentation, the mean Jaccard index and Dice coefficient obtained are 0.87 and 0.94, respectively. The proposed methods are also amongst the top-3 performing solutions submitted to the IDRiD online challenge. © Springer Nature Switzerland AG 2020.
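The fovea-localization step ("the darkest point inside a region defined from the OD position and radius") lends itself to a minimal sketch. The ring of 2–3 OD radii used below is one plausible reading of that region, not the paper's exact geometry, and `intensity` is assumed to be a 2-D grayscale array.

```python
import numpy as np

def locate_fovea(intensity, od_center, od_radius):
    """Illustrative sketch: pick the darkest pixel inside an annular
    search region placed relative to the optic disc centre."""
    h, w = intensity.shape
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(yy - od_center[0], xx - od_center[1])
    # Assumed search region: a ring 2-3 OD radii from the OD centre
    region = (dist >= 2 * od_radius) & (dist <= 3 * od_radius)
    masked = np.where(region, intensity, np.inf)  # ignore pixels outside
    return np.unravel_index(np.argmin(masked), masked.shape)

img = np.full((50, 50), 1.0)
img[25, 40] = 0.0   # dark macula, ~30 px from the OD centre
img[25, 11] = -1.0  # even darker pixel, but too close to the OD to qualify
fovea = locate_fovea(img, od_center=(25, 10), od_radius=12)
```

Constraining the search to a region anchored on the OD is what keeps other dark structures (vessels, the OD rim, artifacts) from being mistaken for the fovea.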

2020

CLASSIFICATION OF LUNG NODULES IN CT VOLUMES USING THE LUNG-RADS™ GUIDELINES WITH UNCERTAINTY PARAMETERIZATION

Authors
Ferreira, CA; Aresta, G; Pedrosa, J; Rebelo, J; Negrao, E; Cunha, A; Ramos, I; Campilho, A;

Publication
2020 IEEE 17TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2020)

Abstract
Lung cancer is currently the most lethal cancer in the world. To make screening and follow-up more systematic, guidelines have been proposed. This study therefore aimed to create a diagnostic support approach that provides a patient label based on the LUNG-RADS™ guidelines. The only input required by the system is the nodule centroid, used to extract the region of interest fed to the classification system. With this in mind, two deep learning networks were evaluated: a Wide Residual Network and a DenseNet. To take the annotation uncertainty into account, we propose sample weights that are introduced in the loss function, allowing nodules with high agreement in the annotation process to have a greater impact on the training error than their counterparts. The best result was achieved with the Wide Residual Network with sample weights, reaching a nodule-wise LUNG-RADS™ labelling accuracy of 0.735 ± 0.003.
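The sample-weighting idea in the abstract — scaling each nodule's loss term so that high-agreement annotations dominate the training error — can be sketched as a weighted cross-entropy. The `agreement` score in [0, 1] is a hypothetical stand-in for whatever inter-annotator measure the paper actually derives weights from.

```python
import numpy as np

def weighted_cross_entropy(probs, targets, agreement):
    """Hedged sketch of agreement-based sample weighting: each sample's
    cross-entropy term is scaled by its (normalised) annotator-agreement
    weight, so well-agreed nodules contribute more to the loss."""
    eps = 1e-12
    # Per-sample negative log-likelihood of the true class
    per_sample = -np.log(probs[np.arange(len(targets)), targets] + eps)
    weights = agreement / agreement.sum()  # normalise weights to sum to 1
    return float((weights * per_sample).sum())

probs = np.array([[0.9, 0.1], [0.5, 0.5]])
targets = np.array([0, 1])
# Putting high agreement on the well-classified sample lowers the loss...
confident_first = weighted_cross_entropy(probs, targets, np.array([1.0, 0.2]))
# ...while weighting the uncertain sample heavily raises it.
confident_second = weighted_cross_entropy(probs, targets, np.array([0.2, 1.0]))
```

The effect is exactly the one the abstract describes: ambiguous annotations are down-weighted rather than discarded, so they still inform training without dominating it.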
