Publications

Publications by Aurélio Campilho

2019

Wide Residual Network for Lung-RADS (TM) Screening Referral

Authors
Ferreira, CA; Aresta, G; Cunha, A; Mendonca, AM; Campilho, A;

Publication
2019 6TH IEEE PORTUGUESE MEETING IN BIOENGINEERING (ENBENG)

Abstract
Lung cancer accounts for an increasing share of worldwide mortality, demanding the development of efficient screening methods. With this in mind, a binary classification method using Lung-RADS (TM) guidelines to warn of changes in screening management is proposed. First, taking into account the lack of public datasets for this task, the lung nodules in the LIDC-IDRI dataset were re-annotated to include a Lung-RADS (TM)-based referral label. Then, a wide residual network is used for automatically assessing lung nodules in 3D chest computed tomography exams. Unlike standard malignancy prediction approaches, the proposed method avoids the need to segment and characterize lung nodules, and instead directly defines whether a patient should be referred for further lung cancer tests. The system achieves a nodule-wise accuracy of 0.87 +/- 0.02.

2019

Analysis of the performance of specialists and an automatic algorithm in retinal image quality assessment

Authors
Wanderley, DS; Araujo, T; Carvalho, CB; Maia, C; Penas, S; Carneiro, A; Mendonca, AM; Campilho, A;

Publication
2019 6TH IEEE PORTUGUESE MEETING IN BIOENGINEERING (ENBENG)

Abstract
This study describes a novel dataset with retinal image quality annotation, defined by three different retinal experts, and presents an inter-observer analysis for quality assessment that can be used as a gold standard for future studies. A state-of-the-art algorithm for retinal image quality assessment is also analysed and compared against the specialists' performance. Results show that, for 71% of the images present in the dataset, the three experts agree on the given image quality label. The results obtained for accuracy, specificity and sensitivity when comparing one expert against another were in the ranges [83.0 - 85.2]%, [72.7 - 92.9]% and [80.0 - 94.7]%, respectively. The evaluated automatic quality assessment method, despite not being trained on the novel dataset, presents a performance which is within inter-observer variability.
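The inter-observer comparison above reduces to standard binary-classification metrics computed between two raters. A minimal sketch (the function name and the label convention, 1 = adequate quality, are illustrative, not taken from the paper):

```python
import numpy as np

def pairwise_metrics(ref, other):
    """Accuracy, sensitivity and specificity of one rater's binary
    quality labels against another rater taken as the reference."""
    ref, other = np.asarray(ref), np.asarray(other)
    tp = np.sum((other == 1) & (ref == 1))  # both raters say "good"
    tn = np.sum((other == 0) & (ref == 0))  # both raters say "bad"
    fp = np.sum((other == 1) & (ref == 0))
    fn = np.sum((other == 0) & (ref == 1))
    accuracy = (tp + tn) / ref.size
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

Running this over each pair of experts yields the per-pair ranges reported in the abstract.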

2018

MedAL: Accurate and Robust Deep Active Learning for Medical Image Analysis

Authors
Smailagic, A; Costa, P; Noh, HY; Walawalkar, D; Khandelwal, K; Galdran, A; Mirshekari, M; Fagert, J; Xu, SS; Zhang, P; Campilho, A;

Publication
2018 17TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA)

Abstract
Deep learning models have been successfully used in medical image analysis problems but they require a large amount of labeled images to obtain good performance. However, such large labeled datasets are costly to acquire. Active learning techniques can be used to minimize the number of required training labels while maximizing the model's performance. In this work, we propose a novel sampling method that queries the unlabeled examples that maximize the average distance to all training set examples in a learned feature space. We then extend our sampling method to define a better initial training set, without the need for a trained model, by using Oriented FAST and Rotated BRIEF (ORB) feature descriptors. We validate MedAL on 3 medical image datasets and show that our method is robust to different dataset properties. MedAL is also efficient, achieving 80% accuracy on the task of Diabetic Retinopathy detection using only 425 labeled images, corresponding to a 32% reduction in the number of required labeled examples compared to the standard uncertainty sampling technique, and a 40% reduction compared to random sampling.
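The core sampling rule described above (query the unlabeled examples that are farthest, on average, from the training set in a learned feature space) can be sketched as follows. The function name and array layout are assumptions for illustration, not the authors' code:

```python
import numpy as np

def medal_query(unlabeled_feats, train_feats, n_queries=1):
    """Select the unlabeled examples whose mean Euclidean distance to
    all training-set examples in feature space is largest."""
    # pairwise distances, shape (n_unlabeled, n_train)
    diffs = unlabeled_feats[:, None, :] - train_feats[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    mean_dist = dists.mean(axis=1)
    # indices of the examples maximizing the average distance
    return np.argsort(mean_dist)[::-1][:n_queries]
```

In the full method the features come from the model being trained, so the ranking is recomputed after each labeling round; the ORB-based initialization mentioned in the abstract applies the same distance criterion before any model exists.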

2019

EyeWeS: Weakly Supervised Pre-Trained Convolutional Neural Networks for Diabetic Retinopathy Detection

Authors
Costa, P; Araujo, T; Aresta, G; Galdran, A; Mendonca, AM; Smailagic, A; Campilho, A;

Publication
PROCEEDINGS OF MVA 2019 16TH INTERNATIONAL CONFERENCE ON MACHINE VISION APPLICATIONS (MVA)

Abstract
Diabetic Retinopathy (DR) is one of the leading causes of preventable blindness in the developed world. With the increasing number of diabetic patients there is a growing need for an automated system for DR detection. We propose EyeWeS, a method that not only detects DR in eye fundus images but also pinpoints the regions of the image that contain lesions, while being trained with image labels only. We show that it is possible to convert any pre-trained convolutional neural network into a weakly supervised model while increasing its performance and efficiency. EyeWeS improved the results of Inception V3 from 94.9% Area Under the Receiver Operating Curve (AUC) to 95.8% AUC while maintaining only approximately 5% of Inception V3's number of parameters. The same model is able to achieve 97.1% AUC in a cross-dataset experiment.
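The general recipe for converting a pre-trained CNN into a weakly supervised detector, as described above, is to score each spatial location of the final feature map and pool the maximum into an image-level prediction; the heatmap then localizes lesions despite training on image labels only. The 1x1-convolution-plus-max-pooling head below is a common multiple-instance formulation of that idea; the exact EyeWeS architecture may differ, and all names here are illustrative:

```python
import numpy as np

def weak_supervision_head(feature_map, w, b):
    """Turn a pretrained CNN's spatial feature map (H, W, C) into an
    image-level DR score plus a lesion heatmap.

    A 1x1 convolution is just a per-location linear scoring over the
    C channels; spatial max-pooling picks the most suspicious location."""
    heatmap = feature_map @ w + b                  # (H, W) lesion evidence
    score = 1.0 / (1.0 + np.exp(-heatmap.max()))   # sigmoid of max activation
    return score, heatmap
```

Only `w` and `b` need training, which is consistent with the drastic parameter reduction reported in the abstract.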

2019

UNCERTAINTY-AWARE ARTERY/VEIN CLASSIFICATION ON RETINAL IMAGES

Authors
Galdran, A; Meyer, M; Costa, P; Mendonca, AM; Campilho, A;

Publication
2019 IEEE 16TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2019)

Abstract
The automatic differentiation of retinal vessels into arteries and veins (A/V) is a highly relevant task within the field of retinal image analysis. However, due to limitations of retinal image acquisition devices, specialists can find it impossible to label certain vessels in eye fundus images. In this paper, we introduce a method that takes such uncertainty into account by design. For this, we formulate the A/V classification task as a four-class segmentation problem, and a Convolutional Neural Network is trained to classify pixels into background, artery, vein, or uncertain classes. The resulting technique can directly provide pixelwise uncertainty estimates. In addition, instead of depending on a previously available vessel segmentation, the method automatically segments the vessel tree. Experimental results show a performance comparable or superior to several recent A/V classification approaches. In addition, the proposed technique also attains state-of-the-art performance when evaluated for the task of vessel segmentation, generalizing to data that was not used during training, even with considerable differences in terms of appearance and resolution.
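Deriving vessel, A/V and uncertainty maps from the four-class pixelwise output described above is a simple post-processing step. A sketch, assuming an illustrative class ordering (background, artery, vein, uncertain) that may differ from the paper's:

```python
import numpy as np

def av_maps(logits):
    """Split a 4-class pixelwise output of shape (H, W, 4) into
    vessel, artery, vein and uncertainty masks."""
    labels = logits.argmax(axis=-1)      # (H, W), values in {0, 1, 2, 3}
    artery = labels == 1
    vein = labels == 2
    uncertain = labels == 3
    # the vessel tree is everything classified as non-background,
    # so segmentation comes for free alongside A/V classification
    vessel = artery | vein | uncertain
    return vessel, artery, vein, uncertain
```

This is why the method needs no separate vessel segmentation stage: the background-vs-rest split of the same output already yields the vessel tree.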

2019

BACH: Grand challenge on breast cancer histology images

Authors
Aresta, G; Araujo, T; Kwok, S; Chennamsetty, SS; Safwan, M; Alex, V; Marami, B; Prastawa, M; Chan, M; Donovan, M; Fernandez, G; Zeineh, J; Kohl, M; Walz, C; Ludwig, F; Braunewell, S; Baust, M; Vu, QD; To, MNN; Kim, E; Kwak, JT; Galal, S; Sanchez Freire, V; Brancati, N; Frucci, M; Riccio, D; Wang, YQ; Sun, LL; Ma, KQ; Fang, JN; Kone, I; Boulmane, L; Campilho, A; Eloy, C; Polonia, A; Aguiar, P;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Breast cancer is the most common invasive cancer in women, affecting more than 10% of women worldwide. Microscopic analysis of a biopsy remains one of the most important methods to diagnose the type of breast cancer. This requires specialized analysis by pathologists, in a task that i) is highly time- and cost-consuming and ii) often leads to non-consensual results. The relevance and potential of automatic classification algorithms using hematoxylin-eosin stained histopathological images have already been demonstrated, but the reported results are still sub-optimal for clinical use. With the goal of advancing the state-of-the-art in automatic classification, the Grand Challenge on BreAst Cancer Histology images (BACH) was organized in conjunction with the 15th International Conference on Image Analysis and Recognition (ICIAR 2018). BACH aimed at the classification and localization of clinically relevant histopathological classes in microscopy and whole-slide images from a large annotated dataset, specifically compiled and made publicly available for the challenge. Following a positive response from the scientific community, a total of 64 submissions, out of 677 registrations, effectively entered the competition. The submitted algorithms improved the state-of-the-art in automatic classification of breast cancer with microscopy images to an accuracy of 87%. Convolutional neural networks were the most successful methodology in the BACH challenge. Detailed analysis of the collective results allowed the identification of remaining challenges in the field and recommendations for future developments. The BACH dataset remains publicly available so as to promote further improvements to the field of automatic classification in digital pathology.
