Publications

Publications by CTM

2020

Interpretable and Annotation-Efficient Learning for Medical Image Computing - Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3ID 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4-8, 2020, Proceedings

Authors
Cardoso, JS; Nguyen, HV; Heller, N; Abreu, PH; Isgum, I; Silva, W; Cruz, R; Amorim, JP; Patel, V; Roysam, B; Zhou, SK; Jiang, SB; Le, N; Luu, K; Sznitman, R; Cheplygina, V; Mateus, D; Trucco, E; Sureshjani, SA;

Publication
iMIMIC/MIL3ID/LABELS@MICCAI

Abstract

2020

Tackling unsupervised multi-source domain adaptation with optimism and consistency

Authors
Pernes, D; Cardoso, JS;

Publication
CoRR

Abstract

2020

Understanding the Impact of Artificial Intelligence on Services

Authors
Ferreira, P; Teixeira, JG; Teixeira, LF;

Publication
EXPLORING SERVICE SCIENCE (IESS 2020)

Abstract
Services are the backbone of modern economies and are increasingly supported by technology. Meanwhile, there is accelerated growth of new technologies that can learn on their own and provide increasingly relevant results, namely Artificial Intelligence (AI). While there have been significant advances in the capabilities of AI, the impacts of this technology on service provision are still unknown. Conceptual research either presents AI as a way to augment human capabilities or positions it as a threat to human jobs. The objective of this study is to better understand the impact of AI on service, namely by understanding current trends in AI and how they impact, and will impact, service provision. To achieve this, a qualitative study following the Grounded Theory methodology was performed with ten Artificial Intelligence experts selected from industry and academia.

2020

Deep Learning for Interictal Epileptiform Discharge Detection from Scalp EEG Recordings

Authors
Lourenco, C; Tjepkema-Cloostermans, MC; Teixeira, LF; van Putten, MJAM;

Publication
XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019

Abstract
Interictal Epileptiform Discharge (IED) detection in EEG signals is widely used in the diagnosis of epilepsy. Visual analysis of EEGs by experts remains the gold standard, outperforming current computer algorithms. Deep learning methods can be an automated way to perform this task. We trained a VGG network using 2-s EEG epochs from patients with focal and generalized epilepsy (39 and 40 patients, respectively, 1977 epochs total) and 53 normal controls (110770 epochs). Five-fold cross-validation was performed on the training set. Model performance was assessed on an independent set (734 IEDs from 20 patients with focal and generalized epilepsy and 23040 normal epochs from 14 controls). Network visualization techniques (filter visualization and occlusion) were applied. The VGG yielded an Area Under the ROC Curve (AUC) of 0.96 (95% Confidence Interval (CI) = 0.95 - 0.97). At 99% specificity, the sensitivity was 79% and only one sample was misclassified per two minutes of analyzed EEG. Filter visualization showed that filters from higher level layers display patches of activity indicative of IED detection. Occlusion showed that the model correctly identified IED shapes. We show that deep neural networks can reliably identify IEDs, which may lead to a fundamental shift in clinical EEG analysis.
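For illustration only, the sketch below shows the kind of VGG-style binary classifier described in this abstract, assuming each 2-s epoch is represented as a 2-D (EEG channels x time samples) array; the layer sizes, input dimensions, and class names are hypothetical assumptions, not the authors' configuration.

# Minimal sketch (not the authors' code): a VGG-like binary IED classifier
# for 2-s EEG epochs, assuming each epoch is a (channels x samples) array,
# e.g. 19 channels x 256 samples. All sizes are illustrative only.
import torch
import torch.nn as nn

class VGGLikeIEDNet(nn.Module):
    def __init__(self, in_channels: int = 1, n_classes: int = 2):
        super().__init__()
        def block(cin, cout):
            # two 3x3 convolutions followed by 2x2 max-pooling, VGG-style
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(cout, cout, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(in_channels, 16),
                                      block(16, 32),
                                      block(32, 64))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # collapse remaining channel/time dims
            nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        # x: (batch, 1, n_eeg_channels, n_samples), e.g. (B, 1, 19, 256)
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = VGGLikeIEDNet()
    dummy = torch.randn(4, 1, 19, 256)   # four hypothetical 2-s epochs
    print(model(dummy).shape)            # torch.Size([4, 2])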

2020

Understanding the decisions of CNNs: An in-model approach

Authors
Rio-Torto, I; Fernandes, K; Teixeira, LF;

Publication
PATTERN RECOGNITION LETTERS

Abstract
With the outstanding predictive performance of Convolutional Neural Networks on different tasks and their widespread use in real-world scenarios, it is essential to understand and trust these black-box models. While most of the literature focuses on post-model methods, we propose a novel in-model joint architecture, composed of an explainer and a classifier. This architecture outputs not only a class label, but also a visual explanation of that decision, without the need for additional labelled data to train the explainer beyond the image class. The model is trained end-to-end, with the classifier taking as input an image and the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of that explanation. Moreover, this approach can be employed with any classifier, provided that the necessary connections to the explainer are made. We also propose a three-phase training process and two alternative custom loss functions that regularise the produced explanations and encourage desired properties, such as sparsity and spatial contiguity. The architecture was validated on two datasets (a subset of ImageNet and a cervical cancer dataset) and the obtained results show that it is able to produce meaningful image- and class-dependent visual explanations, without direct supervision, aligned with intuitive visual features associated with the data. Quantitative assessment of explanation quality was conducted through iterative perturbation of the input image according to the explanation heatmaps. The impact on classification performance is studied in terms of average function value and AOPC (Area Over the MoRF (Most Relevant First) Curve). For further evaluation, we propose POMPOM (Percentage of Meaningful Pixels Outside the Mask) as another measurable criterion of explanation goodness. These analyses showed that the proposed method outperformed state-of-the-art post-model methods, such as LRP (Layer-wise Relevance Propagation).
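The sketch below illustrates the general explainer-plus-classifier idea described in this abstract; the module sizes, the way the explanation modulates the classifier input, and the sparsity/contiguity regulariser are assumptions for illustration, not the published architecture or loss functions.

# Minimal sketch (assumptions, not the published model): an in-model explainer
# that produces a heatmap, and a classifier that consumes the image modulated
# by that heatmap, so both are trained end-to-end together.
import torch
import torch.nn as nn

class Explainer(nn.Module):
    """Produces a per-pixel explanation map in [0, 1]."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class JointModel(nn.Module):
    def __init__(self, classifier: nn.Module, in_ch: int = 3):
        super().__init__()
        self.explainer = Explainer(in_ch)
        self.classifier = classifier   # any classifier can be plugged in here

    def forward(self, x):
        expl = self.explainer(x)            # (B, 1, H, W) explanation heatmap
        logits = self.classifier(x * expl)  # classifier sees the explained image
        return logits, expl

def explanation_loss(expl, l1_weight=1e-3, tv_weight=1e-3):
    """Hypothetical regulariser encouraging sparsity and spatial contiguity."""
    sparsity = expl.abs().mean()
    tv = (expl[..., :, 1:] - expl[..., :, :-1]).abs().mean() + \
         (expl[..., 1:, :] - expl[..., :-1, :]).abs().mean()
    return l1_weight * sparsity + tv_weight * tv

In this sketch the explanation gates the classifier's input directly, which is one simple way to make the class prediction depend on the produced heatmap; the paper's actual connection scheme and three-phase training are described in the publication itself.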

2020

Using autoencoders as a weight initialization method on deep neural networks for disease detection

Authors
Ferreira, MF; Camacho, R; Teixeira, LF;

Publication
BMC MEDICAL INFORMATICS AND DECISION MAKING

Abstract
Background: As of today, cancer is still one of the most prevalent and highest-mortality diseases, accounting for more than 9 million deaths in 2018. This has motivated researchers to study the application of machine learning-based solutions for cancer detection, to accelerate its diagnosis and help its prevention.

Methods: In this work, we aim to distinguish five different types of cancer through RNA-Seq datasets: thyroid, skin, stomach, breast, and lung. To do so, we adopted a previously described methodology, with which we compare the performance of three different autoencoders (AEs) used as a deep neural network weight initialization technique. Our experiments consist of assessing two different approaches when training the classification model - fixing the weights after pre-training the AEs, or allowing fine-tuning of the entire network - and two different strategies for embedding the AEs into the classification network, namely by importing only the encoding layers or by inserting the complete AE. We then study how varying the number of layers in the first strategy, the AEs' latent vector dimension, and the imputation technique in the data preprocessing step impacts the network's overall classification performance. Finally, with the goal of assessing how well this pipeline generalizes, we apply the same methodology to two additional datasets that include features extracted from images of malaria thin blood smears and breast mass cell nuclei. We also rule out overfitting by using held-out test sets for the image datasets.

Results: The methodology attained good overall results for both the RNA-Seq and the image-extracted data. We outperformed the established baseline for all the considered datasets, achieving an average F1-score of 99.03, 89.95, and 98.84 and an MCC of 0.99, 0.84, and 0.98 for the RNA-Seq (when detecting thyroid cancer), the malaria, and the Wisconsin Breast Cancer data, respectively.

Conclusions: We observed that fine-tuning the weights of the top layers imported from the AE reached higher results for all the presented experiments and all the considered datasets. We outperformed all the previously reported results when compared to the established baselines.
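As a rough illustration of the weight-initialization strategy described above, the sketch below pre-trains a dense autoencoder and then reuses its encoder as the first layers of a classifier, either frozen or fine-tuned; the feature count, layer widths, and latent dimension are hypothetical and not taken from the paper.

# Minimal sketch (assumptions, not the paper's exact pipeline): pre-train a
# dense autoencoder on expression vectors, then import only its encoding
# layers as the initialization of a classifier.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_features: int = 2000, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(inplace=True),
            nn.Linear(512, latent_dim), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(inplace=True),
            nn.Linear(512, n_features),
        )

    def forward(self, x):
        # reconstruction objective used during unsupervised pre-training
        return self.decoder(self.encoder(x))

def build_classifier(pretrained_ae: Autoencoder, n_classes: int = 5,
                     freeze: bool = False) -> nn.Module:
    """Reuse the pre-trained encoding layers as weight initialization."""
    encoder = pretrained_ae.encoder
    if freeze:                           # "fixed weights" strategy
        for p in encoder.parameters():
            p.requires_grad = False
    # otherwise the whole network is fine-tuned during classification training
    latent_dim = encoder[-2].out_features
    return nn.Sequential(encoder, nn.Linear(latent_dim, n_classes))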
