2023
Authors
Amorim, JP; Abreu, PH; Santos, J; Cortes, M; Vila, V;
Publication
INFORMATION PROCESSING & MANAGEMENT
Abstract
Deep Learning has reached human-level performance in several medical tasks, including the classification of histopathological images. Continuous effort has been made to find effective strategies to interpret these types of models; among them, saliency maps, which depict the weight of each pixel in the classification as a heatmap of intensity values, have been by far the most used for image classification. However, there is a lack of tools for the systematic evaluation of saliency maps, and existing works introduce non-natural noise such as random or uniform values. To address this issue, we propose an approach to evaluate the faithfulness of saliency maps by introducing natural perturbations in the image, based on opposite-class substitution, and studying their impact on evaluation metrics adapted from saliency models. We validate the proposed approach on PatchCamelyon, a breast cancer metastasis detection dataset with 327,680 patches of histopathological images of sentinel lymph node sections. Results show that GradCAM, Guided-GradCAM and gradient-based saliency map methods are sensitive to natural perturbations and correlate with the presence of tumor evidence in the image. Overall, this approach proves to be a solution for validating saliency map methods without introducing confounding variables and shows potential for application to other medical imaging tasks.
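As an illustration of the kind of natural-perturbation test described in this abstract, the sketch below substitutes a region of an image with the corresponding region from an opposite-class image and measures how much saliency mass falls inside that region. This is a minimal, hypothetical sketch, not the authors' implementation; the dummy arrays stand in for real histopathology patches and for a saliency map produced by a trained model (e.g. via GradCAM).

```python
# Illustrative sketch (not the paper's code): opposite-class patch substitution
# plus a simple "saliency mass in region" measure. All function names are assumptions.
import numpy as np

def substitute_patch(image: np.ndarray, donor: np.ndarray, top: int, left: int, size: int) -> np.ndarray:
    """Replace a size x size region of `image` with the same region taken from `donor`."""
    perturbed = image.copy()
    perturbed[top:top + size, left:left + size] = donor[top:top + size, left:left + size]
    return perturbed

def saliency_mass_in_region(saliency: np.ndarray, top: int, left: int, size: int) -> float:
    """Fraction of the total saliency that falls inside the (perturbed) region."""
    region = saliency[top:top + size, left:left + size]
    return float(region.sum() / (saliency.sum() + 1e-12))

# Dummy data standing in for a tumor patch, an opposite-class (normal) patch,
# and a saliency map computed on the original image by some attribution method.
rng = np.random.default_rng(0)
tumor_patch = rng.random((96, 96))
normal_patch = rng.random((96, 96))
saliency_before = rng.random((96, 96))

perturbed = substitute_patch(tumor_patch, normal_patch, top=32, left=32, size=32)
# In the real setting one would recompute the saliency map on `perturbed` and
# compare, e.g., the drop in saliency mass in the region or in the class score.
print(saliency_mass_in_region(saliency_before, top=32, left=32, size=32))
```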
2023
Authors
Amorim, JP; Abreu, PH; Fernandez, A; Reyes, M; Santos, J; Abreu, MH;
Publication
IEEE REVIEWS IN BIOMEDICAL ENGINEERING
Abstract
Healthcare agents, in particular in the oncology field, are currently collecting vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts to introduce artificial intelligence methods into the workflow of clinicians, their lack of interpretability (understanding how the methods make decisions) still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies to explain them. Theoretical concepts were illustrated with oncological examples, and a literature review of research works was performed on PubMed, covering January 2014 to September 2020 and using deep learning techniques, interpretability and oncology as keywords. Overall, more than 60% of the works are related to breast, skin or brain cancers, and the majority focus on explaining the importance of tumor characteristics (e.g. dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability, while maintaining their performance, continues to be one of the greatest challenges of artificial intelligence.
2023
Authors
Salazar, T; Fernandes, M; Araújo, H; Abreu, PH;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract
2022
Authors
Frias, E; Pinto, J; Sousa, R; Lorenzo, H; Diaz Vilarino, L;
Publication
JOURNAL OF COMPUTING IN CIVIL ENGINEERING
Abstract
Advances in technology are leading to more and more devices integrating sensors capable of acquiring data quickly and with high accuracy. Point clouds are no exception. Therefore, there is increased research interest in classifying the large amount of available light detection and ranging (LiDAR) point cloud data using artificial intelligence. Nevertheless, point cloud labeling is a time-consuming task, so the amount of labeled data is still scarce. Data synthesis is gaining attention as an alternative to increase the volume of classified data. At the same time, the number of Building Information Models (BIMs) provided by manufacturers in website databases is increasing. In line with these recent trends, this paper presents a deep-learning framework for classifying point cloud objects based on synthetic data sets created from BIM objects. The method starts by transforming BIM objects into point clouds, deriving a data set consisting of 21 object classes characterized by various perturbation patterns. Then, the data set is split into four subsets to evaluate the synthetic data on the implemented flexible two-dimensional (2D) deep neural framework, in which binary or greyscale images can be generated from point clouds by either orthographic or perspective projection to feed the network. Moreover, the surface variation feature was computed in order to aggregate more geometric information into the images and to evaluate how it influences object classification. The overall accuracy is over 85% in all tests when orthographic images are used. Also, the use of greyscale images representing surface variation improves performance in almost all tests, although the computation of this feature may not be robust for point clouds with complex geometry or perturbations. (C) 2022 American Society of Civil Engineers.
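To make the projection step more concrete, the following sketch (a rough, assumed illustration, not the paper's implementation) rasterizes a toy point cloud into a binary or greyscale orthographic image, with greyscale values taken from a simple surface-variation feature computed via local PCA. The function names, neighborhood size, and image resolution are all assumptions.

```python
# Illustrative sketch: orthographic projection of a point cloud onto a 2D grid,
# optionally shaded by a per-point "surface variation" feature from local PCA.
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points: np.ndarray, k: int = 16) -> np.ndarray:
    """lambda_min / (lambda_1 + lambda_2 + lambda_3) of each point's k-neighborhood covariance."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    variation = np.empty(len(points))
    for i, neighbors in enumerate(idx):
        cov = np.cov(points[neighbors].T)          # 3x3 covariance of the neighborhood
        eig = np.linalg.eigvalsh(cov)              # eigenvalues in ascending order
        variation[i] = eig[0] / (eig.sum() + 1e-12)
    return variation

def orthographic_image(points: np.ndarray, values: np.ndarray, resolution: int = 64) -> np.ndarray:
    """Project points onto the XY plane and rasterize `values` into a resolution x resolution image."""
    xy = points[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    cols = ((xy - mins) / (maxs - mins + 1e-12) * (resolution - 1)).astype(int)
    image = np.zeros((resolution, resolution))
    image[cols[:, 1], cols[:, 0]] = values         # last point per pixel wins in this simple sketch
    return image

# Toy example: a noisy, roughly planar object. A binary image uses constant values,
# a greyscale image uses the surface-variation feature instead.
rng = np.random.default_rng(0)
cloud = rng.random((2000, 3)) * np.array([1.0, 1.0, 0.05])
grey = orthographic_image(cloud, surface_variation(cloud), resolution=64)
binary = orthographic_image(cloud, np.ones(len(cloud)), resolution=64)
print(grey.shape, binary.shape)
```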
2022
Authors
Oliveira, J; Renna, F; Costa, PD; Nogueira, M; Oliveira, C; Ferreira, C; Jorge, A; Mattos, S; Hatem, T; Tavares, T; Elola, A; Rad, AB; Sameni, R; Clifford, GD; Coimbra, MT;
Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
Abstract
Cardiac auscultation is one of the most cost-effective techniques used to detect and identify many heart conditions. Computer-assisted decision systems based on auscultation can support physicians in their decisions. Unfortunately, the application of such systems in clinical trials is still minimal, since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., only a binary ground-truth variable (normal vs abnormal) is provided. This is mainly due to the lack of large publicly available datasets with a more detailed description of such abnormal waves (e.g., cardiac murmurs). To pave the way to more effective research on healthcare recommendation systems based on auscultation, our team has prepared the currently largest pediatric heart sound dataset. A total of 5282 recordings have been collected from the four main auscultation locations of 1568 patients; in the process, 215780 heart sounds have been manually annotated. Furthermore, and for the first time, each cardiac murmur has been manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the auscultation location where the murmur is detected most intensely. Such a detailed description of a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world application to the detection and analysis of murmur waves for diagnostic purposes.
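As a purely hypothetical illustration of the annotation fields listed in this abstract, the sketch below defines a small data structure with the per-murmur attributes (timing, shape, pitch, grading, quality, and auscultation locations). The class and field names, and the example values, are assumptions and do not reflect the dataset's actual file format or schema.

```python
# Hypothetical sketch of a per-murmur annotation record; not the dataset's real schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MurmurAnnotation:
    timing: str                                                  # e.g. "holosystolic"
    shape: str                                                   # e.g. "plateau"
    pitch: str                                                   # e.g. "medium"
    grading: str                                                 # e.g. "II/VI"
    quality: str                                                 # e.g. "harsh"
    locations_present: List[str] = field(default_factory=list)   # auscultation spots where the murmur is heard
    most_intense_location: str = ""                              # spot where it is detected most intensely

# Example record with made-up values.
example = MurmurAnnotation(
    timing="holosystolic", shape="plateau", pitch="medium",
    grading="II/VI", quality="harsh",
    locations_present=["MV", "TV"], most_intense_location="MV",
)
print(example)
```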