2023
Authors
Graziani, M; Dutkiewicz, L; Calvaresi, D; Amorim, JP; Yordanova, K; Vered, M; Nair, R; Abreu, PH; Blanke, T; Pulignano, V; Prior, JO; Lauwaert, L; Reijers, W; Depeursinge, A; Andrearczyk, V; Müller, H;
Publication
Artif. Intell. Rev.
Abstract
2023
Authors
Santos, MS; Abreu, PH; Japkowicz, N; Fernandez, A; Santos, J;
Publication
INFORMATION FUSION
Abstract
The combination of class imbalance and overlap is currently one of the most challenging issues in machine learning. While seminal work focused on establishing class overlap as a complicating factor for classification tasks in imbalanced domains, ongoing research mostly concerns the study of their synergy over real-world applications. However, given the lack of a well-formulated definition and measurement of class overlap in real-world domains, especially in the presence of class imbalance, the research community has not yet reached a consensus on the characterisation of both problems. This naturally complicates the evaluation of existing approaches to address these issues simultaneously and prevents future research from moving towards devising specialised solutions. In this work, we advocate for a unified view of the problem of class overlap in imbalanced domains. Acknowledging class overlap as the overarching problem - since it has proven to be more harmful to classification tasks than class imbalance - we start by discussing the key concepts associated with its definition, identification, and measurement in real-world domains, while advocating for a characterisation of the problem that accounts for multiple sources of complexity. We then provide an overview of existing data complexity measures and establish the link to the specific types of class overlap problems these measures cover, proposing a novel taxonomy of class overlap complexity measures. Additionally, we characterise the relationship between measures and the insights they provide, and discuss to what extent they account for class imbalance. Finally, we systematise the current body of knowledge on the topic across several branches of Machine Learning (Data Analysis, Data Preprocessing, Algorithm Design, and Meta-learning), identifying existing limitations and discussing possible lines for future research.
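One of the classical data complexity measures this line of work surveys is Fisher's maximum discriminant ratio (commonly called F1), which quantifies how well any single feature separates two classes. A minimal sketch for binary, numeric data follows; the function name and the zero-variance handling are our assumptions, not the paper's implementation:

```python
import numpy as np

def fisher_discriminant_ratio(X, y):
    """Maximum Fisher's discriminant ratio (F1) over all features,
    for a binary problem with labels 0 and 1. High values mean at
    least one feature separates the classes well; values near zero
    suggest heavy class overlap."""
    c0, c1 = X[y == 0], X[y == 1]
    num = (c0.mean(axis=0) - c1.mean(axis=0)) ** 2     # squared mean gap per feature
    den = c0.var(axis=0) + c1.var(axis=0)              # within-class spread per feature
    ratios = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return ratios.max()
```

Note that F1 is imbalance-agnostic: both class means contribute equally regardless of class sizes, which is one reason the abstract argues such measures must be examined for how well they account for class imbalance.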
2023
Authors
Amorim, JP; Abreu, PH; Santos, J; Cortes, M; Vila, V;
Publication
INFORMATION PROCESSING & MANAGEMENT
Abstract
Deep Learning has reached human-level performance in several medical tasks, including classification of histopathological images. Continuous effort has been made at finding effective strategies to interpret these types of models; among them, saliency maps, which depict the weights of the pixels on the classification as a heatmap of intensity values, have been by far the most used for image classification. However, there is a lack of tools for the systematic evaluation of saliency maps, and existing works introduce non-natural noise such as random or uniform values. To address this issue, we propose an approach to evaluate the faithfulness of saliency maps by introducing natural perturbations in the image, based on opposite-class substitution, and studying their impact on evaluation metrics adapted from saliency models. We validate the proposed approach on PatchCamelyon, a breast cancer metastases detection dataset with 327,680 patches of histopathological images of sentinel lymph node sections. Results show that GradCAM, Guided-GradCAM and gradient-based saliency map methods are sensitive to natural perturbations and correlate with the presence of tumor evidence in the image. Overall, this approach proves to be a solution for the validation of saliency map methods without introducing confounding variables and shows potential for application to other medical imaging tasks.
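The perturbation-based faithfulness test described above can be sketched roughly as follows. This is an illustrative assumption of the general idea, not the paper's exact protocol: `model_fn`, `patch`, and the top-k substitution rule are hypothetical names and choices. A faithful saliency map should produce a larger score drop than a random-pixel baseline when its most salient pixels are replaced with content from an opposite-class patch:

```python
import numpy as np

def faithfulness_drop(model_fn, image, saliency, patch, k=100):
    """Replace the k most salient pixels of `image` with the corresponding
    pixels of `patch` (a natural, opposite-class image region) and return
    the drop in the model's score. Larger drops suggest the saliency map
    points at pixels the model actually relies on."""
    base = model_fn(image)
    top = np.argsort(saliency.ravel())[-k:]   # indices of the k most salient pixels
    perturbed = image.copy().ravel()
    perturbed[top] = patch.ravel()[top]       # natural opposite-class substitution
    return base - model_fn(perturbed.reshape(image.shape))
```

Because the substituted pixels come from real tissue of the opposite class rather than random noise, the perturbed input stays on (or near) the natural image manifold, which is the confounder the abstract says uniform or random perturbations introduce.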
2023
Authors
Amorim, JP; Abreu, PH; Fernandez, A; Reyes, M; Santos, J; Abreu, MH;
Publication
IEEE REVIEWS IN BIOMEDICAL ENGINEERING
Abstract
Healthcare agents, in particular in the oncology field, are currently collecting vast amounts of diverse patient data. In this context, some decision-support systems, mostly based on deep learning techniques, have already been approved for clinical purposes. Despite all the efforts in introducing artificial intelligence methods into the workflow of clinicians, their lack of interpretability - understanding how the methods make decisions - still inhibits their dissemination in clinical practice. The aim of this article is to present an easy guide for oncologists explaining how these methods make decisions and illustrating the strategies to explain them. Theoretical concepts were illustrated based on oncological examples, and a literature review of research works was performed on PubMed from January 2014 to September 2020, using deep learning techniques, interpretability and oncology as keywords. Overall, more than 60% are related to breast, skin or brain cancers, and the majority focused on explaining the importance of tumor characteristics (e.g. dimension, shape) in the predictions. The most used computational methods are multilayer perceptrons and convolutional neural networks. Nevertheless, despite being successfully applied in different cancer scenarios, endowing deep learning techniques with interpretability, while maintaining their performance, continues to be one of the greatest challenges of artificial intelligence.
2023
Authors
Salazar, T; Fernandes, M; Araújo, H; Abreu, PH;
Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Abstract