2024
Authors
Gonçalves, T; Arias, DP; Willett, J; Hoebel, KV; Cleveland, MC; Ahmed, SR; Gerstner, ER; Kalpathy-Cramer, J; Cardoso, JS; Bridge, CP; Kim, AE;
Publication
CoRR
Abstract
2024
Authors
Patrício, C; Neves, JC; Teixeira, LF;
Publication
ACM Computing Surveys
Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are rarely adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for strategies that explain the decision process of these models, giving rise to the field of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, covering visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the metrics for evaluating the quality of explanations. In addition, we include a performance comparison among a set of report-generation-based methods. Finally, we discuss the major challenges in applying XAI to medical imaging and future research directions on the topic.
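For illustration, the following is a minimal sketch of one of the visual explanation methods surveyed above: plain input-gradient saliency for an image classifier. The ResNet-18 backbone, the input size, and the random stand-in image are illustrative assumptions, not details taken from the survey.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in for a preprocessed medical image; a real pipeline would load and
# normalise an actual image here (illustrative assumption).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()

# Saliency map: max absolute gradient over colour channels -> 224x224 heatmap.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([224, 224])
```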
2024
Authors
Patrício, C; Teixeira, LF; Neves, JC;
Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024
Abstract
Concept-based models naturally lend themselves to inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, which are scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require fewer concept-annotated samples to attain performance comparable to approaches specifically devised for automatic concept generation. © 2024 IEEE.
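For illustration, the following is a minimal sketch of the idea the abstract describes: scoring a skin-lesion image against concept-based textual descriptions with CLIP. The checkpoint name, the two example concept descriptions, and the image path are illustrative assumptions; the paper's actual embedding-learning strategy and concept set are not reproduced here.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical concept-based descriptions of the two classes.
texts = [
    "a skin lesion with asymmetry, irregular borders and multiple colours",   # melanoma-like
    "a skin lesion that is symmetric with regular borders and uniform colour",  # benign-like
]
image = Image.open("lesion.jpg")  # placeholder path

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(probs)  # similarity of the image to each concept description
```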
2024
Authors
Oliveira, M; Cerqueira, R; Pinto, JR; Fonseca, J; Teixeira, LF;
Publication
IEEE Transactions on Intelligent Vehicles
Abstract
2024
Authors
Gomes, I; Teixeira, LF; van Rijn, JN; Soares, C; Restivo, A; Cunha, L; Santos, M;
Publication
CoRR
Abstract
2024
Authors
Rodrigues Nogueira, AF; Oliveira, HP; Teixeira, LF;
Publication
CoRR
Abstract