
About

Cristiano Patrício received a B.Sc. in Computer Science and Engineering (17/20) from the Polytechnic of Guarda in 2019 and an M.Sc. in Computer Science and Engineering (18/20) from the University of Beira Interior in 2021. He was awarded a Merit Scholarship in the 2018/2019 academic year. Cristiano is currently pursuing a Ph.D. in Computer Science and Engineering at the University of Beira Interior, funded by a Ph.D. research grant from the Portuguese national funding agency for science (FCT). He is a Research Assistant at INESC TEC and was a visiting assistant at the Polytechnic of Guarda in the 2021/2022 academic year. Previously, Cristiano helped develop solutions for the Altice Portugal Foundation projects (MagicContact Web) and for the “Perception for a Service Robot” project of NOVA-LINCS. His work focuses on developing inherently interpretable deep learning models for pathology diagnosis in medical imaging. His research interests include Explainable AI, Deep Learning, and Medical Image Analysis. He has authored six research papers in international conferences and journals.

Details

  • Name: Cristiano Pires Patrício
  • Role: Research Assistant
  • Since: 7th February 2022
Publications

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.

2024

Towards Concept-Based Interpretability of Skin Lesion Diagnosis Using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation. © 2024 IEEE.
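The core idea of matching a lesion image against concept descriptions in CLIP's shared embedding space can be illustrated with a short sketch. The snippet below uses OpenAI's `clip` package with a few hypothetical dermoscopic concept phrases; it shows plain zero-shot concept scoring only, not the embedding learning strategy, prompts, or annotations used in the paper.

```python
# Minimal sketch: scoring dermoscopic concepts with CLIP text embeddings.
# The concept phrases below are illustrative placeholders, not the
# concept annotations used in the paper.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical concept descriptions for a skin lesion.
concepts = [
    "a photo of a skin lesion with atypical pigment network",
    "a photo of a skin lesion with blue-whitish veil",
    "a photo of a skin lesion with irregular streaks",
    "a photo of a skin lesion with regular pigment network",
]

image = preprocess(Image.open("lesion.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(concepts).to(device)

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # Cosine similarity between the image and each concept description.
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    concept_scores = (image_feat @ text_feat.T).squeeze(0)  # (n_concepts,)

# A linear head over these concept scores could then predict the diagnosis.
for name, score in zip(concepts, concept_scores.tolist()):
    print(f"{score:.3f}  {name}")
```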

2023

Zero-shot face recognition: Improving the discriminability of visual face features using a Semantic-Guided Attention Model

Authors
Patrício, C; Neves, JC;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
Zero-shot learning enables the recognition of classes not seen during training through the use of semantic information comprising a visual description of the class either in textual or attribute form. Despite the advances in the performance of zero-shot learning methods, most of the works do not explicitly exploit the correlation between the visual attributes of the image and their corresponding semantic attributes for learning discriminative visual features. In this paper, we introduce an attention-based strategy for deriving features from the image regions regarding the most prominent attributes of the image class. In particular, we train a Convolutional Neural Network (CNN) for image attribute prediction and use a gradient-weighted method for deriving the attention activation maps of the most salient image attributes. These maps are then incorporated into the feature extraction process of Zero-Shot Learning (ZSL) approaches for improving the discriminability of the features produced through the implicit inclusion of semantic information. For experimental validation, the performance of state-of-the-art ZSL methods was determined using features with and without the proposed attention model. Surprisingly, we discover that the proposed strategy degrades the performance of ZSL methods on classical ZSL datasets (AWA2), but it can significantly improve performance when using face datasets. Our experiments show that these results are a consequence of the interpretability of the dataset attributes, suggesting that the attributes of existing ZSL datasets are, in most cases, difficult to identify in the image. Source code is available at https://github.com/CristianoPatricio/SGAM.
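The gradient-weighted attention maps can be sketched as a Grad-CAM-style computation on an attribute classifier. The backbone, number of attributes, and input below are illustrative placeholders; the paper's full pipeline is in the SGAM repository linked above.

```python
# Sketch of a gradient-weighted (Grad-CAM-style) attention map for the
# most salient predicted attribute. Backbone and attribute count are
# assumptions for illustration only.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

NUM_ATTRIBUTES = 40  # hypothetical number of semantic attributes

backbone = resnet18(weights=None)
backbone.fc = torch.nn.Linear(backbone.fc.in_features, NUM_ATTRIBUTES)
backbone.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output

def save_gradient(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0]

# Hook the last convolutional block to capture feature maps and gradients.
backbone.layer4.register_forward_hook(save_activation)
backbone.layer4.register_full_backward_hook(save_gradient)

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed face image
logits = backbone(image)
top_attr = logits.squeeze(0).argmax()

backbone.zero_grad()
logits[0, top_attr].backward()

# Gradient-weighted combination of feature maps, followed by ReLU.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # (1, H, W)
cam = cam / (cam.max() + 1e-8)

# The resulting map can then reweight the visual features fed to a ZSL method.
print(cam.shape)
```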

2023

Coherent Concept-based Explanations in Medical Image and Its Application to Skin Lesion Diagnosis

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2023 - Workshops, Vancouver, BC, Canada, June 17-24, 2023

Abstract
Early detection of melanoma is crucial for preventing severe complications and increasing the chances of successful treatment. Existing deep learning approaches for melanoma skin lesion diagnosis are deemed black-box models, as they omit the rationale behind the model prediction, compromising the trustworthiness and acceptability of these diagnostic methods. Attempts to provide concept-based explanations are based on post-hoc approaches, which depend on an additional model to derive interpretations. In this paper, we propose an inherently interpretable framework to improve the interpretability of concept-based models by incorporating a hard attention mechanism and a coherence loss term to assure the visual coherence of concept activations by the concept encoder, without requiring the supervision of additional annotations. The proposed framework explains its decision in terms of human-interpretable concepts and their respective contribution to the final prediction, as well as a visual interpretation of the locations where the concept is present in the image. Experiments on skin image datasets demonstrate that our method outperforms existing black-box and concept-based models for skin lesion classification. © 2023 IEEE.
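A coherence term of this kind can be sketched as a regularizer that asks the concept encoder to produce similar concept activations for two views of the same image. The encoder architecture and the exact form of the loss below are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a coherence-style regularizer for a concept encoder:
# concept activations for an image and an augmented view are pushed to
# agree. One plausible formulation, not necessarily the paper's loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptEncoder(nn.Module):
    """Maps an image to scores over a set of human-interpretable concepts."""
    def __init__(self, num_concepts: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_concepts)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x)))

def coherence_loss(encoder, image, augmented):
    """Penalize disagreement between concept activations of two views."""
    return F.mse_loss(encoder(image), encoder(augmented))

encoder = ConceptEncoder()
image = torch.randn(4, 3, 224, 224)
augmented = image + 0.05 * torch.randn_like(image)  # stand-in for real augmentations
loss = coherence_loss(encoder, image, augmented)
# The full objective would combine this term with the diagnosis and concept losses.
print(loss.item())
```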