About

Luis F. Teixeira holds a PhD in Electrical and Computer Engineering from the University of Porto, in the area of computer vision (2009). He is currently an Assistant Professor at the Department of Informatics Engineering of the Faculty of Engineering of the University of Porto (FEUP) and a researcher at INESC TEC. He was previously a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning, and interactive systems.

Topics of interest
Details

  • Name

    Luís Filipe Teixeira
  • Position

    Senior Researcher
  • Since

    17 September 2001
Publications

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, C; Teixeira, F;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.

2024

Towards Concept-Based Interpretability of Skin Lesion Diagnosis Using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation. © 2024 IEEE.
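To make the idea described in this abstract more concrete — scoring a lesion image against concept-based textual descriptions through a vision-language model — the following minimal sketch uses the off-the-shelf CLIP model from the Hugging Face transformers library. The concept descriptions, class split, and image path are hypothetical examples for illustration; the paper's actual embedding learning strategy adapts CLIP beyond this zero-shot baseline.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Hypothetical dermoscopic concept descriptions (not the paper's vocabulary)
concepts = [
    "a skin lesion with asymmetric shape and irregular borders",
    "a skin lesion with multiple colours and an atypical pigment network",
    "a symmetric skin lesion with uniform colour and regular borders",
]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("lesion.jpg")  # placeholder path to a dermoscopic image
inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity logits; higher values mean CLIP finds the concept
# description a better match for the lesion image.
concept_scores = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(concepts, concept_scores[0].tolist())))

In this zero-shot setup the concept scores themselves act as the interpretable intermediate representation; a downstream classifier (or simple rule) over these scores would then yield the final diagnosis.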

2024

Multimodal PointPillars for Efficient Object Detection in Autonomous Vehicles

Authors
Oliveira, M; Cerqueira, R; Pinto, JR; Fonseca, J; Teixeira, LF;

Publication
IEEE Transactions on Intelligent Vehicles

Abstract

2024

On the Suitability of B-cos Networks for the Medical Domain

Authors
Torto, IR; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical communities. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods and the field lacks a more in-depth investigation into inherently interpretable models that focus on integrating knowledge that ensures the model is learning the correct rules. A new approach, B-cos networks, for increasing the interpretability of deep neural networks by inducing weight-input alignment during training showed promising results on natural image classification. In this work, we study the suitability of these B-cos networks to the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation of several explanation quality assessment metrics. We find that, just like in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when considering more explanation properties. © 2024 IEEE.
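For readers unfamiliar with the weight-input alignment mechanism mentioned above, the sketch below shows the core B-cos transform in PyTorch, following the formulation from the original B-cos work (Böhle et al., CVPR 2022): a linear unit whose output is scaled by |cos(x, w)|^(B-1), so only inputs well aligned with the (unit-norm) weights contribute strongly. The class name and the simplifications (dense rather than convolutional, no MaxOut) are assumptions for illustration only, not the configuration evaluated in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    """Illustrative B-cos unit: out = |cos(x, w_hat)|^(B-1) * (w_hat^T x)."""

    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-norm weight rows, so w_hat^T x = ||x|| * cos(theta)
        w_hat = F.normalize(self.weight, dim=1)
        lin = F.linear(x, w_hat)
        cos = lin / (x.norm(dim=-1, keepdim=True) + 1e-12)
        # Scaling by |cos|^(B-1) suppresses poorly aligned inputs, which is what
        # makes the input-dependent effective weights usable as explanations.
        return lin * cos.abs().pow(self.b - 1.0)

# Example: map 10-dimensional inputs to 3 outputs
layer = BcosLinear(10, 3)
y = layer(torch.randn(4, 10))
print(y.shape)  # torch.Size([4, 3])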

2024

Finding Patterns in Ambiguity: Interpretable Stress Testing in the Decision Boundary

Authors
Gomes, I; Teixeira, LF; van Rijn, JN; Soares, C; Restivo, A; Cunha, L; Santos, M;

Publication
CoRR

Abstract

Supervised theses

2023

Unconstrained Human Pose Estimation to Support Breast Cancer Survivor's Prospective Surveillance

Author
João Pedro da Silva Monteiro

Institution
UP-FEUP

2023

Self-Supervised Learning for Medical Image Classification: A Study on MoCo-CXR

Author
Hugo Miguel Monteiro Guimarães

Institution
UP-FEUP

2023

Learning to detect defects in industrial production lines from a few examples

Author
André Filipe Vila Chã Afonso

Institution
UP-FEUP

2023

Self-explanatory computer-aided diagnosis with limited supervision

Author
Isabel Cristina Rio-Torto de Oliveira

Institution
UP-FEUP

2023

Integrating Anatomical Prior Knowledge for Increased Generalisability in Breast Cancer Multi-center Data

Author
Isabela Marques de Miranda

Institution
UP-FEUP