About

Luís F. Teixeira holds a Ph.D. in Electrical and Computer Engineering from Universidade do Porto, in the area of computer vision (2009). He is currently an Assistant Professor at the Department of Informatics Engineering, Faculdade de Engenharia da Universidade do Porto, and a researcher at INESC TEC. Previously, he was a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning, and interactive systems.


Details

  • Name

    Luís Filipe Teixeira
  • Role

    Senior Researcher
  • Since

    17th September 2001
Publications

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
ACM Computing Surveys

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.
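As a rough illustration of the visual explanation methods this survey covers, the sketch below computes a Grad-CAM-style saliency map in PyTorch. It is not code from the survey; the torchvision ResNet-18 and the random input tensor are stand-ins for a diagnosis model and a medical image.

```python
# Minimal Grad-CAM-style visual explanation (illustrative sketch only).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
acts, grads = {}, {}

# Hook the last convolutional block to capture activations and gradients.
def fwd_hook(_, __, output):
    acts["a"] = output
def bwd_hook(_, __, grad_output):
    grads["g"] = grad_output[0]

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)        # stand-in for a medical image
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the predicted class

# Weight each activation map by its average gradient, then ReLU and rescale.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) saliency map over the input
```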

2024

Towards Concept-Based Interpretability of Skin Lesion Diagnosis Using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation. © 2024 IEEE.
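As a rough sketch of the general idea, the snippet below scores an image against concept descriptions using off-the-shelf CLIP text embeddings. It is not the paper's actual embedding learning strategy (which adapts CLIP to the downstream task); the concept list and the file name lesion.jpg are hypothetical examples.

```python
# Scoring an image against concept descriptions with CLIP text embeddings.
import torch
import clip  # https://github.com/openai/CLIP
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Hypothetical dermoscopic concept descriptions.
concepts = [
    "a skin lesion with an atypical pigment network",
    "a skin lesion with a blue-whitish veil",
    "a skin lesion with regular streaks",
]

image = preprocess(Image.open("lesion.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(concepts).to(device)

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(0)  # cosine similarity per concept

for concept, score in zip(concepts, scores.tolist()):
    print(f"{score:.3f}  {concept}")
```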

2024

Multimodal PointPillars for Efficient Object Detection in Autonomous Vehicles

Authors
Oliveira, M; Cerqueira, R; Pinto, JR; Fonseca, J; Teixeira, LF;

Publication
IEEE Transactions on Intelligent Vehicles

Abstract

2024

On the Suitability of B-cos Networks for the Medical Domain

Authors
Torto, IR; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical communities. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods, and the field lacks a more in-depth investigation into inherently interpretable models that focus on integrating knowledge that ensures the model is learning the correct rules. B-cos networks, a new approach that increases the interpretability of deep neural networks by inducing weight-input alignment during training, have shown promising results on natural image classification. In this work, we study the suitability of B-cos networks for the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation of several explanation quality assessment metrics. We find that, just as in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when more explanation properties are considered. © 2024 IEEE.
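For reference, below is a minimal sketch of the B-cos transform the paper evaluates, following Böhle et al. (CVPR 2022): weight rows are normalized to unit norm and the linear response is scaled by |cos(x, w)|^(B-1), so strongly aligned weight-input pairs dominate the output. The layer sizes and initialization are illustrative assumptions, not details from this paper.

```python
# Minimal B-cos linear unit (sketch, after Böhle et al., CVPR 2022).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BcosLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, b: float = 2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.b = b

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_hat = F.normalize(self.weight, dim=1)               # unit-norm rows
        linear = F.linear(x, w_hat)                           # w_hat . x
        cos = linear / (x.norm(dim=-1, keepdim=True) + 1e-6)  # cosine similarity
        return cos.abs().pow(self.b - 1) * linear             # |cos|^(B-1) * (w_hat . x)

layer = BcosLinear(16, 4)
out = layer(torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 4])
```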

2024

Finding Patterns in Ambiguity: Interpretable Stress Testing in the Decision Boundary

Authors
Gomes, I; Teixeira, LF; van Rijn, JN; Soares, C; Restivo, A; Cunha, L; Santos, M;

Publication
CoRR

Abstract

Supervised Theses

2023

Uncertainty-Driven Out-of-Distribution Detection in 3D LiDAR Object Detection for Autonomous Driving

Author
José António Barbosa da Fonseca Guerra

Institution
UP-FEUP

2023

Deep learning lifecycle management - an application to automatic inspection in industrial production lines

Author
Diogo Filipe de Oliveira Santos

Institution
UP-FEUP

2023

Improving Image Captioning through Segmentation

Author
Pedro Daniel Fernandes Ferreira

Institution
UP-FEUP

2023

Optimization of Color Adjustment in the Ceramic Industry using Genetic Algorithms

Author
Ricardo Daniel Quintas de Jesus Silva

Institution
UP-FEUP

2023

Learning to detect defects in industrial production lines from a few examples

Author
André Filipe Vila Chã Afonso

Institution
UP-FEUP