About

Luis F. Teixeira holds a PhD in Electrical and Computer Engineering from the University of Porto, in the area of computer vision (2009). He is currently an Associate Professor at the Department of Informatics Engineering of the Faculty of Engineering of the University of Porto and a researcher at INESC TEC. He was previously a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning, and interactive systems.

Topics of interest
Details

  • Name

    Luís Filipe Teixeira
  • Position

    Senior Researcher
  • Since

    17 September 2001
Publications

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, C; Teixeira, F;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.
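To make the "visual explanation" category discussed in this survey concrete, the snippet below is a minimal gradient-saliency sketch in PyTorch. The classifier, image path and preprocessing are placeholder assumptions for illustration; it is not code from the survey or from any specific method it reviews.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Generic pretrained classifier as a stand-in for any medical imaging model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# "example.png" is a placeholder input image.
image = preprocess(Image.open("example.png").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Gradient of the top predicted score with respect to the input pixels.
logits = model(image)
logits[0, logits.argmax()].backward()

# Saliency map: maximum absolute gradient across colour channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```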

2024

Towards Concept-Based Interpretability of Skin Lesion Diagnosis Using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation. © 2024 IEEE.
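A rough illustration of the general idea (scoring a skin-lesion image against concept-based textual descriptions with CLIP) is sketched below. The model checkpoint, concept texts and image path are placeholders, and the paper's actual embedding-learning strategy is not reproduced here; this is plain zero-shot-style scoring.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf CLIP; the paper adapts the embedding space rather than using it as-is.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical concept-based descriptions of a lesion (illustrative only).
concepts = [
    "a skin lesion with an irregular pigment network",
    "a skin lesion with a blue-whitish veil",
    "a skin lesion with regular streaks and a symmetric shape",
]

# "lesion.jpg" is a placeholder dermoscopic image.
image = Image.open("lesion.jpg").convert("RGB")
inputs = processor(text=concepts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Similarity of the image to each concept description.
scores = outputs.logits_per_image.softmax(dim=-1)
for concept, score in zip(concepts, scores[0].tolist()):
    print(f"{score:.3f}  {concept}")
```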

2024

Multimodal PointPillars for Efficient Object Detection in Autonomous Vehicles

Authors
Oliveira M.; Cerqueira R.; Pinto J.R.; Fonseca J.; Teixeira L.F.;

Publication
IEEE Transactions on Intelligent Vehicles

Abstract
Autonomous Vehicles aim to understand their surrounding environment by detecting relevant objects in the scene, which can be performed using a combination of sensors. The accurate prediction of pedestrians is a particularly challenging task, since the existing algorithms have more difficulty detecting small objects. This work studies and addresses this often overlooked problem by proposing Multimodal PointPillars (M-PP), a fast and effective novel fusion architecture for 3D object detection. Inspired by both MVX-Net and PointPillars, image features from a 2D CNN-based feature map are fused with the 3D point cloud in an early fusion architecture. By changing the heavy 3D convolutions of MVX-Net to a set of convolutional layers in 2D space, along with combining LiDAR and image information at an early stage, M-PP considerably improves inference time over the baseline, running at 28.49 Hz. It achieves inference speeds suitable for real-world applications while keeping the high performance of multimodal approaches. Extensive experiments show that our proposed architecture outperforms both MVX-Net and PointPillars for the pedestrian class in the KITTI 3D object detection dataset, with 62.78% in $AP_{BEV}$ (moderate difficulty), while also outperforming MVX-Net in the nuScenes dataset. Moreover, experiments were conducted to measure the detection performance based on object distance. The performance of M-PP surpassed other methods in pedestrian detection at any distance, particularly for faraway objects (more than 30 meters). Qualitative analysis shows that M-PP visibly outperformed MVX-Net for pedestrians and cyclists, while simultaneously making accurate predictions of cars.
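The early-fusion idea described above, decorating each LiDAR point with image features sampled at its projected pixel location before the pillar encoder, can be illustrated roughly as follows. The projection matrix, feature map and tensor shapes are placeholder assumptions for a generic point-decoration step, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def decorate_points_with_image_features(points, feat_map, proj, img_size):
    """Append image features to LiDAR points (early fusion, illustrative only).

    points:   (N, 4) tensor of x, y, z, intensity in LiDAR coordinates
    feat_map: (C, Hf, Wf) feature map from a 2D CNN backbone
    proj:     (3, 4) projection matrix mapping LiDAR points to pixel coordinates
    img_size: (H, W) of the image the projection maps into
    """
    ones = torch.ones(points.shape[0], 1)
    pts_h = torch.cat([points[:, :3], ones], dim=1)         # homogeneous coords (N, 4)
    pix = (proj @ pts_h.T).T                                 # (N, 3)
    pix = pix[:, :2] / pix[:, 2:3].clamp(min=1e-6)           # perspective divide -> (u, v)

    # Normalise pixel coordinates to [-1, 1] for grid_sample; points falling
    # outside the image simply sample border values in this sketch.
    h, w = img_size
    grid = torch.stack([pix[:, 0] / (w - 1), pix[:, 1] / (h - 1)], dim=1) * 2 - 1
    grid = grid.view(1, 1, -1, 2)                            # (1, 1, N, 2)

    sampled = F.grid_sample(feat_map.unsqueeze(0), grid, align_corners=True)
    img_feats = sampled.squeeze(0).squeeze(1).T              # (N, C)

    # Each point now carries its original attributes plus sampled image features.
    return torch.cat([points, img_feats], dim=1)
```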

2024

On the Suitability of B-cos Networks for the Medical Domain

Authors
Torto, IR; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE International Symposium on Biomedical Imaging, ISBI 2024, Athens, Greece, May 27-30, 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical communities. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods and the field lacks a more in depth investigation into inherently interpretable models that focus on integrating knowledge that ensures the model is learning the correct rules. A new approach, B-cos networks, for increasing the interpretability of deep neural networks by inducing weight-input alignment during training showed promising results on natural image classification. In this work, we study the suitability of these B-cos networks to the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation of several explanation quality assessment metrics. We find that, just like in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when considering more explanation properties. © 2024 IEEE.
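As a rough sketch of the mechanism being evaluated here: a B-cos layer replaces a plain linear transform with one whose output is scaled by the cosine similarity between input and unit-norm weight, raised to the power B-1, which encourages weight-input alignment during training. The module below is a simplified illustration under that assumption, not the reference B-cos implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplifiedBcosLinear(nn.Module):
    """Simplified B-cos linear layer (illustration of weight-input alignment).

    Per output unit: |cos(x, w)|^(B-1) * (w_hat . x), where w_hat is the
    unit-norm weight vector. For B = 1 this reduces to a linear layer with
    normalised weights; larger B rewards inputs that align with the weights.
    """

    def __init__(self, in_features, out_features, b=2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))
        self.b = b

    def forward(self, x):
        w_hat = F.normalize(self.weight, dim=1)              # unit-norm weights
        linear = x @ w_hat.T                                  # w_hat . x = ||x|| cos(x, w)
        x_norm = x.norm(dim=1, keepdim=True).clamp(min=1e-6)
        cos = linear / x_norm                                 # cosine of the angle
        return cos.abs().pow(self.b - 1) * linear             # |cos|^(B-1) * (w_hat . x)
```

In a full network, layers of this kind stand in for the standard linear and convolutional layers, which is what allows the explanation maps discussed in the abstract to be read off the model itself.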

2024

Finding Patterns in Ambiguity: Interpretable Stress Testing in the Decision Boundary

Authors
Gomes, I; Teixeira, LF; van Rijn, JN; Soares, C; Restivo, A; Cunha, L; Santos, M;

Publication
CoRR

Abstract

Supervised theses

2023

Human Action Evaluation applied to Weightlifting

Author
Argus Luconi Rosenhaim

Institution
UP-FEUP

2023

Unconstrained Human Pose Estimation to Support Breast Cancer Survivor's Prospective Surveillance

Author
João Pedro da Silva Monteiro

Institution
UP-FEUP

2023

Disentanglement Representation Learning for Generalizability in Medical Multi-center Data

Author
Daniel José Barros da Silva

Institution
UP-FEUP

2023

Self-Supervised Learning for Medical Image Classification: A Study on MoCo-CXR

Author
Hugo Miguel Monteiro Guimarães

Institution
UP-FEUP

2023

Self-explanatory computer-aided diagnosis with limited supervision

Author
Isabel Cristina Rio-Torto de Oliveira

Institution
UP-FEUP