About

Luís F. Teixeira holds a PhD in Electrical and Computer Engineering from the University of Porto, in the area of computer vision (2009). He is currently an Associate Professor in the Department of Informatics Engineering at the Faculty of Engineering of the University of Porto and a researcher at INESC TEC. Previously, he was a researcher at INESC Porto (2001-2008), a Visiting Researcher at the University of Victoria (2006), and a Senior Scientist at Fraunhofer AICOS (2008-2013). His current research interests include computer vision, machine learning and interactive systems.

Topics of interest
Details

  • Name

    Luís Filipe Teixeira
  • Position

    Senior Researcher
  • Since

    17 September 2001
Publications

2025

Markerless multi-view 3D human pose estimation: A survey

Authors
Nogueira, AFR; Oliveira, HP; Teixeira, LF;

Publication
IMAGE AND VISION COMPUTING

Abstract
3D human pose estimation aims to reconstruct the human skeleton of every individual in a scene by detecting a set of body joints. Accurate and efficient methods are required for many real-world applications, including animation, human-robot interaction, surveillance systems and sports, among others. However, obstacles such as occlusions, arbitrary camera viewpoints and the scarcity of 3D-labelled data hamper model performance and limit deployment in real-world scenarios. The growing availability of cameras has led researchers to explore multi-view solutions, which can exploit different perspectives to reconstruct the pose. Most existing reviews focus on monocular 3D human pose estimation, and no comprehensive survey devoted exclusively to multi-view approaches has appeared since 2012. The goal of this survey is to fill that gap: it presents an overview of methodologies for 3D pose estimation in multi-view settings, examines the strategies devised to address the various challenges, and identifies their limitations. The reviewed articles show that most methods are fully supervised approaches based on geometric constraints. Nonetheless, most suffer from 2D pose mismatches; incorporating temporal consistency and depth information has been suggested to reduce the impact of this limitation, while working directly with 3D features avoids the problem entirely, at the expense of higher computational complexity. Models with lower levels of supervision were found to overcome some of the issues related to 3D pose, particularly the scarcity of labelled datasets. No method is therefore yet capable of solving all the challenges associated with reconstructing the 3D pose.
Given the existing trade-off between complexity and performance, the best method depends on the application scenario, and further research is still required to develop an approach capable of quickly inferring a highly accurate 3D pose at a bearable computational cost. To this end, techniques such as active learning, learning with low levels of supervision, temporal consistency, view selection, depth estimation and multi-modal approaches are promising strategies to keep in mind when developing new methodologies for this task.

2025

A two-step concept-based approach for enhanced interpretability and trust in skin lesion diagnosis

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
COMPUTATIONAL AND STRUCTURAL BIOTECHNOLOGY JOURNAL

Abstract
The main challenges hindering the adoption of deep learning-based systems in clinical settings are the scarcity of annotated data and the lack of interpretability and trust in these systems. Concept Bottleneck Models (CBMs) offer inherent interpretability by constraining the final disease prediction on a set of human-understandable concepts. However, this inherent interpretability comes at the cost of greater annotation burden. Additionally, adding new concepts requires retraining the entire system. In this work, we introduce a novel two-step methodology that addresses both of these challenges. By simulating the two stages of a CBM, we utilize a pretrained Vision Language Model (VLM) to automatically predict clinical concepts, and an off-the-shelf Large Language Model (LLM) to generate disease diagnoses grounded on the predicted concepts. Furthermore, our approach supports test-time human intervention, enabling corrections to predicted concepts, which improves final diagnoses and enhances transparency in decision-making. We validate our approach on three skin lesion datasets, demonstrating that it outperforms traditional CBMs and state-of-the-art explainable methods, all without requiring any training and utilizing only a few annotated examples. The code is available at https://github.com/CristianoPatricio/2step-concept-based-skin-diagnosis.
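The two-step pipeline described in this abstract can be sketched as follows. This is an illustrative sketch only, not the authors' code: `predict_concepts` and `diagnose` are hypothetical stand-ins for the pretrained VLM and the off-the-shelf LLM, and the toy concepts and decision rule are invented for illustration.

```python
# Sketch of the two-step concept-based diagnosis pipeline (illustrative only).
# Step 1 is performed by a pretrained VLM in the paper; step 2 by an LLM.

def predict_concepts(image):
    # Step 1 (stand-in for the VLM): map an image to clinical concepts.
    # The concept names and values here are invented for illustration.
    return {"asymmetry": True, "blue_whitish_veil": False, "atypical_network": True}

def diagnose(concepts):
    # Step 2 (stand-in for the LLM): a diagnosis grounded only on the
    # predicted concepts, so the decision is traceable to them.
    risk = sum(concepts.values())
    return "melanoma" if risk >= 2 else "benign"

def pipeline(image, corrections=None):
    concepts = predict_concepts(image)
    if corrections:  # test-time human intervention: a clinician fixes concepts
        concepts.update(corrections)
    return diagnose(concepts), concepts

label, used_concepts = pipeline("lesion.png")
# Correcting a concept reruns only the second step, changing the diagnosis:
label_corrected, _ = pipeline("lesion.png", corrections={"atypical_network": False})
```

Because the diagnosis depends only on the concept set, correcting a single concept at test time is enough to update the final prediction, which is the intervention mechanism the abstract describes.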

2025

CBVLM: Training-free Explainable Concept-based Large Vision Language Models for Medical Image Classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, JC;

Publication
CoRR

2024

Explainable Deep Learning Methods in Medical Image Classification: A Survey

Authors
Patrício, C; Neves, JC; Teixeira, LF;

Publication
ACM COMPUTING SURVEYS

Abstract
The remarkable success of deep learning has prompted interest in its application to medical imaging diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data, these models are hardly adopted in clinical workflows, mainly due to their lack of interpretability. The black-box nature of deep learning models has raised the need for devising strategies to explain the decision process of these models, leading to the creation of the topic of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical imaging diagnosis, including visual, textual, example-based and concept-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of the explanations. In addition, we include a performance comparison among a set of report generation-based methods. Finally, the major challenges in applying XAI to medical imaging and the future research directions on the topic are discussed.

2024

Towards Concept-based Interpretability of Skin Lesion Diagnosis using Vision-Language Models

Authors
Patrício, C; Teixeira, LF; Neves, JC;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation.
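The core idea of the embedding strategy above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the paper's implementation: `embed`-style CLIP encoders are replaced by toy vectors, and the class-to-concept descriptions are invented. The point it shows is that each class is represented by the embedding of its concept-based description, and classification picks the class whose description embedding is most similar to the image embedding.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(image_emb, class_concept_embs):
    # Pick the class whose concept-description embedding is closest
    # to the image embedding (CLIP-style zero-shot matching, but with
    # concept descriptions instead of bare class names).
    sims = {cls: cosine(image_emb, emb) for cls, emb in class_concept_embs.items()}
    return max(sims, key=sims.get)

# Toy vectors standing in for CLIP image/text encoder outputs:
image_emb = np.array([0.9, 0.1, 0.2])
class_concept_embs = {
    # e.g. text embedding of "asymmetric lesion with atypical pigment network"
    "melanoma": np.array([1.0, 0.0, 0.3]),
    # e.g. text embedding of "symmetric lesion with a regular pattern"
    "nevus": np.array([0.0, 1.0, 0.1]),
}
prediction = classify(image_emb, class_concept_embs)  # "melanoma" for these toys
```

Swapping class names for concept descriptions changes only the text side of the matching, which is why the approach needs few concept-annotated samples: the image encoder is left untouched.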

Supervised theses

2023

Unconstrained Human Pose Estimation to Support Breast Cancer Survivor's Prospective Surveillance

Author
João Pedro da Silva Monteiro

Institution
UP-FEUP

2023

Self-Supervised Learning for Medical Image Classification: A Study on MoCo-CXR

Author
Hugo Miguel Monteiro Guimarães

Institution
UP-FEUP

2023

Learning to detect defects in industrial production lines from a few examples

Author
André Filipe Vila Chã Afonso

Institution
UP-FEUP

2023

Self-explanatory computer-aided diagnosis with limited supervision

Author
Isabel Cristina Rio-Torto de Oliveira

Institution
UP-FEUP

2023

Integrating Anatomical Prior Knowledge for Increased Generalisability in Breast Cancer Multi-center Data

Author
Isabela Marques de Miranda

Institution
UP-FEUP