About

Isabel Rio-Torto completed her MSc in Electrical and Computer Engineering in 2019 at the Faculty of Engineering of the University of Porto (FEUP). She is currently a research assistant at INESC TEC, affiliated with the Visual Computing and Machine Intelligence Group (VCMI), and is pursuing a PhD in Computer Science at the Faculty of Sciences of the University of Porto (FCUP). She is also an Invited Assistant at FEUP, teaching programming courses. Her work currently focuses on "Self-explanatory computer-aided diagnosis with limited supervision".

Topics of interest
Details

  • Name

    Isabel Rio-Torto
  • Position

    Research Assistant
  • Since

    06 July 2020
Publications

2025

CBVLM: Training-free Explainable Concept-based Large Vision Language Models for Medical Image Classification

Authors
Patrício, C; Torto, IR; Cardoso, JS; Teixeira, LF; Neves, JC;

Publication
CoRR

2024

DeViL: Decoding Vision features into Language

Authors
Dani, M; Rio Torto, I; Alaniz, S; Akata, Z;

Publication
PATTERN RECOGNITION, DAGM GCPR 2023

Abstract
Post-hoc explanation methods have often been criticised for abstracting away the decision-making process of deep neural networks. In this work, we would like to provide natural language descriptions for what different layers of a vision backbone have learned. Our DeViL method generates textual descriptions of visual features at different layers of the network as well as highlights the attribution locations of learned concepts. We train a transformer network to translate individual image features of any vision layer into a prompt that a separate off-the-shelf language model decodes into natural language. By employing dropout both per-layer and per-spatial-location, our model can generalize training on image-text pairs to generate localized explanations. As it uses a pre-trained language model, our approach is fast to train and can be applied to any vision backbone. Moreover, DeViL can create open-vocabulary attribution maps corresponding to words or phrases even outside the training scope of the vision model. We demonstrate that DeViL generates textual descriptions relevant to the image content on CC3M, surpassing previous lightweight captioning models and attribution maps, uncovering the learned concepts of the vision backbone. Further, we analyze fine-grained descriptions of layers as well as specific spatial locations and show that DeViL outperforms the current state-of-the-art on the neuron-wise descriptions of the MILANNOTATIONS dataset.

2024

On the Suitability of B-cos Networks for the Medical Domain

Authors
Rio-Torto, I; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by clinical communities. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods, and the field lacks a more in-depth investigation into inherently interpretable models that focus on integrating knowledge that ensures the model is learning the correct rules. B-cos networks, a new approach for increasing the interpretability of deep neural networks by inducing weight-input alignment during training, showed promising results on natural image classification. In this work, we study the suitability of B-cos networks for the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation of several explanation quality assessment metrics. We find that, just like in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when considering more explanation properties.

2024

Parameter-Efficient Generation of Natural Language Explanations for Chest X-ray Classification

Authors
Torto, IR; Cardoso, JS; Teixeira, LF;

Publication
Medical Imaging with Deep Learning, 3-5 July 2024, Paris, France.

2023

Fill in the blank for fashion complementary outfit product retrieval: VISUM summer school competition

Authors
Castro, E; Ferreira, PM; Rebelo, A; Rio-Torto, I; Capozzi, L; Ferreira, MF; Goncalves, T; Albuquerque, T; Silva, W; Afonso, C; Sousa, RG; Cimarelli, C; Daoudi, N; Moreira, G; Yang, HY; Hrga, I; Ahmad, J; Keswani, M; Beco, S;

Publication
MACHINE VISION AND APPLICATIONS

Abstract
Every year, the VISion Understanding and Machine intelligence (VISUM) summer school runs a competition where participants can learn and share knowledge about Computer Vision and Machine Learning in a vibrant environment. The 2021 edition of VISUM focused on applying those methodologies to fashion. Recently, there has been an increase of interest within the scientific community in applying computer vision methodologies to the fashion domain. This is highly motivated by fashion being one of the world's largest industries, with a rapid development in e-commerce, especially since the COVID-19 pandemic. Computer Vision for Fashion enables a wide range of innovations, from personalized recommendations to outfit matching. The competition enabled students to apply the knowledge acquired in the summer school to a real-world problem. The ambition was to foster research and development in fashion outfit complementary product retrieval by leveraging vast visual and textual data with domain knowledge. For this, a new fashion outfit dataset (acquired and curated by FARFETCH) for research and benchmark purposes is introduced. Additionally, a competitive baseline with an original negative sampling process for triplet mining was implemented and served as a starting point for participants. The top 3 performing methods are described in this paper since they constitute the reference state-of-the-art for this particular problem. To our knowledge, this is the first challenge in fashion outfit complementary product retrieval. Moreover, this joint project between academia and industry brings several relevant contributions to disseminating science and technology, promoting economic and social development, and helping to connect early-career researchers to real-world industry challenges.

Supervised theses

2023

Self-Supervised Learning for Medical Image Classification: A Study on MoCo-CXR

Author
Hugo Miguel Monteiro Guimarães

Institution
UM

2023

Improving Image Captioning through Segmentation

Author
Pedro Daniel Fernandes Ferreira

Institution
UM

2021

Combining simulated and real images in deep learning

Author
Pedro Xavier Tavares Monteiro Correia de Pinho

Institution
UM

2020

Automatic generation of textual explanations in deep learning

Author
Patrícia Ferreira Rocha

Institution
UM