Details

  • Name

    Wilson Santos Silva
  • Position

    External Research Collaborator
  • Since

    15 February 2016
Publications

2025

FedGS: Federated Gradient Scaling for Heterogeneous Medical Image Segmentation

Authors
Schutte, P; Corbetta, V; Beets-Tan, R; Silva, W;

Publication
Lecture Notes in Computer Science - Medical Image Computing and Computer Assisted Intervention – MICCAI 2024 Workshops

Abstract

2025

Multi-task Learning Approach for Intracranial Hemorrhage Prognosis

Authors
Cobo, M; del Barrio, AP; Fernández Miranda, PM; Bellón, PS; Iglesias, LL; Silva, W;

Publication
Machine Learning in Medical Imaging, Part II, MLMI 2024

Abstract
Prognosis after intracranial hemorrhage (ICH) is influenced by a complex interplay between imaging and tabular data. A rapid and reliable prognosis is crucial for effective patient stratification and informed treatment decision-making. In this study, we aim to enhance image-based prognosis by learning a robust feature representation shared between prognosis and the clinical and demographic variables most highly correlated with it. Our approach mimics clinical decision-making by reinforcing the model to learn valuable prognostic data embedded in the image. We propose a 3D multi-task image model to predict prognosis, Glasgow Coma Scale and age, improving accuracy and interpretability. Our method outperforms current state-of-the-art baseline image models, and demonstrates superior performance in ICH prognosis compared to four board-certified neuroradiologists using only CT scans as input. We further validate our model with interpretability saliency maps. Code is available at https://github.com/MiriamCobo/MultitaskLearning_ICH_Prognosis.git.
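
The multi-task design described in the abstract (one shared image encoder with separate heads for prognosis, Glasgow Coma Scale and age) can be illustrated with a minimal PyTorch sketch. This is not the authors' published code; the layer sizes, loss weights and two-class prognosis target are illustrative assumptions.

import torch
import torch.nn as nn

class MultiTaskICH(nn.Module):
    def __init__(self, n_prognosis_classes: int = 2):
        super().__init__()
        # Shared 3D convolutional encoder over the CT volume.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # Task-specific heads on top of the shared representation.
        self.prognosis_head = nn.Linear(32, n_prognosis_classes)
        self.gcs_head = nn.Linear(32, 1)   # Glasgow Coma Scale (regression)
        self.age_head = nn.Linear(32, 1)   # age (regression)

    def forward(self, volume):
        z = self.backbone(volume)
        return self.prognosis_head(z), self.gcs_head(z), self.age_head(z)

def multitask_loss(outputs, targets, w_gcs=0.5, w_age=0.5):
    """Weighted sum of the three task losses; the weights are assumptions."""
    prog_logits, gcs_pred, age_pred = outputs
    prog_y, gcs_y, age_y = targets
    loss = nn.functional.cross_entropy(prog_logits, prog_y)
    loss += w_gcs * nn.functional.mse_loss(gcs_pred.squeeze(1), gcs_y)
    loss += w_age * nn.functional.mse_loss(age_pred.squeeze(1), age_y)
    return loss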

2024

Latent Diffusion Models for Privacy-preserving Medical Case-based Explanations

Authors
Campos, F; Petrychenko, L; Teixeira, LF; Silva, W;

Publication
Proceedings of the First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED 2024) co-located with 27th European Conference on Artificial Intelligence (ECAI 2024), Santiago de Compostela, Spain, October 20, 2024.

Abstract
Deep-learning techniques can improve the efficiency of medical diagnosis while challenging human experts’ accuracy. However, the rationale behind these classifiers’ decisions is largely opaque, which is dangerous in sensitive applications such as healthcare. Case-based explanations explain the decision process behind these mechanisms by exemplifying similar cases using previous studies from other patients. Yet, these may contain personally identifiable information, which makes them impossible to share without violating patients’ privacy rights. Previous works have used GANs to generate anonymous case-based explanations, which had limited visual quality. We solve this issue by employing a latent diffusion model in a three-step procedure: generating a catalogue of synthetic images, removing the images that closely resemble existing patients, and using this anonymous catalogue during an explanation retrieval process. We evaluate the proposed method on the MIMIC-CXR-JPG dataset and achieve explanations that simultaneously have high visual quality, are anonymous, and retain their explanatory value.
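
The three-step procedure described in the abstract (generate a synthetic catalogue, discard images too close to real patients, then retrieve explanations from the anonymous catalogue) can be sketched as follows. This is an illustrative outline, not the paper's implementation: sample_synthetic stands in for latent-diffusion sampling, embed for any image feature extractor, and the cosine-similarity threshold is an assumed re-identification criterion.

import numpy as np

def build_anonymous_catalogue(sample_synthetic, embed, real_images,
                              n_candidates=1000, sim_threshold=0.95):
    # Step 1: generate a catalogue of synthetic candidate images.
    candidates = [sample_synthetic() for _ in range(n_candidates)]
    real_embs = np.stack([embed(img) for img in real_images])
    catalogue = []
    for img in candidates:
        e = embed(img)
        # Step 2: discard candidates too similar to any real patient
        # (high cosine similarity suggests re-identification risk).
        sims = real_embs @ e / (np.linalg.norm(real_embs, axis=1)
                                * np.linalg.norm(e) + 1e-8)
        if sims.max() < sim_threshold:
            catalogue.append((img, e))
    return catalogue

def retrieve_explanation(query_image, catalogue, embed):
    # Step 3: return the anonymous catalogue image closest to the query.
    q = embed(query_image)
    scores = [e @ q / (np.linalg.norm(e) * np.linalg.norm(q) + 1e-8)
              for _, e in catalogue]
    return catalogue[int(np.argmax(scores))][0]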

2024

An End-to-End Framework to Classify and Generate Privacy-Preserving Explanations in Pornography Detection

Authors
Vieira, M; Goncalves, T; Silva, W; Sequeira, F;

Publication
BIOSIG 2024 - Proceedings of the 23rd International Conference of the Biometrics Special Interest Group

Abstract
The proliferation of explicit material online, particularly pornography, has emerged as a paramount concern in our society. While state-of-the-art pornography detection models already show some promising results, their decision-making processes are often opaque, raising ethical issues. This study focuses on uncovering the decision-making process of such models, specifically fine-tuned convolutional neural networks and transformer architectures. We compare various explainability techniques to illuminate the limitations, potential improvements, and ethical implications of using these algorithms. Results show that models trained on diverse and dynamic datasets tend to be more robust and generalisable than models trained on static datasets. Additionally, transformer models demonstrate superior performance and generalisation compared to convolutional ones. Furthermore, we implemented a privacy-preserving framework during explanation retrieval, which contributes to developing secure and ethically sound biometric applications.
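
As one concrete example of the explainability techniques such a comparison typically covers, the sketch below computes a plain gradient saliency map for a PyTorch image classifier. It is a generic illustration, not the paper's specific pipeline; the model and target class are assumed inputs.

import torch

def gradient_saliency(model, image, target_class):
    """Return |d logit / d pixel| as a per-pixel importance map."""
    model.eval()
    image = image.clone().requires_grad_(True)   # shape (1, C, H, W)
    logits = model(image)
    logits[0, target_class].backward()
    # Max over channels gives a single-channel saliency heatmap.
    return image.grad.detach().abs().max(dim=1).values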

2024

Anatomical Concept-based Pseudo-labels for Increased Generalizability in Breast Cancer Multi-center Data

Authors
Miranda, I; Agrotis, G; Tan, RB; Teixeira, LF; Silva, W;

Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024

Abstract
Breast cancer, the most prevalent cancer among women, poses a significant healthcare challenge, demanding effective early detection for optimal treatment outcomes. Mammography, the gold standard for breast cancer detection, employs low-dose X-rays to reveal tissue details, particularly cancerous masses and calcium deposits. This work focuses on evaluating the impact of incorporating anatomical knowledge to improve the performance and robustness of a breast cancer classification model. In order to achieve this, a methodology was devised to generate anatomical pseudo-labels, simulating plausible anatomical variations in cancer masses. These variations, encompassing changes in mass size and intensity, closely reflect concepts from the BI-RADS scale. Besides anatomy-based augmentation, we propose a novel loss term promoting the learning of cancer grading by our model. Experiments were conducted on publicly available datasets simulating both in-distribution and out-of-distribution scenarios to thoroughly assess the model's performance under various conditions.
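
A minimal sketch of the kind of anatomy-inspired augmentation the abstract describes: resize a mass inside its bounding box, rescale its intensity, and emit a pseudo-label for the simulated severity. The function name, factor values, normalised-intensity assumption and the label rule are illustrative guesses, not the paper's exact procedure.

import numpy as np
from scipy.ndimage import zoom

def augment_mass(image, mask, size_factor=1.2, intensity_factor=1.1):
    """Grow/shrink the masked mass and rescale its intensity.

    Assumes a 2D mammogram normalised to [0, 1] and a non-empty binary mask.
    """
    out = image.copy()
    ys, xs = np.where(mask > 0)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = image[y0:y1, x0:x1]
    # Resize the mass patch, then crop/pad it back to the original box.
    resized = zoom(patch, size_factor, order=1)
    h, w = y1 - y0, x1 - x0
    resized = resized[:h, :w] if size_factor >= 1 else np.pad(
        resized, ((0, h - resized.shape[0]), (0, w - resized.shape[1])))
    out[y0:y1, x0:x1] = np.clip(resized * intensity_factor, 0, 1)
    # Pseudo-label: larger/brighter simulated masses get a higher grade.
    pseudo_label = int(size_factor > 1.0) + int(intensity_factor > 1.0)
    return out, pseudo_label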

Supervised Theses

2022

Towards Biometrically-Morphed Medical Case-based Explanations

Author
Maria Manuel Domingos Carvalho

Institution
UM

2022

Biomedical Multimodal Explanations – Increasing Diversity and Complementarity in Explainable Artificial Intelligence

Author
Diogo Baptista Martins da Mata

Institution
UM

2021

A privacy-preserving framework for case-based interpretability in machine learning

Author
Maria Helena Sampaio de Mendonça Montenegro e Almeida

Institution
UM