Publications

Publications by CTM

2024

An End-to-End Framework to Classify and Generate Privacy-Preserving Explanations in Pornography Detection

Authors
Vieira, M; Gonçalves, T; Silva, W; Sequeira, F;

Publication
BIOSIG 2024 - Proceedings of the 23rd International Conference of the Biometrics Special Interest Group

Abstract
The proliferation of explicit material online, particularly pornography, has emerged as a paramount concern in our society. While state-of-the-art pornography detection models already show some promising results, their decision-making processes are often opaque, raising ethical issues. This study focuses on uncovering the decision-making process of such models, specifically fine-tuned convolutional neural networks and transformer architectures. We compare various explainability techniques to illuminate the limitations, potential improvements, and ethical implications of using these algorithms. Results show that models trained on diverse and dynamic datasets tend to have more robustness and generalisability when compared to models trained on static datasets. Additionally, transformer models demonstrate superior performance and generalisation compared to convolutional ones. Furthermore, we implemented a privacy-preserving framework during explanation retrieval, which contributes to developing secure and ethically sound biometric applications. © 2024 IEEE.
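The abstract above mentions a privacy-preserving framework applied during explanation retrieval. The paper's actual method is not detailed here; as a minimal sketch of the general idea, the hypothetical helper below blurs the regions of an explanation map that a (given) identity mask flags as identity-revealing, leaving the rest untouched. All names and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_blur(img, k=7):
    """Simple box blur of a 2-D array via 2-D cumulative sums (edge padding)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # zero row/column so window sums are differences
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def privacy_preserving_explanation(explanation, identity_mask, k=7):
    """Blur only the pixels flagged as identity-revealing by the boolean mask."""
    blurred = box_blur(explanation, k)
    return np.where(identity_mask, blurred, explanation)
```

Unmasked pixels pass through unchanged, so the explanation stays faithful outside the protected regions; the blur kernel size `k` trades anonymization strength against how much of the masked evidence survives.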

2024

Assessing the Impact of Federated Learning and Differential Privacy on Multi-centre Polyp Segmentation

Authors
Stelter L.; Corbetta V.; Beets-Tan R.; Silva W.;

Publication
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS

Abstract
Federated Learning (FL) is emerging in the medical field to address the need for diverse datasets while complying with data protection regulations. This decentralised learning paradigm allows hospitals (clients) to train machine learning models locally, ensuring that patient data remains within the confines of its originating institution. Nonetheless, FL by itself is not enough to guarantee privacy, as the central aggregation process may still be susceptible to identity-exposing attacks, potentially compromising data protection compliance. To strengthen privacy, differential privacy (DP) is often introduced. In this work, we conduct a comprehensive comparative analysis to evaluate the impact of DP in both traditional Centralised Learning (CL) frameworks and FL for polyp segmentation, a common medical image analysis task. Experiments are performed in PolypGen, a multi-centre publicly available dataset designed for polyp segmentation. The results show a clear drop in performance with the introduction of DP, exposing the trade-off between privacy and performance and highlighting the need to develop novel privacy-preserving techniques.
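The abstract describes combining FL with DP because plain federated aggregation can still leak identity information. A common way to introduce DP at the server is the Gaussian mechanism: clip each client's update to bound its sensitivity, average, then add calibrated noise. The sketch below illustrates that pattern only; the clip norm and noise multiplier are assumed hyperparameters, not values from the paper.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a client's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Average clipped client updates and add Gaussian noise (Gaussian mechanism)."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Per-coordinate noise scaled to the sensitivity of the average, clip_norm / n.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

The performance drop the abstract reports corresponds to raising `noise_multiplier` (stronger privacy, noisier aggregate): the privacy/utility trade-off is explicit in `sigma`.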

2024

Towards Case-based Interpretability for Medical Federated Learning

Authors
Latorre, L; Petrychenko, L; Beets-Tan, R; Kopytova, T; Silva, W;

Publication
Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS

Abstract
We explore deep generative models to generate case-based explanations in a medical federated learning setting. Explaining AI model decisions through case-based interpretability is paramount to increasing trust and allowing widespread adoption of AI in clinical practice. However, medical AI training paradigms are shifting towards federated learning settings in order to comply with data protection regulations. In a federated scenario, past data is inaccessible to the current user. Thus, we use a deep generative model to generate synthetic examples that protect privacy and explain decisions. Our proof-of-concept focuses on pleural effusion diagnosis and uses publicly available Chest X-ray data. © 2024 IEEE.

2024

Enhancing Cross-Modal Medical Image Segmentation Through Compositionality

Authors
Eijpe, A; Corbetta, V; Chupetlovska, K; Beets-Tan, R; Silva, W;

Publication
Lecture Notes in Computer Science - Deep Generative Models

Abstract

2024

Massively Annotated Datasets for Assessment of Synthetic and Real Data in Face Recognition

Authors
Neto, PC; Mamede, RM; Albuquerque, C; Gonçalves, T; Sequeira, AF;

Publication
2024 IEEE 18th International Conference on Automatic Face and Gesture Recognition, FG 2024

Abstract
Face recognition applications have grown in parallel with the size of datasets, complexity of deep learning models and computational power. However, while deep learning models evolve to become more capable and computational power keeps increasing, the datasets available are being retracted and removed from public access. Privacy and ethical concerns are relevant topics within these domains. Through generative artificial intelligence, researchers have put efforts into the development of completely synthetic datasets that can be used to train face recognition systems. Nonetheless, the recent advances have not been sufficient to achieve performance comparable to the state-of-the-art models trained on real data. To study the drift between the performance of models trained on real and synthetic datasets, we leverage a massive attribute classifier (MAC) to create annotations for four datasets: two real and two synthetic. From these annotations, we conduct studies on the distribution of each attribute within all four datasets. Additionally, we further inspect the differences between real and synthetic datasets on the attribute set. When comparing through the Kullback-Leibler divergence we have found differences between real and synthetic samples. Interestingly enough, we have verified that while real samples suffice to explain the synthetic distribution, the opposite could not be further from being true.
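The comparison the abstract describes rests on Kullback-Leibler divergence between attribute distributions of real and synthetic datasets. As a small illustration of that measurement (not the paper's pipeline: the smoothing and attribute encoding here are assumptions), one can build empirical distributions over a categorical attribute and compare them in both directions, since KL divergence is asymmetric:

```python
import numpy as np

def attribute_distribution(labels, num_classes):
    """Empirical distribution of a categorical attribute, with Laplace smoothing
    so that no class has zero probability (KL would otherwise be undefined)."""
    counts = np.bincount(labels, minlength=num_classes) + 1.0
    return counts / counts.sum()

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; note KL(p||q) != KL(q||p) in general."""
    return float(np.sum(p * np.log(p / q)))
```

The asymmetry is what makes the abstract's closing observation possible: KL(synthetic || real) can be small while KL(real || synthetic) is large, i.e. real data "covers" the synthetic distribution but not the reverse.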

2024

Fairness Under Cover: Evaluating the Impact of Occlusions on Demographic Bias in Facial Recognition

Authors
Mamede, RM; Neto, PC; Sequeira, AF;

Publication
CoRR

Abstract
