
Publications by CTM

2023

Evaluating the Performance of Explanation Methods on Ordinal Regression CNN Models

Authors
Barbero-Gómez, J; Cruz, R; Cardoso, JS; Gutiérrez, PA; Hervás-Martínez, C;

Publication
ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2023, PT II

Abstract
This paper introduces an evaluation procedure to validate the efficacy of explanation methods for Convolutional Neural Network (CNN) models in ordinal regression tasks. Two ordinal methods are contrasted against a baseline using cross-entropy, across four datasets. A statistical analysis demonstrates that attribution methods, such as Grad-CAM and IBA, perform significantly better when used with ordinal regression CNN models compared to a baseline approach in most ordinal and nominal metrics. The study suggests that incorporating ordinal information into the attribution map construction process may improve the explanations further.
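For readers unfamiliar with the attribution methods evaluated above, the core Grad-CAM computation can be sketched in a few lines of NumPy. This is a generic illustration, not the paper's evaluation code: the activation and gradient tensors are assumed to come from a CNN's last convolutional layer, and the function name is ours.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM attribution map from one conv layer.

    activations: (K, H, W) feature maps of the chosen layer
    gradients:   (K, H, W) gradients of the target score w.r.t. those maps
    Returns an (H, W) map normalised to [0, 1].
    """
    # Channel weights: global-average-pool the gradients over space.
    weights = gradients.mean(axis=(1, 2))                      # (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalise for visualisation; guard against an all-zero map.
    return cam / cam.max() if cam.max() > 0 else cam

# Toy example with random activations/gradients standing in for a real CNN.
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 4, 4))
grads = rng.standard_normal((8, 4, 4))
heatmap = grad_cam(acts, grads)
```

The resulting map is upsampled to the input resolution in practice; the paper's contribution is the procedure for judging how well such maps align with ordinal labels, not the map computation itself.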

2023

Detecting Concepts and Generating Captions from Medical Images: Contributions of the VCMI Team to ImageCLEFmedical Caption 2023

Authors
Torto, IR; Patrício, C; Montenegro, H; Gonçalves, T; Cardoso, JS;

Publication
Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023.

Abstract
This paper presents the main contributions of the VCMI Team to the ImageCLEFmedical Caption 2023 task. We addressed both the concept detection and caption prediction tasks. Regarding concept detection, our team employed different approaches to assign concepts to medical images: multi-label classification, adversarial training, autoregressive modelling, image retrieval, and concept retrieval. We also developed three model ensembles merging the results of some of the proposed methods. Our best submission obtained an F1-score of 0.4998, ranking 3rd among nine teams. Regarding the caption prediction task, our team explored two main approaches based on image retrieval and language generation. The language generation approaches, based on a vision model as the encoder and a language model as the decoder, yielded the best results, allowing us to rank 5th among thirteen teams, with a BERTScore of 0.6147. © 2023 Copyright for this paper by its authors.

2023

Transformer-Based Multi-Prototype Approach for Diabetic Macular Edema Analysis in OCT Images

Authors
Vidal, PL; Moura, Jd; Novo, J; Ortega, M; Cardoso, JS;

Publication
IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP 2023, Rhodes Island, Greece, June 4-10, 2023

Abstract
Optical Coherence Tomography (OCT) is the major diagnostic tool for the leading cause of blindness in developed countries: Diabetic Macular Edema (DME). Depending on the type of fluid accumulations, different treatments are needed. In particular, Cystoid Macular Edemas (CMEs) represent the most severe scenario, while Diffuse Retinal Thickening (DRT) is an early indicator of the disease but a challenging scenario to detect. While methodologies exist, their explanatory power is limited to the input sample itself. However, due to the complexity of these accumulations, this may not be enough for a clinician to assess the validity of the classification. Thus, in this work, we propose a novel approach based on multi-prototype networks with vision transformers to obtain an example-based explainable classification. Our proposal achieved robust results in two representative OCT devices, with a mean accuracy of 0.9099 ± 0.0083 and 0.8582 ± 0.0126 for CME and DRT-type fluid accumulations, respectively. © 2023 IEEE.
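The example-based classification idea described above can be illustrated with a nearest-prototype lookup. This is a simplified NumPy sketch over assumed toy embeddings; the paper's transformer backbone and prototype learning are not reproduced here, and all names are illustrative.

```python
import numpy as np

def classify_by_prototype(embedding, prototypes):
    """Assign the class of the nearest prototype (Euclidean distance).

    embedding:  (D,) feature vector of the query image
    prototypes: dict mapping class name -> (P, D) array of learned prototypes
    Returns (predicted class, index of the matching prototype), so the
    matching prototype can be shown as the example-based explanation.
    """
    best = None  # (label, prototype index, distance)
    for label, protos in prototypes.items():
        dists = np.linalg.norm(protos - embedding, axis=1)  # (P,)
        i = int(dists.argmin())
        if best is None or dists[i] < best[2]:
            best = (label, i, float(dists[i]))
    return best[0], best[1]

# Toy 2-D embeddings: two CME prototypes, one DRT prototype.
protos = {
    "CME": np.array([[1.0, 0.0], [0.9, 0.1]]),
    "DRT": np.array([[0.0, 1.0]]),
}
label, idx = classify_by_prototype(np.array([0.92, 0.08]), protos)
# The query is closest to the second CME prototype.
```

Because the prediction is tied to a specific stored prototype, the clinician can inspect that prototype as a concrete example justifying the decision.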

2023

Unmasking Biases and Navigating Pitfalls in the Ophthalmic Artificial Intelligence Lifecycle: A Review

Authors
Nakayama, LF; Matos, J; Quion, J; Novaes, F; Mitchell, WG; Mwavu, R; Ji Hung, JY; dy Santiago, AP; Phanphruk, W; Cardoso, JS; Celi, LA;

Publication
CoRR

Abstract

2023

Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition

Authors
Neto, PC; Caldeira, E; Cardoso, JS; Sequeira, AF;

Publication
BIOSIG 2023 - Proceedings of the 22nd International Conference of the Biometrics Special Interest Group, Darmstadt, Germany, September 20-22, 2023

Abstract
With the ever-growing complexity of deep learning models for face recognition, it becomes hard to deploy these systems in real life. Researchers have two options: 1) use smaller models; 2) compress their current models. Since the usage of smaller models might lead to concerning biases, compression gains relevance. However, compressing might also be responsible for an increase in the bias of the final model. We investigate the overall performance, the performance on each ethnicity subgroup, and the racial bias of a state-of-the-art quantization approach when used with synthetic and real data. This analysis provides further details on the potential benefits of performing quantization with synthetic data, for instance, the reduction of biases on the majority of test scenarios. We tested five distinct architectures and three different training datasets. The models were evaluated on a fourth dataset, which was collected to infer and compare the performance of face recognition models on different ethnicities.
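For readers unfamiliar with quantization, a minimal symmetric int8 scheme illustrates the compression step whose fairness effects the paper studies. This is a generic NumPy sketch; the per-tensor scheme and function names are illustrative assumptions, not the specific approach the paper evaluates.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a weight array.

    Returns the int8 codes and the scale needed to dequantize.
    """
    scale = np.abs(w).max() / 127.0  # map the largest |weight| to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

# Toy weight matrix standing in for one layer of a recognition model.
rng = np.random.default_rng(1)
w = rng.standard_normal((64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()  # rounding error, bounded by scale / 2
```

Each weight is stored in 8 bits instead of 32, at the cost of a small rounding error per weight; the paper's question is whether these small, uniform-looking errors degrade performance unevenly across ethnicity subgroups.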
