2023
Authors
Matos, J; Struja, T; Gallifant, J; Nakayama, LF; Charpignon, M; Liu, X; Economou-Zavlanos, N; Cardoso, JS; Johnson, KS; Bhavsar, N; Gichoya, JW; Celi, LA; Wong, AI;
Publication
Abstract
2023
Authors
Barbero-Gómez, J; Cruz, R; Cardoso, JS; Gutiérrez, PA; Hervás-Martínez, C;
Publication
ADVANCES IN COMPUTATIONAL INTELLIGENCE, IWANN 2023, PT II
Abstract
This paper introduces an evaluation procedure to validate the efficacy of explanation methods for Convolutional Neural Network (CNN) models in ordinal regression tasks. Two ordinal methods are contrasted against a cross-entropy baseline across four datasets. A statistical analysis demonstrates that attribution methods, such as Grad-CAM and IBA, perform significantly better with ordinal regression CNN models than with the baseline on most ordinal and nominal metrics. The study suggests that incorporating ordinal information into the attribution map construction process may improve the explanations further.
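The attribution maps the abstract refers to can be illustrated with the classic CAM special case of Grad-CAM: for a network ending in global average pooling followed by a linear head, the Grad-CAM channel weights reduce to that head's class weights. A minimal NumPy sketch (shapes and values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
fmap = rng.standard_normal((16, 8, 8))   # feature maps from the last conv layer (C, H, W)
W = rng.standard_normal((4, 16))         # linear head applied after global average pooling

def class_activation_map(fmap, W, target_class):
    # For a GAP + linear head, each channel's Grad-CAM weight equals its
    # class weight in the linear layer (the original CAM formulation).
    weights = W[target_class]                  # (C,)
    cam = np.einsum('c,chw->hw', weights, fmap)
    return np.maximum(cam, 0)                  # keep only positive evidence

cam = class_activation_map(fmap, W, target_class=2)
print(cam.shape)  # (8, 8)
```

Full Grad-CAM generalizes this by replacing the class weights with spatially averaged gradients of the target logit, which is what allows it to attend to arbitrary intermediate layers.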
2023
Authors
Neto, PC; Caldeira, E; Cardoso, JS; Sequeira, AF;
Publication
International Conference of the Biometrics Special Interest Group, BIOSIG 2023, Darmstadt, Germany, September 20-22, 2023
Abstract
2023
Authors
Torto, IR; Patrício, C; Montenegro, H; Gonçalves, T; Cardoso, JS;
Publication
Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2023), Thessaloniki, Greece, September 18th to 21st, 2023.
Abstract
This paper presents the main contributions of the VCMI Team to the ImageCLEFmedical Caption 2023 task. We addressed both the concept detection and caption prediction tasks. Regarding concept detection, our team employed different approaches to assign concepts to medical images: multi-label classification, adversarial training, autoregressive modelling, image retrieval, and concept retrieval. We also developed three model ensembles merging the results of some of the proposed methods. Our best submission obtained an F1-score of 0.4998, ranking 3rd among nine teams. Regarding the caption prediction task, our team explored two main approaches based on image retrieval and language generation. The language generation approaches, based on a vision model as the encoder and a language model as the decoder, yielded the best results, allowing us to rank 5th among thirteen teams, with a BERTScore of 0.6147.
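The F1-score reported for concept detection is computed over predicted versus ground-truth concept sets per image. A minimal sketch of a samples-averaged set F1, as commonly used for this kind of multi-label task (the official evaluation script may differ in edge-case handling; concept IDs below are made up):

```python
def f1_per_image(pred, gold):
    """Set-based F1 between predicted and ground-truth concepts for one image."""
    pred, gold = set(pred), set(gold)
    if not pred and not gold:
        return 1.0          # both empty: perfect agreement
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def mean_f1(predictions, references):
    """Average the per-image F1 over the dataset (samples-averaged F1)."""
    scores = [f1_per_image(p, g) for p, g in zip(predictions, references)]
    return sum(scores) / len(scores)

preds = [["C001", "C002"], ["C003"]]
golds = [["C001"], ["C003", "C004"]]
print(round(mean_f1(preds, golds), 4))  # 0.6667
```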
2023
Authors
Vidal, PL; Moura, Jd; Novo, J; Ortega, M; Cardoso, JS;
Publication
IEEE International Conference on Acoustics, Speech and Signal Processing ICASSP 2023, Rhodes Island, Greece, June 4-10, 2023
Abstract
Optical Coherence Tomography (OCT) is the major diagnostic tool for the leading cause of blindness in developed countries: Diabetic Macular Edema (DME). Depending on the type of fluid accumulations, different treatments are needed. In particular, Cystoid Macular Edemas (CMEs) represent the most severe scenario, while Diffuse Retinal Thickening (DRT) is an early indicator of the disease but a challenging scenario to detect. While methodologies exist, their explanatory power is limited to the input sample itself. However, due to the complexity of these accumulations, this may not be enough for a clinician to assess the validity of the classification. Thus, in this work, we propose a novel approach based on multi-prototype networks with vision transformers to obtain an example-based explainable classification. Our proposal achieved robust results in two representative OCT devices, with a mean accuracy of 0.9099 ± 0.0083 and 0.8582 ± 0.0126 for CME and DRT-type fluid accumulations, respectively.
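The example-based explanation idea behind multi-prototype networks can be sketched at inference time: the input's embedding is compared against a set of learned per-class prototypes, and the nearest prototype both decides the class and serves as the explanatory example. A minimal NumPy sketch with hypothetical shapes (2 classes, 3 prototypes each, embedding dimension 5; none of these values come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical learned prototype embeddings: (classes, prototypes, dim).
prototypes = rng.standard_normal((2, 3, 5))

def classify_by_prototypes(embedding, prototypes):
    """Assign the class of the nearest prototype; the matched prototype
    doubles as an example-based explanation of the decision."""
    dists = np.linalg.norm(prototypes - embedding, axis=-1)  # (classes, prototypes)
    cls, proto = np.unravel_index(dists.argmin(), dists.shape)
    return int(cls), int(proto)

# An embedding lying very close to prototype 2 of class 1.
emb = prototypes[1, 2] + 0.01 * rng.standard_normal(5)
cls, proto = classify_by_prototypes(emb, prototypes)
print(cls, proto)  # 1 2
```

In the paper's setting the embeddings would come from a vision-transformer backbone and the prototypes would be learned jointly with it; the lookup above only illustrates the decision rule.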
2023
Authors
Nakayama, LF; Matos, J; Quion, J; Novaes, F; Mitchell, WG; Mwavu, R; Ji Hung, JY; dy Santiago, AP; Phanphruk, W; Cardoso, JS; Celi, LA;
Publication
CoRR
Abstract