Publications

Publications by CTM

2024

ON THE SUITABILITY OF B-COS NETWORKS FOR THE MEDICAL DOMAIN

Authors
Rio-Torto, I; Gonçalves, T; Cardoso, JS; Teixeira, LF;

Publication
IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING, ISBI 2024

Abstract
In fields that rely on high-stakes decisions, such as medicine, interpretability plays a key role in promoting trust and facilitating the adoption of deep learning models by the clinical community. In the medical image analysis domain, gradient-based class activation maps are the most widely used explanation methods, yet the field lacks a more in-depth investigation into inherently interpretable models that focus on integrating knowledge ensuring the model learns the correct rules. B-cos networks, a new approach that increases the interpretability of deep neural networks by inducing weight-input alignment during training, showed promising results on natural image classification. In this work, we study the suitability of B-cos networks for the medical domain by testing them on different use cases (skin lesions, diabetic retinopathy, cervical cytology, and chest X-rays) and conducting a thorough evaluation across several explanation quality assessment metrics. We find that, just as in natural image classification, B-cos explanations yield more localised maps, but it is not clear that they are better than other methods' explanations when more explanation properties are considered.

2024

Weather and Meteorological Optical Range Classification for Autonomous Driving

Authors
Pereira, C; Cruz, RPM; Fernandes, JND; Pinto, JR; Cardoso, JS;

Publication
IEEE Transactions on Intelligent Vehicles

Abstract

2024

MST-KD: Multiple Specialized Teachers Knowledge Distillation for Fair Face Recognition

Authors
Caldeira, E; Cardoso, JS; Sequeira, AF; Neto, PC;

Publication
CoRR

Abstract

2024

Evaluating the Impact of Pulse Oximetry Bias in Machine Learning Under Counterfactual Thinking

Authors
Martins, I; Matos, J; Gonçalves, T; Celi, LA; Ian Wong, AK; Cardoso, JS;

Publication
Applications of Medical Artificial Intelligence - Third International Workshop, AMAI 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 6, 2024, Proceedings

Abstract
Algorithmic bias in healthcare mirrors existing data biases. However, the factors driving unfairness are not always known. Medical devices capture significant amounts of data but are prone to errors; for instance, pulse oximeters overestimate the arterial oxygen saturation of darker-skinned individuals, leading to worse outcomes. The impact of this bias in machine learning (ML) models remains unclear. This study addresses the technical challenges of quantifying the impact of medical device bias in downstream ML. Our experiments compare a “perfect world”, without pulse oximetry bias, using SaO2 (blood-gas), to the “actual world”, with biased measurements, using SpO2 (pulse oximetry). Under this counterfactual design, two models are trained with identical data, features, and settings, except for the method of measuring oxygen saturation: models using SaO2 are a “control” and models using SpO2 a “treatment”. The blood-gas oximetry linked dataset was a suitable test-bed, containing 163,396 nearly-simultaneous SpO2 - SaO2 paired measurements, aligned with a wide array of clinical features and outcomes. We studied three classification tasks: in-hospital mortality, respiratory SOFA score in the next 24 h, and SOFA score increase by two points. Models using SaO2 instead of SpO2 generally showed better performance. Patients with overestimation of O2 by pulse oximetry of ≥ 3% had significant decreases in mortality prediction recall, from 0.63 to 0.59, P < 0.001. This mirrors clinical processes where biased pulse oximetry readings provide clinicians with false reassurance of patients’ oxygen levels. A similar degradation happened in ML models, with pulse oximetry biases leading to more false negatives in predicting adverse outcomes.

2024

Learning Ordinality in Semantic Segmentation

Authors
Cristino, R; Cruz, RPM; Cardoso, JS;

Publication
CoRR

Abstract

2024

Deep Learning-based Prediction of Breast Cancer Tumor and Immune Phenotypes from Histopathology

Authors
Gonçalves, T; Arias, DP; Willett, J; Hoebel, KV; Cleveland, MC; Ahmed, SR; Gerstner, ER; Cramer, JK; Cardoso, JS; Bridge, CP; Kim, AE;

Publication
CoRR

Abstract
