
Publications by Ana Maria Mendonça

2024

Distribution-based detection of radiographic changes in pneumonia patterns: A COVID-19 case study

Authors
Pereira, SC; Rocha, J; Campilho, A; Mendonça, AM;

Publication
HELIYON

Abstract
Although the classification of chest radiographs has long been an extensively researched topic, interest increased significantly with the onset of the COVID-19 pandemic. Existing results are promising; however, the radiological similarities between COVID-19 and other types of respiratory diseases limit the success of conventional image classification approaches that focus on single instances. This study proposes a novel perspective that conceptualizes COVID-19 pneumonia as a deviation from a normative distribution of typical pneumonia patterns. Adopting a population-based perspective, our approach utilizes distributional anomaly detection. This method diverges from traditional instance-wise approaches by focusing on sets of scans instead of individual images. Using an autoencoder to extract feature representations, we present instance-based and distribution-based assessments of the separability between COVID-positive and COVID-negative pneumonia radiographs. The results demonstrate that the proposed distribution-based methodology outperforms conventional instance-based techniques in identifying radiographic changes associated with COVID-positive cases. This underscores its potential as an early warning system capable of detecting significant distributional shifts in radiographic data. By continuously monitoring these changes, this approach offers a mechanism for early identification of emerging health trends, potentially signaling the onset of new pandemics and enabling prompt public health responses.
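The abstract contrasts instance-wise scoring with scoring a whole set of scans. As a rough illustration of that difference (not the paper's actual pipeline: real features would come from the trained autoencoder, and the synthetic data, dimensionality, and Mahalanobis scoring here are illustrative assumptions), a per-image shift too small to flag instance by instance becomes obvious at the set level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for autoencoder feature vectors (illustrative only)
d = 8
normal = rng.normal(0.0, 1.0, size=(500, d))   # normative pneumonia features
shifted = rng.normal(0.3, 1.0, size=(100, d))  # subtly shifted distribution

# Fit a Gaussian to the normative features
mu = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False) + 1e-6 * np.eye(d)
cov_inv = np.linalg.inv(cov)

def instance_score(x):
    """Instance-based: Mahalanobis distance of one feature vector."""
    diff = x - mu
    return float(np.sqrt(diff @ cov_inv @ diff))

def distribution_score(batch):
    """Distribution-based: Mahalanobis distance of the *set mean*.
    The mean of n samples has covariance cov/n, so a small per-image
    shift is amplified by a factor of sqrt(n)."""
    n = len(batch)
    diff = batch.mean(axis=0) - mu
    return float(np.sqrt(n * (diff @ cov_inv @ diff)))
```

With these toy features, individual shifted images score close to normal ones, while the set-level score of the shifted batch stands far apart, which is the intuition behind monitoring populations of scans for emerging distributional shifts.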

2023

Addressing Chest Radiograph Projection Bias in Deep Classification Models

Authors
Pereira, SC; Rocha, J; Gaudio, A; Smailagic, A; Campilho, A; Mendonça, AM;

Publication
MEDICAL IMAGING WITH DEEP LEARNING, VOL 227

Abstract
Deep learning-based models are widely used for disease classification in chest radiographs. This exam can be performed in one of two projections (posteroanterior or anteroposterior), depending on the direction that the X-ray beam travels through the body. Since projection visibly affects the way anatomical structures appear in the scans, it may introduce bias in classifiers, especially when spurious correlations between a given disease and a projection occur. This paper examines the influence of chest radiograph projection on the performance of deep learning-based classification models and proposes an approach to mitigate projection-induced bias. Results show that a DenseNet-121 model is better at classifying images from the most representative projection in the data set, suggesting that projection is taken into account by the classifier. Moreover, this model can classify chest X-ray projection better than any of the fourteen radiological findings considered, without being explicitly trained for that task, putting it at high risk for projection bias. We propose a label-conditional gradient reversal framework to make the model insensitive to projection, by forcing the extracted features to be simultaneously good for disease classification and bad for projection classification, resulting in a framework with reduced projection-induced bias.
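The core mechanism described, gradient reversal, negates the gradient flowing from the projection head back into the shared feature extractor, so that descent simultaneously improves disease classification and degrades projection classification. A minimal sketch of that combination step (the reversal strength and function name are hypothetical, and the paper's label-conditional variant additionally conditions the reversal on the disease label, which is not shown here):

```python
import numpy as np

LAMBDA = 1.0  # reversal strength (hypothetical value; tuned in practice)

def shared_backward(grad_disease, grad_projection, lam=LAMBDA):
    """Combine the two heads' gradients at the shared feature extractor.

    The disease gradient passes through unchanged; the projection gradient
    is negated and scaled, so a gradient-descent step makes the shared
    features better for disease classification and worse for projection
    classification, pushing them toward projection invariance."""
    return grad_disease - lam * grad_projection
```

For example, with `lam=2.0`, a disease gradient of `[1.0, 2.0]` and a projection gradient of `[0.5, -1.0]` combine to `[0.0, 4.0]`: the component useful only for predicting projection is cancelled out.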

2024

DeepClean - Contrastive Learning Towards Quality Assessment in Large-Scale CXR Data Sets

Authors
Pereira, SC; Pedrosa, J; Rocha, J; Sousa, P; Campilho, A; Mendonça, AM;

Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024

Abstract
Large-scale datasets are essential for training deep learning models in medical imaging. However, many of these datasets contain poor-quality images that can compromise model performance and clinical reliability. In this study, we propose a framework to detect non-compliant images, such as corrupted scans, incomplete thorax X-rays, and images of non-thoracic body parts, by leveraging contrastive learning for feature extraction and parametric or non-parametric scoring methods for out-of-distribution ranking. Our approach was developed and tested on the CheXpert dataset, achieving an AUC of 0.75 on a manually labeled subset of 1,000 images, and further qualitatively and visually validated on the external PadChest dataset, where it also performed effectively. Our results demonstrate the potential of contrastive learning to detect non-compliant images in large-scale medical datasets, laying the foundation for future work on reducing dataset pollution and improving the robustness of deep learning models in clinical practice. © 2024 IEEE.
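As a sketch of the non-parametric scoring the abstract mentions (assuming, hypothetically, k-th-nearest-neighbor distance in the embedding space; the real embeddings would come from the contrastive encoder, and the synthetic vectors below only stand in for them):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for contrastive embeddings: compliant CXRs cluster together,
# non-compliant images (corrupted scans, wrong body parts) fall outside.
train_feats = rng.normal(0, 1, size=(300, 16))  # compliant reference set
candidate_ok = rng.normal(0, 1, size=(16,))
candidate_bad = rng.normal(5, 1, size=(16,))    # e.g. a corrupted scan

def knn_ood_score(x, reference, k=10):
    """Non-parametric OOD score: distance to the k-th nearest reference
    embedding. Larger score = more likely non-compliant; ranking all
    images by this score surfaces candidates for removal."""
    dists = np.linalg.norm(reference - x, axis=1)
    return float(np.sort(dists)[k - 1])
```

Ranking an entire dataset by this score and reviewing the top of the list is one plausible way to operationalize the "out-of-distribution ranking" step; a parametric alternative would fit a Gaussian to the reference embeddings and rank by Mahalanobis distance.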

2024

Evaluating Visual Explainability in Chest X-Ray Pathology Detection

Authors
Pereira, P; Rocha, J; Pedrosa, J; Mendonça, AM;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
Chest X-Ray (CXR) plays a vital role in diagnosing lung and heart conditions, but the high demand for CXR examinations poses challenges for radiologists. Automatic support systems can ease this burden by assisting radiologists in the image analysis process. While Deep Learning models have shown promise in this task, concerns persist regarding their complexity and decision-making opacity. To address this, various visual explanation techniques have been developed to elucidate the model reasoning, some of which, such as Grad-CAM, have received significant attention in the literature and are widely used. However, it is unclear how different explanation methods perform and how to quantitatively measure their performance, as well as how that performance may depend on the model architecture used and the dataset characteristics. In this work, two widely used deep classification networks - DenseNet121 and ResNet50 - are trained for multi-pathology classification on CXR and visual explanations are then generated using Grad-CAM, Grad-CAM++, EigenGrad-CAM, Saliency maps, LRP and DeepLift. These explanation methods are then compared with radiologist annotations using previously proposed explainability evaluation metrics - intersection over union and hit rate. Furthermore, a novel method to convey visual explanations in the form of radiological written reports is proposed, allowing for a clinically-oriented explainability evaluation metric - zones score. It is shown that Grad-CAM++ and Saliency methods offer the most accurate explanations and that the effectiveness of visual explanations is found to vary based on the model and corresponding input size. Additionally, the explainability performance across different CXR datasets is evaluated, highlighting that the explanation quality depends on the dataset's characteristics and annotations.
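The two previously proposed metrics the abstract names, intersection over union and hit rate, can be sketched on binary masks (one common formulation, assuming a binarized explanation map and a radiologist annotation mask of the same shape; the paper's exact variants may differ):

```python
import numpy as np

def iou(expl_mask, annot_mask):
    """Intersection over union between a binarized explanation map
    and a radiologist annotation mask."""
    inter = np.logical_and(expl_mask, annot_mask).sum()
    union = np.logical_or(expl_mask, annot_mask).sum()
    return inter / union if union else 0.0

def hit_rate(expl_map, annot_mask):
    """A 'hit' occurs when the explanation's most salient pixel falls
    inside the annotated region (one common definition)."""
    peak = np.unravel_index(np.argmax(expl_map), expl_map.shape)
    return bool(annot_mask[peak])
```

For example, a 2×2 explanation patch overlapping a 2×2 annotation in one pixel gives IoU 1/7, and the hit rate over a test set is simply the fraction of images whose saliency peak lands inside the annotation.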

2024

Anatomically-Guided Inpainting for Local Synthesis of Normal Chest Radiographs

Authors
Pedrosa, J; Pereira, SC; Silva, J; Mendonça, AM; Campilho, A;

Publication
Deep Generative Models - 4th MICCAI Workshop, DGM4MICCAI 2024, Held in Conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, Proceedings

Abstract
Chest radiography (CXR) is one of the most used medical imaging modalities. Nevertheless, the interpretation of CXR images is time-consuming and subject to variability. As such, automated systems for pathology detection have been proposed and promising results have been obtained, particularly using deep learning. However, these tools suffer from poor explainability, which represents a major hurdle for their adoption in clinical practice. One proposed explainability method in CXR is through contrastive examples, i.e. by showing an alternative version of the CXR except without the lesion being investigated. While image-level normal/healthy image synthesis has been explored in literature, normal patch synthesis via inpainting has received little attention. In this work, a method to synthesize contrastive examples in CXR based on local synthesis of normal CXR patches is proposed. Based on a contextual attention inpainting network (CAttNet), an anatomically-guided inpainting network (AnaCAttNet) is proposed that leverages anatomical information of the original CXR through segmentation to guide the inpainting for a more realistic reconstruction. A quantitative evaluation of the inpainting is performed, showing that AnaCAttNet outperforms CAttNet (FID of 0.0125 and 0.0132 respectively). Qualitative evaluation by three readers also showed that AnaCAttNet delivers superior reconstruction quality and anatomical realism. In conclusion, the proposed anatomical segmentation module for inpainting is shown to improve inpainting performance. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
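One plausible way to realize the anatomical guidance described, assembling the inpainting network's input from the masked CXR, the inpaint mask, and an anatomy segmentation channel, is sketched below. This channel-concatenation scheme is an assumption for illustration; the actual AnaCAttNet conditioning is detailed in the paper.

```python
import numpy as np

def build_inpainting_input(cxr, inpaint_mask, anatomy_seg):
    """Assemble a multi-channel input for anatomically-guided inpainting:
    the CXR with the target region zeroed out, the binary inpaint mask,
    and a lung/anatomy segmentation map as guidance (hypothetical
    conditioning scheme, shapes all (H, W))."""
    masked = cxr * (1 - inpaint_mask)          # erase the region to synthesize
    return np.stack([masked, inpaint_mask, anatomy_seg], axis=0)  # (3, H, W)
```

Feeding the segmentation as an extra channel gives the generator explicit information about where lung borders and other structures lie, which is one way a model could produce the more anatomically realistic reconstructions the evaluation reports.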

