Publications

Publications by Ana Maria Mendonça

2023

Lesion-Aware Chest Radiography Abnormality Classification with Object Detection Framework

Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonça, AM; Campilho, A;

Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS

Abstract
Chest radiography is one of the most ubiquitous medical imaging modalities. Nevertheless, the interpretation of chest radiography images is time-consuming, complex and subject to observer variability. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists. The advent of deep learning has fostered the development of solutions for abnormality detection with promising results. However, these tools suffer from poor explainability as the reasons that led to a decision cannot be easily understood, representing a major hurdle for their adoption in clinical practice. In order to overcome this issue, a method for chest radiography abnormality detection is presented which relies on an object detection framework to detect individual findings and thus separate normal and abnormal CXRs. It is shown that this framework is capable of excellent performance in abnormality detection (AUC: 0.993), outperforming other state-of-the-art classification methodologies (AUC: 0.976 using the same classes). Furthermore, validation on external datasets shows that the proposed framework has a smaller drop in performance when applied to previously unseen data (21.9% vs 23.4% on average). Several approaches for object detection are compared and it is shown that merging pathology classes to minimize radiologist variability improves the localization of abnormal regions (0.529 vs 0.491 APF when using all pathology classes), resulting in a network which is more explainable and thus more suitable for integration in clinical practice.
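The abstract describes deriving an image-level normal/abnormal decision from per-finding detections. The sketch below illustrates one way this aggregation could look; it is not the authors' code, and the detector choice (torchvision Faster R-CNN) and the max-score aggregation rule are assumptions for illustration only.

```python
# Hypothetical sketch: scoring a chest X-ray as normal/abnormal from the
# outputs of a per-finding object detector (assumed detector and aggregation).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_CLASSES = 5  # assumed number of (possibly merged) pathology classes + background

detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=NUM_CLASSES)
detector.eval()

def abnormality_score(image: torch.Tensor) -> float:
    """Image-level abnormality score from per-finding detections.

    image: float tensor of shape (3, H, W) with values in [0, 1].
    The score is the confidence of the strongest detected finding;
    an image with no detections is scored 0.0 (normal).
    """
    with torch.no_grad():
        prediction = detector([image])[0]  # dict with 'boxes', 'labels', 'scores'
    scores = prediction["scores"]
    return float(scores.max()) if scores.numel() > 0 else 0.0

# Example: a random grayscale image replicated to 3 channels.
dummy = torch.rand(1, 512, 512).repeat(3, 1, 1)
print(abnormality_score(dummy))
```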

2023

Confident-CAM: Improving Heat Map Interpretation in Chest X-Ray Image Classification

Authors
Rocha, J; Mendonça, AM; Pereira, SC; Campilho, A;

Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Turkiye, December 5-8, 2023

Abstract
The integration of explanation techniques promotes the comprehension of a model's output and contributes to its interpretation, e.g. by generating heat maps highlighting the most decisive regions for that prediction. However, there are several drawbacks to the current heat map-generating methods. Probability by itself is not indicative of the model's conviction in a prediction, as it is influenced by multiple factors, such as class imbalance. Consequently, it is possible that a model yields two true positive predictions - one with an accurate explanation map, and the other with an inaccurate one. Current state-of-the-art explanations are not able to distinguish both scenarios and alert the user to dubious explanations. The goal of this work is to represent these maps more intuitively based on how confident the model is regarding the diagnosis, by adding an extra validation step over the state-of-the-art results that indicates whether the user should trust the initial explanation or not. The proposed method, Confident-CAM, facilitates the interpretation of the results by measuring the distance between the output probability and the corresponding class threshold, using a confidence score to generate nearly null maps when the initial explanations are most likely incorrect. This study implements and validates the proposed algorithm on a multi-label chest X-ray classification exercise, targeting 14 radiological findings in the ChestX-Ray14 dataset with significant class imbalance. Results indicate that confidence scores can distinguish likely accurate and inaccurate explanations. Code available via GitHub.
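The confidence-weighting idea, scaling a class activation map by how far the predicted probability lies above the class's operating threshold, can be sketched as follows. This is not the authors' implementation; the exact scoring function is an assumption for illustration.

```python
# Hypothetical sketch of confidence-weighted heat maps (Confident-CAM idea):
# low-confidence predictions yield nearly null maps.
import numpy as np

def confidence_score(probability: float, class_threshold: float) -> float:
    """Distance of the output probability above the class threshold, mapped to [0, 1]."""
    if probability <= class_threshold:
        return 0.0
    return (probability - class_threshold) / (1.0 - class_threshold)

def confident_cam(cam: np.ndarray, probability: float, class_threshold: float) -> np.ndarray:
    """Attenuate a heat map according to the model's confidence in the prediction."""
    return cam * confidence_score(probability, class_threshold)

# Example: a prediction barely above a high per-class threshold produces a weak map.
cam = np.random.rand(224, 224)
weak_map = confident_cam(cam, probability=0.62, class_threshold=0.60)    # ~5% of original intensity
strong_map = confident_cam(cam, probability=0.95, class_threshold=0.60)  # ~87% of original intensity
print(weak_map.max(), strong_map.max())
```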

2024

STERN: Attention-driven Spatial Transformer Network for abnormality detection in chest X-ray images

Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low-cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase in false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN), that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that the STERN achieves similar results to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset, with a mean AUC of 85.67%, a 2.55% improvement vs. the 0.98% improvement achieved by the YOLO-based counterpart in comparison to a standard baseline classifier. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
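A spatial transformer restricted to translation and non-isotropic scaling, as described above, can be sketched with a standard affine-grid sampler. This is not the published STERN architecture; the localization network below is a placeholder for illustration.

```python
# Minimal sketch of a spatial-transformer attention module limited to
# translation and non-isotropic scaling (no rotation or shear).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleTranslateSTN(nn.Module):
    def __init__(self):
        super().__init__()
        # Tiny localization network predicting (sx, sy, tx, ty).
        self.localization = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 4),
        )
        # Initialize to the identity transform: sx = sy = 1, tx = ty = 0.
        self.localization[-1].weight.data.zero_()
        self.localization[-1].bias.data.copy_(torch.tensor([1.0, 1.0, 0.0, 0.0]))

    def forward(self, x):
        sx, sy, tx, ty = self.localization(x).unbind(dim=1)
        zeros = torch.zeros_like(sx)
        # 2x3 affine matrix with scaling and translation only.
        theta = torch.stack([
            torch.stack([sx, zeros, tx], dim=1),
            torch.stack([zeros, sy, ty], dim=1),
        ], dim=1)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Example: re-frame a batch of 1-channel chest X-rays before classification.
stn = ScaleTranslateSTN()
out = stn(torch.rand(2, 1, 224, 224))
print(out.shape)  # torch.Size([2, 1, 224, 224])
```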

2024

Automated image label extraction from radiology reports - A review

Authors
Pereira, SC; Mendonca, AM; Campilho, A; Sousa, P; Lopes, CT;

Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE

Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires smaller annotation efforts and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those describing systems combining or comparing two or more of the latter. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
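Of the families reviewed, symbolic (rule-based) NLP is the simplest to illustrate: labels are assigned from keyword matches with a negation check. The sketch below is only illustrative; the keywords and the negation window are assumptions, and the systems surveyed in the article are considerably richer.

```python
# Illustrative rule-based (symbolic NLP) label extraction from a radiology report.
import re

FINDING_KEYWORDS = {
    "Cardiomegaly": ["cardiomegaly", "enlarged heart"],
    "Pleural Effusion": ["pleural effusion", "effusion"],
    "Pneumothorax": ["pneumothorax"],
}
# Negation cue occurring shortly before the keyword, within the same sentence.
NEGATIONS = re.compile(r"\b(no|without|negative for|free of)\b[^.]{0,40}$", re.IGNORECASE)

def extract_labels(report: str) -> dict:
    """Return {finding: True/False} labels extracted from a free-text report."""
    text = report.lower()
    labels = {}
    for finding, keywords in FINDING_KEYWORDS.items():
        present = False
        for kw in keywords:
            for match in re.finditer(re.escape(kw), text):
                preceding = text[:match.start()]
                present = present or not NEGATIONS.search(preceding)
        labels[finding] = present
    return labels

print(extract_labels("Stable cardiomegaly. No pleural effusion or pneumothorax."))
# {'Cardiomegaly': True, 'Pleural Effusion': False, 'Pneumothorax': False}
```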
