2023
Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonça, AM; Campilho, A;
Publication
2023 IEEE 36TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS, CBMS
Abstract
Chest radiography is one of the most ubiquitous medical imaging modalities. Nevertheless, the interpretation of chest radiography images is time-consuming, complex and subject to observer variability. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists. The advent of deep learning has fostered the development of solutions for abnormality detection with promising results. However, these tools suffer from poor explainability, as the reasons that led to a decision cannot be easily understood, representing a major hurdle for their adoption in clinical practice. In order to overcome this issue, a method for chest radiography abnormality detection is presented which relies on an object detection framework to detect individual findings and thus separate normal and abnormal CXRs. It is shown that this framework is capable of excellent performance in abnormality detection (AUC: 0.993), outperforming other state-of-the-art classification methodologies (AUC: 0.976 using the same classes). Furthermore, validation on external datasets shows that the proposed framework has a smaller drop in performance when applied to previously unseen data (21.9% vs 23.4% on average). Several approaches for object detection are compared and it is shown that merging pathology classes to minimize radiologist variability improves the localization of abnormal regions (0.529 vs 0.491 APF when using all pathology classes), resulting in a network which is more explainable and thus more suitable for integration in clinical practice.
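The core idea of the abstract above, using finding-level detections to separate normal and abnormal CXRs, can be sketched as follows. The aggregation rule (most confident finding as the image-level score) and the threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch: turning finding-level detections from an object detector into an
# image-level normal/abnormal decision. The aggregation (max confidence)
# and the 0.5 threshold are assumptions for illustration.

def image_level_score(detections):
    """detections: list of (finding_class, confidence) pairs from a detector.
    The image-level abnormality score is the most confident finding."""
    if not detections:
        return 0.0  # no findings detected -> treated as normal
    return max(conf for _, conf in detections)

def classify(detections, threshold=0.5):
    """Separate normal and abnormal CXRs by thresholding the image score."""
    return "abnormal" if image_level_score(detections) >= threshold else "normal"
```

A design advantage of this scheme is that the image-level label is directly traceable to the localized findings that produced it, which is what makes the framework more explainable than a whole-image classifier.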
2023
Authors
Rocha, J; Mendonça, AM; Pereira, SC; Campilho, A;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2023, Istanbul, Türkiye, December 5-8, 2023
Abstract
The integration of explanation techniques promotes the comprehension of a model's output and contributes to its interpretation, e.g., by generating heat maps highlighting the most decisive regions for that prediction. However, there are several drawbacks to the current heat map-generating methods. Probability by itself is not indicative of the model's conviction in a prediction, as it is influenced by multiple factors, such as class imbalance. Consequently, it is possible that a model yields two true positive predictions - one with an accurate explanation map, and the other with an inaccurate one. Current state-of-the-art explanations are not able to distinguish both scenarios and alert the user to dubious explanations. The goal of this work is to represent these maps more intuitively based on how confident the model is regarding the diagnosis, by adding an extra validation step over the state-of-the-art results that indicates whether the user should trust the initial explanation or not. The proposed method, Confident-CAM, facilitates the interpretation of the results by measuring the distance between the output probability and the corresponding class threshold, using a confidence score to generate nearly null maps when the initial explanations are most likely incorrect. This study implements and validates the proposed algorithm on a multi-label chest X-ray classification exercise, targeting 14 radiological findings in the ChestX-Ray14 dataset with significant class imbalance. Results indicate that confidence scores can distinguish likely accurate and inaccurate explanations. Code available via GitHub.
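The Confident-CAM mechanism described above can be sketched in a few lines: a confidence score derived from the distance between the output probability and the per-class threshold attenuates the explanation map. The exact score formula below is an assumption; only the qualitative behaviour (near-null maps when the prediction barely clears its threshold) follows the abstract.

```python
import numpy as np

# Sketch of the Confident-CAM idea: scale a class activation map by a
# confidence score based on how far the predicted probability sits above
# the class-specific decision threshold. The linear rescaling to [0, 1]
# is an illustrative assumption, not the published formula.

def confidence_score(prob, class_threshold):
    """Normalized distance of the probability from the class threshold."""
    if prob < class_threshold:
        return 0.0  # below threshold: negative prediction, no confidence
    return (prob - class_threshold) / (1.0 - class_threshold)

def confident_cam(cam, prob, class_threshold):
    """Attenuate an explanation map by the model's confidence, so that
    barely-over-threshold predictions yield nearly null maps."""
    return confidence_score(prob, class_threshold) * np.asarray(cam)
```

Note how per-class thresholds matter here: under heavy class imbalance, two classes can share the same output probability yet deserve very different confidence in their explanations.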
2024
Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonça, AM;
Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE
Abstract
Chest X-ray scans are frequently requested to detect the presence of abnormalities, due to their low cost and non-invasive nature. The interpretation of these images can be automated to prioritize more urgent exams through deep learning models, but the presence of image artifacts, e.g. lettering, often generates a harmful bias in the classifiers and an increase of false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise, in which an image is either normal or abnormal, using an attention-driven and spatially unsupervised Spatial Transformer Network (STERN) that takes advantage of a novel domain-specific loss to better frame the region of interest. Unlike the state of the art, in which this type of network is usually employed for image alignment, this work proposes a spatial transformer module that is used specifically for attention, as an alternative to the standard object detection models that typically precede the classifier to crop out the region of interest. In sum, the proposed end-to-end architecture dynamically scales and aligns the input images to maximize the classifier's performance, by selecting the thorax with translation and non-isotropic scaling transformations, and thus eliminating artifacts. Additionally, this paper provides an extensive and objective analysis of the selected regions of interest, by proposing a set of mathematical evaluation metrics. The results indicate that the STERN achieves similar results to using YOLO-cropped images, with reduced computational cost and without the need for localization labels. More specifically, the system is able to distinguish abnormal frontal images from the CheXpert dataset with a mean AUC of 85.67%, a 2.55% improvement vs. the 0.98% improvement achieved by the YOLO-based counterpart in comparison to a standard baseline classifier. At the same time, the STERN approach requires less than 2/3 of the training parameters, while increasing the inference time per batch by less than 2 ms. Code available via GitHub.
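The restricted transform described above (translation and non-isotropic scaling only, no rotation or shear) amounts to predicting a constrained 2x3 affine matrix for grid sampling. The sketch below illustrates that constraint; the parameter names and clamping range are assumptions, not the paper's values.

```python
import numpy as np

# Sketch of the spatial transformer's restricted affine transform:
# per-axis scales on the diagonal, a translation column, and zeroed
# off-diagonal terms so rotation/shear are disabled by construction.
# The scale clamping range is an illustrative assumption.

def build_theta(sx, sy, tx, ty, min_scale=0.5, max_scale=1.0):
    """2x3 affine matrix (as used by grid-sampling layers) that can only
    zoom non-isotropically and translate, i.e. select a thorax crop."""
    sx = float(np.clip(sx, min_scale, max_scale))
    sy = float(np.clip(sy, min_scale, max_scale))
    return np.array([[sx, 0.0, tx],
                     [0.0, sy, ty]])
```

Structurally forbidding rotation and shear is what keeps the module an attention/cropping mechanism rather than a general alignment network, and it shrinks the space the spatially unsupervised loss has to search.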
2024
Authors
Pereira, SC; Mendonça, AM; Campilho, A; Sousa, P; Lopes, CT;
Publication
ARTIFICIAL INTELLIGENCE IN MEDICINE
Abstract
Machine Learning models need large amounts of annotated data for training. In the field of medical imaging, labeled data is especially difficult to obtain because the annotations have to be performed by qualified physicians. Natural Language Processing (NLP) tools can be applied to radiology reports to extract labels for medical images automatically. Compared to manual labeling, this approach requires a smaller annotation effort and can therefore facilitate the creation of labeled medical image data sets. In this article, we summarize the literature on this topic spanning from 2013 to 2023, starting with a meta-analysis of the included articles, followed by a qualitative and quantitative systematization of the results. Overall, we found four types of studies on the extraction of labels from radiology reports: those describing systems based on symbolic NLP, statistical NLP, neural NLP, and those describing systems combining or comparing two or more of these approaches. Despite the large variety of existing approaches, there is still room for further improvement. This work can contribute to the development of new techniques or the improvement of existing ones.
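The first of the four families surveyed above, symbolic (rule-based) NLP, can be illustrated with a toy labeler: match finding keywords in a report and suppress matches that fall inside a negated context. The keyword and negation lists below are minimal assumptions; real systems in this family (e.g. NegEx-style pipelines) are far more elaborate.

```python
import re

# Toy symbolic labeler: regex keyword matching with naive sentence-scoped
# negation. Keyword patterns and negation triggers are illustrative only.

FINDINGS = {
    "cardiomegaly": r"cardiomegaly|enlarged heart",
    "effusion": r"pleural effusion|effusion",
}
NEGATIONS = r"\b(no|without|negative for)\b[^.]*"  # trigger up to sentence end

def extract_labels(report):
    """Return {finding: 0/1} labels extracted from a free-text report."""
    text = report.lower()
    negated_spans = [m.span() for m in re.finditer(NEGATIONS, text)]
    labels = {}
    for label, pattern in FINDINGS.items():
        hits = [m for m in re.finditer(pattern, text)
                if not any(s <= m.start() < e for s, e in negated_spans)]
        labels[label] = 1 if hits else 0
    return labels
```

Even this toy version shows why the survey's taxonomy matters: the rule-based approach is transparent and needs no training data, but every new finding or negation phrasing requires hand-written rules, which is exactly the gap statistical and neural systems try to close.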
2024
Authors
Miranda, M; Santos-Oliveira, J; Mendonça, AM; Sousa, V; Melo, T; Carneiro, A;
Publication
DIAGNOSTICS
Abstract
Artificial intelligence (AI) models have received considerable attention in recent years for their ability to identify optical coherence tomography (OCT) biomarkers with clinical diagnostic potential and predict disease progression. This study aims to externally validate a deep learning (DL) algorithm by comparing its segmentation of retinal layers and fluid with a gold-standard method for manually adjusting the automatic segmentation of the Heidelberg Spectralis HRA + OCT software Version 6.16.8.0. A total of sixty OCT images of healthy subjects and patients with intermediate and exudative age-related macular degeneration (AMD) were included. A quantitative analysis of the retinal thickness and fluid area was performed, and the discrepancy between these methods was investigated. The results showed a moderate-to-strong correlation between the metrics extracted by both software types, in all the groups, and an overall near-perfect area overlap was observed, except for in the inner segment ellipsoid (ISE) layer. The DL system detected a significant difference in the outer retinal thickness across disease stages and accurately identified fluid in exudative cases. In more diseased eyes, there was significantly more disagreement between these methods. This DL system appears to be a reliable method for assessing important OCT biomarkers in AMD. However, further accuracy testing should be conducted to confirm its validity in real-world settings to ultimately aid ophthalmologists in OCT imaging management and guide timely treatment approaches.
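The "area overlap" agreement reported above is typically quantified with a Dice coefficient between the DL mask and the manually adjusted reference mask. The snippet below is a generic illustration of that metric, not the study's evaluation code.

```python
import numpy as np

# Generic Dice overlap between two binary segmentation masks
# (e.g. a DL-predicted retinal layer/fluid mask vs. a manually
# adjusted reference). Illustrative, not the study's pipeline.

def dice(mask_a, mask_b, eps=1e-8):
    """Dice coefficient in [0, 1]; 1 means perfect overlap."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```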
2024
Authors
Pereira, SC; Rocha, J; Campilho, A; Mendonça, AM;
Publication
HELIYON
Abstract
Although the classification of chest radiographs has long been an extensively researched topic, interest increased significantly with the onset of the COVID-19 pandemic. Existing results are promising; however, the radiological similarities between COVID-19 and other types of respiratory diseases limit the success of conventional image classification approaches that focus on single instances. This study proposes a novel perspective that conceptualizes COVID-19 pneumonia as a deviation from a normative distribution of typical pneumonia patterns. Taking a population-based perspective, our method applies distributional anomaly detection. This method diverges from traditional instance-wise approaches by focusing on sets of scans instead of individual images. Using an autoencoder to extract feature representations, we present instance-based and distribution-based assessments of the separability between COVID-positive and COVID-negative pneumonia radiographs. The results demonstrate that the proposed distribution-based methodology outperforms conventional instance-based techniques in identifying radiographic changes associated with COVID-positive cases. This underscores its potential as an early warning system capable of detecting significant distributional shifts in radiographic data. By continuously monitoring these changes, this approach offers a mechanism for early identification of emerging health trends, potentially signaling the onset of new pandemics and enabling prompt public health responses.
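The instance-based vs. distribution-based contrast described above can be sketched over autoencoder features: instance-wise scoring rates each scan separately, while set-level scoring compares whole populations of scans. The distance between mean embeddings used below (a linear-kernel MMD estimate) is an illustrative choice; the paper's exact distributional test is not reproduced here.

```python
import numpy as np

# Sketch of instance-based vs. distribution-based anomaly scoring over
# autoencoder feature vectors (one row per scan). The set-level score
# (distance between mean embeddings) is an illustrative assumption.

def instance_scores(features, reference_mean):
    """Per-image anomaly scores: distance of each scan's feature vector
    to the mean of a normative reference population."""
    return np.linalg.norm(features - reference_mean, axis=1)

def distribution_score(set_a, set_b):
    """Set-level shift score between two populations of scans: distance
    between their mean embeddings (a linear-kernel MMD estimate)."""
    return float(np.linalg.norm(set_a.mean(axis=0) - set_b.mean(axis=0)))
```

Averaging over a set suppresses per-image noise, which is why a subtle but systematic shift, too small to flag any single radiograph, can still produce a clearly elevated set-level score; this is the monitoring behaviour the abstract proposes for early-warning use.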