2021
Authors
Sousa, MQE; Pedrosa, J; Rocha, J; Pereira, SC; Mendonça, AM; Campilho, A;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021, Houston, TX, USA, December 9-12, 2021
Abstract
Chest radiography is one of the most ubiquitous imaging modalities, playing an essential role in screening, diagnosis and disease management. However, chest radiography interpretation is a time-consuming and complex task, requiring the availability of experienced radiologists. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists and the variability in image interpretation. While promising results have been obtained, particularly since the advent of deep learning, there are significant limitations in the developed solutions, namely the lack of representative data for less frequent pathologies and the learning of biases from the training data, such as patient position, medical devices and other markers acting as proxies for certain pathologies. The lack of explainability is also a challenge for the adoption of these solutions in clinical practice. Generative adversarial networks could play a significant role in addressing these challenges, as they make it possible to artificially create new realistic images. In this way, synthetic chest radiography images could be used to increase the prevalence of underrepresented pathology classes and decrease model biases, as well as to improve the explainability of automatic decisions by generating samples that serve as examples or counter-examples to the image being analysed, all while ensuring patient privacy. In this study, a few-shot generative adversarial network is used to generate synthetic chest radiography images. A minimum Fréchet Inception Distance score of 17.83 was obtained, indicating that convincing synthetic images could be generated. Perceptual validation was then performed by asking multiple readers to classify a mixed set of synthetic and real images. An average accuracy of 83.5% was obtained, but a strong dependency on reader experience level was observed.
While synthetic images showed structural irregularities, the overall image sharpness was a major factor in the readers' decisions. The synthetic images were then validated using a MobileNet abnormality classifier, and it was shown that over 99% of images were classified correctly, indicating that the generated images were correctly interpreted by the classifier. Finally, the use of the synthetic images during training of a YOLOv5 pathology detector showed that their addition led to an improvement in mean average precision of 0.05 across 14 pathologies. In conclusion, the usage of few-shot generative adversarial networks for chest radiography image generation was demonstrated and tested in multiple scenarios, establishing a baseline for future experiments to increase the applicability of generative models in clinical scenarios of automatic CXR screening and diagnosis tools.
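The Fréchet Inception Distance reported above compares the Gaussian statistics of real and synthetic image embeddings. As a rough illustration only (not the paper's implementation), a simplified FID that assumes diagonal covariance matrices can be computed as:

```python
import math

def fid_diagonal(mu_real, var_real, mu_synth, var_synth):
    """Simplified Frechet Inception Distance assuming diagonal covariances.

    mu_*/var_*: per-dimension mean and variance of embeddings for real
    and synthetic images. With diagonal covariances, the trace term of
    the full FID reduces to a per-dimension sum; lower is better.
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_real, mu_synth))
    trace_term = sum(vr + vs - 2.0 * math.sqrt(vr * vs)
                     for vr, vs in zip(var_real, var_synth))
    return mean_term + trace_term

# Identical embedding distributions give a distance of zero.
print(fid_diagonal([0.0, 1.0], [1.0, 2.0], [0.0, 1.0], [1.0, 2.0]))  # 0.0
```

In practice the full FID uses the matrix square root of the covariance product and Inception-v3 embeddings; this sketch only conveys the structure of the metric.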
2021
Authors
Carvalho, R; Pedrosa, J; Nedelcu, T;
Publication
ADVANCES IN VISUAL COMPUTING (ISVC 2021), PT I
Abstract
Skin cancer is one of the most common types of cancer and, with its increasing incidence, accurate early diagnosis is crucial to improve patient prognosis. During visual inspection, dermatologists follow specific dermoscopic algorithms and identify important features to provide a diagnosis. This process can be automated, as such characteristics can be extracted by computer vision techniques. Although deep neural networks can extract useful features from digital images for skin lesion classification, performance can be improved by providing additional information. The extracted pseudo-features can be used as input (multimodal) or output (multi-tasking) to train a robust deep learning model. This work investigates multimodal and multi-tasking techniques for more efficient training, given the joint optimization of several related tasks in the latter, and for generating better diagnosis predictions. Additionally, the role of lesion segmentation is also studied. Results show that multi-tasking improves the learning of beneficial features, leading to better predictions, and that pseudo-features inspired by the ABCD rule provide readily available, helpful information about the skin lesion.
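To make the idea of ABCD-inspired pseudo-features concrete, here is a hypothetical simplification (not the paper's exact features): an asymmetry score derived from a binary lesion mask by comparing it to its horizontal mirror image.

```python
def asymmetry_score(mask):
    """Asymmetry pseudo-feature for a binary lesion mask (list of rows).

    Mirrors the mask horizontally and returns 1 - IoU(mask, mirror):
    0.0 for a perfectly symmetric lesion, approaching 1.0 as the lesion
    becomes more asymmetric (the "A" in the ABCD rule).
    """
    mirrored = [row[::-1] for row in mask]
    inter = sum(a & b for r1, r2 in zip(mask, mirrored) for a, b in zip(r1, r2))
    union = sum(a | b for r1, r2 in zip(mask, mirrored) for a, b in zip(r1, r2))
    return 1.0 - inter / union if union else 0.0

symmetric = [[0, 1, 1, 0],
             [1, 1, 1, 1]]
skewed = [[1, 1, 0, 0],
          [1, 1, 1, 0]]
print(asymmetry_score(symmetric))  # 0.0
print(asymmetry_score(skewed))     # 0.75
```

A score like this could be fed to the model as an extra input (multimodal) or predicted as an auxiliary output (multi-tasking), matching the two configurations studied in the paper.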
2022
Authors
Pedrosa, J; Aresta, G; Ferreira, C;
Publication
Detection Systems in Lung Cancer and Imaging, Volume 1
Abstract
2022
Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonca, AM; Campilho, A;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)
Abstract
Chest radiography is one of the most common medical imaging modalities. However, chest radiography interpretation is a complex task that requires significant expertise. As such, the development of automatic systems for pathology detection has been proposed in the literature, particularly using deep learning. However, these techniques suffer from a lack of explainability, which hinders their adoption in clinical scenarios. One technique commonly used by radiologists to support and explain decisions is to search for cases with similar findings for direct comparison. However, this process is extremely time-consuming and can be prone to confirmation bias. Automatic image retrieval methods have been proposed in the literature but typically extract features from the whole image, failing to focus on the lesion in which the radiologist is interested. In order to overcome these issues, a novel framework for lesion-based image retrieval, LXIR, is proposed in this study, based on a state-of-the-art object detection framework (YOLOv5) for the detection of relevant lesions as well as the feature representation of those lesions. It is shown that the proposed method can successfully identify lesions and extract features which accurately describe high-order characteristics of each lesion, allowing lesions of the same pathological class to be retrieved. Furthermore, it is shown that, in comparison to SSIM-based retrieval, a classical perceptual metric, and to random retrieval of lesions, the proposed method retrieves the most relevant lesions 81% of the time, according to the evaluation of two independent radiologists, compared to 42% of the time for SSIM.
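Once per-lesion feature vectors are available from a detector, the retrieval step amounts to ranking database lesions by similarity to the query lesion. A minimal sketch (assuming cosine similarity and toy 2-D features; the paper's actual descriptors and metric may differ) is:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def retrieve(query, database, k=3):
    """Return the ids of the k database lesions most similar to the query.

    database: mapping of lesion id -> feature vector, e.g. extracted
    from detected lesion regions by an object detection backbone.
    """
    ranked = sorted(database, key=lambda i: cosine(query, database[i]),
                    reverse=True)
    return ranked[:k]

lesions = {"a": [1.0, 0.0], "b": [0.9, 0.1], "c": [0.0, 1.0]}
print(retrieve([1.0, 0.05], lesions, k=2))  # ['a', 'b']
```

Retrieving per-lesion rather than per-image features is what lets the search focus on the finding of interest instead of global image appearance.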
2022
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Carvalho, C; Silva, J; Sousa, P; Ribeiro, L; Mendonca, AM; Campilho, A;
Publication
SCIENTIFIC REPORTS
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98 on the collection of public datasets). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Fine-tuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs).
However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
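The area-under-the-curve figures above can be computed from model scores and ground-truth labels via the Mann-Whitney statistic, without building the ROC curve explicitly. A minimal sketch:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    scores: model outputs (higher = more likely positive);
    labels: 1 for COVID-19 positive, 0 for negative.
    Counts, over all positive/negative pairs, how often the positive
    case scores higher than the negative one (ties count as half).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 (perfect separation)
print(roc_auc([0.1, 0.9], [1, 0]))                  # 0.0 (fully reversed)
```

An AUC near 0.5 means the classifier is no better than chance, which is why the interdataset drop reported above indicates limited screening value.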
2021
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Mendonca, A; Campilho, A;
Publication
PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES (BIOIMAGING), VOL 2
Abstract
Chest radiography is one of the most ubiquitous medical imaging exams, used for the diagnosis and follow-up of a wide array of pathologies. However, chest radiography analysis is time-consuming and often challenging, even for experts. This has led to the development of numerous automatic solutions for multi-pathology detection in chest radiography, particularly after the advent of deep learning. However, the black-box nature of deep learning solutions, together with the inherent class imbalance of medical imaging problems, often leads to weak generalization capabilities, with models learning features based on spurious correlations, such as the aspect and position of laterality, patient position, equipment and hospital markers. In this study, an automatic method based on the YOLOv3 framework was thus developed for the detection of markers and written labels in chest radiography images. It is shown that this model successfully detects a large proportion of markers in chest radiography, even in datasets different from the training source, with a low rate of false positives per image. As such, this method could be used to perform automatic obscuration of markers in large datasets, so that more generic and meaningful features can be learned, thus improving classification performance and robustness.
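Once marker bounding boxes have been detected, the obscuration step reduces to masking those regions before the images are used for training. A minimal sketch (the box format and fill value are assumptions for illustration, not taken from the paper):

```python
def obscure_markers(image, boxes, fill=0):
    """Black out detected marker regions in a grayscale image.

    image: 2D list of pixel values; boxes: (x0, y0, x1, y1) tuples with
    x1/y1 exclusive, as a detector such as YOLOv3 might output after
    rounding to integer pixel coordinates.
    """
    out = [row[:] for row in image]  # copy so the original is untouched
    for x0, y0, x1, y1 in boxes:
        for y in range(y0, y1):
            for x in range(x0, x1):
                out[y][x] = fill
    return out

img = [[5, 5, 5, 5] for _ in range(3)]
masked = obscure_markers(img, [(1, 0, 3, 2)])
print(masked[0])  # [5, 0, 0, 5]
```

Applied dataset-wide, this removes laterality and equipment markers so that a downstream classifier cannot use them as shortcuts for pathology prediction.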