2021
Authors
Sousa, MQE; Pedrosa, J; Rocha, J; Pereira, SC; Mendonça, AM; Campilho, A;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2021, Houston, TX, USA, December 9-12, 2021
Abstract
Chest radiography is one of the most ubiquitous imaging modalities, playing an essential role in screening, diagnosis and disease management. However, chest radiography interpretation is a time-consuming and complex task, requiring the availability of experienced radiologists. As such, automated diagnosis systems for pathology detection have been proposed, aiming to reduce the burden on radiologists and the variability in image interpretation. While promising results have been obtained, particularly since the advent of deep learning, the developed solutions have significant limitations, namely the lack of representative data for less frequent pathologies and the learning of biases from the training data, such as patient position, medical devices and other markers acting as proxies for certain pathologies. The lack of explainability is also a challenge for the adoption of these solutions in clinical practice.

Generative adversarial networks could play a significant role in addressing these challenges, as they allow new realistic images to be created artificially. In this way, synthetic chest radiography images could be used to increase the prevalence of under-represented pathology classes and decrease model biases, as well as to improve the explainability of automatic decisions by generating samples that serve as examples or counter-examples to the image being analysed, while ensuring patient privacy.

In this study, a few-shot generative adversarial network is used to generate synthetic chest radiography images. A minimum Fréchet Inception Distance score of 17.83 was obtained, allowing convincing synthetic images to be generated. Perceptual validation was then performed by asking multiple readers to classify a mixed set of synthetic and real images. An average accuracy of 83.5% was obtained, but a strong dependency on reader experience level was observed. While synthetic images showed structural irregularities, overall image sharpness was a major factor in the readers' decisions. The synthetic images were then validated using a MobileNet abnormality classifier, and over 99% of images were classified correctly, indicating that the generated images were correctly interpreted by the classifier. Finally, using the synthetic images during training of a YOLOv5 pathology detector led to an improvement in mean average precision of 0.05 across 14 pathologies.

In conclusion, the use of few-shot generative adversarial networks for chest radiography image generation was demonstrated and tested in multiple scenarios, establishing a baseline for future experiments to increase the applicability of generative models in clinical scenarios of automatic CXR screening and diagnosis tools.
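The Fréchet Inception Distance reported above compares the distributions of Inception features extracted from real and synthetic images (lower is better). As a minimal sketch of how such a score could be computed, the snippet below uses the torchmetrics library with placeholder image tensors; it is an illustration of the metric, not the authors' pipeline.

```python
# Minimal FID sketch using torchmetrics (requires the torchmetrics image extras).
# The image batches are random placeholders, not chest radiography data.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)

# uint8 tensors of shape (N, 3, H, W), as expected by the default settings.
real_cxr = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)
fake_cxr = torch.randint(0, 256, (16, 3, 299, 299), dtype=torch.uint8)

fid.update(real_cxr, real=True)
fid.update(fake_cxr, real=False)
print(f"FID: {fid.compute().item():.2f}")  # lower values indicate closer distributions
```

In practice the statistics would be accumulated over many more images than shown here, since FID estimates are unreliable for small sample sizes.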
2021
Authors
Wanderley, DS; Ferreira, CA; Campilho, A; Silva, JA;
Publication
CENTERIS 2021 - International Conference on ENTERprise Information Systems / ProjMAN 2021 - International Conference on Project MANagement / HCist 2021 - International Conference on Health and Social Care Information Systems and Technologies 2021, Braga, Portugal
Abstract
The detection of ovarian structures from ultrasound images is an important task in gynecological and reproductive medicine. An automatic detection system for ovarian structures can work as a second opinion for less experienced physicians or in complex ultrasound interpretations. This work presents a study of three popular CNN-based object detectors applied to the detection of healthy ovarian structures, namely the ovary and follicles, in B-mode ultrasound images. The Faster R-CNN presented the best results, with a precision of 95.5% and a recall of 94.7% for both classes, and was able to detect all the ovaries correctly. The RetinaNet showed competitive results, exceeding 90% precision and recall. Despite being very fast and suitable for real-time applications, YOLOv3 was ineffective in detecting ovaries and had the worst results in detecting follicles. We also compare the CNN results with classical computer vision methods presented in the ovarian follicle detection literature.
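For reference, a two-class detector of this kind can be set up in torchvision by replacing the Faster R-CNN box predictor. The sketch below follows standard torchvision fine-tuning practice; the class count (ovary, follicle) comes from the abstract, while the backbone weights and input size are assumptions rather than the authors' configuration.

```python
# Sketch: configuring a torchvision Faster R-CNN for ovary/follicle detection.
# Hyper-parameters and weights are illustrative, not the authors' settings.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + ovary + follicle

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

model.eval()
with torch.no_grad():
    # One dummy ultrasound frame, replicated to 3 channels, values in [0, 1].
    image = torch.rand(3, 512, 512)
    predictions = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    print(predictions["boxes"].shape, predictions["scores"].shape)
```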
2022
Authors
Pedrosa, J; Sousa, P; Silva, J; Mendonca, AM; Campilho, A;
Publication
PATTERN RECOGNITION AND IMAGE ANALYSIS (IBPRIA 2022)
Abstract
Chest radiography is one of the most common medical imaging modalities. However, chest radiography interpretation is a complex task that requires significant expertise. As such, the development of automatic systems for pathology detection has been proposed in the literature, particularly using deep learning. However, these techniques suffer from a lack of explainability, which hinders their adoption in clinical scenarios. One technique commonly used by radiologists to support and explain decisions is to search for cases with similar findings for direct comparison. However, this process is extremely time-consuming and can be prone to confirmation bias. Automatic image retrieval methods have been proposed in the literature but typically extract features from the whole image, failing to focus on the lesion in which the radiologist is interested. To overcome these issues, a novel framework for lesion-based image retrieval, LXIR, is proposed in this study, based on a state-of-the-art object detection framework (YOLOv5) for the detection of relevant lesions as well as the feature representation of those lesions. It is shown that the proposed method can successfully identify lesions and extract features which accurately describe high-order characteristics of each lesion, allowing the retrieval of lesions of the same pathological class. Furthermore, it is shown that, in comparison to SSIM-based retrieval (a classical perceptual metric) and random retrieval of lesions, the proposed method retrieves the most relevant lesions 81% of the time, according to the evaluation of two independent radiologists, compared to 42% for SSIM.
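The retrieval step described here amounts to a nearest-neighbour search over feature vectors extracted from detected lesion crops. The sketch below illustrates that step with cosine similarity over placeholder descriptors; the feature extractor, descriptor length and database are assumptions, not the LXIR implementation.

```python
# Sketch: retrieving the most similar lesions by cosine similarity over
# lesion feature vectors. The descriptors are placeholders, not LXIR outputs.
import numpy as np

def retrieve_similar(query_feat: np.ndarray, db_feats: np.ndarray, top_k: int = 5):
    """Return indices of the top_k database lesions most similar to the query."""
    q = query_feat / np.linalg.norm(query_feat)
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    similarities = db @ q                      # cosine similarity per database lesion
    return np.argsort(similarities)[::-1][:top_k]

rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 256))        # 1000 lesion descriptors of length 256
query = rng.normal(size=256)                   # descriptor of the lesion being analysed
print(retrieve_similar(query, database))
```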
2022
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Carvalho, C; Silva, J; Sousa, P; Ribeiro, L; Mendonca, AM; Campilho, A;
Publication
SCIENTIFIC REPORTS
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has impacted healthcare systems across the world. Chest radiography (CXR) can be used as a complementary method for diagnosing/following COVID-19 patients. However, the experience level and workload of technicians and radiologists may affect the decision process. Recent studies suggest that deep learning can be used to assess CXRs, providing an important second opinion for radiologists and technicians in the decision process, and super-human performance in the detection of COVID-19 has been reported in multiple studies. In this study, the clinical applicability of deep learning systems for COVID-19 screening was assessed by testing the performance of deep learning systems for the detection of COVID-19. Specifically, four datasets were used: (1) a collection of multiple public datasets (284,793 CXRs); (2) the BIMCV dataset (16,631 CXRs); (3) COVIDGR (852 CXRs); and (4) a private dataset (6,361 CXRs). All datasets were collected retrospectively and consist of only frontal CXR views. A ResNet-18 was trained on each of the datasets for the detection of COVID-19. It is shown that a high dataset bias was present, leading to high performance in intradataset train-test scenarios (area under the curve > 0.98). Significantly lower performances were obtained in interdataset train-test scenarios, however (area under the curve 0.55-0.84 on the collection of public datasets). A subset of the data was then assessed by radiologists for comparison to the automatic systems. Fine-tuning with radiologist annotations significantly increased performance across datasets (area under the curve 0.61-0.88) and improved the attention on clinical findings in positive COVID-19 CXRs. Nevertheless, tests on CXRs from different hospital services indicate that the screening performance of CXR and automatic systems is limited (area under the curve < 0.6 on emergency service CXRs). However, COVID-19 manifestations can be accurately detected when present, motivating the use of these tools for evaluating disease progression in mild to severe COVID-19 patients.
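As a minimal sketch of the evaluation setup described above, the snippet below adapts a ResNet-18 to a single COVID-19 logit and scores it with the area under the ROC curve; the data, labels and training loop are placeholders and do not reflect the study's datasets or hyper-parameters.

```python
# Sketch: ResNet-18 with a binary COVID-19 output, evaluated with ROC AUC.
# Inputs and labels are synthetic placeholders, not study data.
import torch
import torch.nn as nn
import torchvision
from sklearn.metrics import roc_auc_score

model = torchvision.models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)   # single logit for COVID-19

model.eval()
with torch.no_grad():
    cxr_batch = torch.rand(8, 3, 224, 224)      # placeholder frontal CXRs
    logits = model(cxr_batch).squeeze(1)
    probs = torch.sigmoid(logits)

labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])  # placeholder ground truth
print("AUC:", roc_auc_score(labels.numpy(), probs.numpy()))
```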
2021
Authors
Pedrosa, J; Aresta, G; Ferreira, C; Mendonca, A; Campilho, A;
Publication
PROCEEDINGS OF THE 15TH INTERNATIONAL JOINT CONFERENCE ON BIOMEDICAL ENGINEERING SYSTEMS AND TECHNOLOGIES (BIOIMAGING), VOL 2
Abstract
Chest radiography is one of the most ubiquitous medical imaging exams, used for the diagnosis and follow-up of a wide array of pathologies. However, chest radiography analysis is time-consuming and often challenging, even for experts. This has led to the development of numerous automatic solutions for multi-pathology detection in chest radiography, particularly after the advent of deep learning. However, the black-box nature of deep learning solutions, together with the inherent class imbalance of medical imaging problems, often leads to weak generalization capabilities, with models learning features based on spurious correlations such as the aspect and position of laterality, patient position, equipment and hospital markers. In this study, an automatic method based on a YOLOv3 framework was thus developed for the detection of markers and written labels in chest radiography images. It is shown that this model successfully detects a large proportion of markers in chest radiography, even in datasets different from the training source, with a low rate of false positives per image. As such, this method could be used to perform automatic obscuration of markers in large datasets, so that more generic and meaningful features can be learned, thus improving classification performance and robustness.
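The obscuration step suggested at the end of the abstract can be illustrated by masking the detected bounding boxes before training a downstream classifier. The sketch below assumes boxes in (x_min, y_min, x_max, y_max) pixel coordinates and a constant fill value; both are illustrative choices, not the paper's specification.

```python
# Sketch: obscuring detected marker/label regions in a chest radiograph by
# filling their bounding boxes. Box format (x_min, y_min, x_max, y_max) is assumed.
import numpy as np

def obscure_markers(image: np.ndarray, boxes, fill_value: float = 0.0) -> np.ndarray:
    """Return a copy of `image` with each detected box filled with `fill_value`."""
    masked = image.copy()
    for x_min, y_min, x_max, y_max in boxes:
        masked[int(y_min):int(y_max), int(x_min):int(x_max)] = fill_value
    return masked

cxr = np.random.rand(1024, 1024)                          # placeholder radiograph
detections = [(30, 40, 180, 90), (900, 950, 1000, 1010)]  # placeholder detector outputs
print(obscure_markers(cxr, detections).shape)
```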
2022
Authors
Meiburger, KM; Marzola, F; Zahnd, G; Faita, F; Loizou, CP; Laine, N; Carvalho, C; Steinman, DA; Gibello, L; Bruno, RM; Clarenbach, R; Francesconi, M; Nicolaides, AN; Liebgott, H; Campilho, A; Ghotbi, R; Kyriacou, E; Navab, N; Griffin, M; Panayiotou, AG; Gherardini, R; Varetto, G; Bianchini, E; Pattichis, CS; Ghiadoni, L; Rouco, J; Orkisz, M; Molinari, F;
Publication
COMPUTERS IN BIOLOGY AND MEDICINE
Abstract
After publishing an in-depth study that analyzed the ability of computerized methods to assist or replace human experts in obtaining carotid intima-media thickness (CIMT) measurements leading to correct therapeutic decisions, here the same consortium joined to present technical outlooks on computerized CIMT measurement systems and provide considerations for the community regarding the development and comparison of these methods, including considerations to encourage the standardization of computerized CIMT measurements and results presentation. A multi-center database of 500 images was collected, upon which three manual segmentations and seven computerized methods were employed to measure the CIMT, including traditional methods based on dynamic programming, deformable models, the first-order absolute moment and anisotropic Gaussian derivative filters, as well as deep learning-based image processing approaches based on U-Net convolutional neural networks. An inter- and intra-analyst variability analysis was conducted and segmentation results were analyzed by dividing the database based on carotid morphology, image signal-to-noise ratio, and research center. The computerized methods obtained CIMT absolute bias results that were comparable with studies in the literature and were generally similar to, and often better than, the observed inter- and intra-analyst variability. Several computerized methods showed promising segmentation results, including one deep learning method (CIMT absolute bias = 106 ± 89 µm vs. 160 ± 140 µm intra-analyst variability) and three other traditional image processing methods (CIMT absolute bias = 139 ± 119 µm, 143 ± 118 µm and 139 ± 136 µm). The entire database used has been made publicly available for the community to facilitate future studies and to encourage an open comparison and technical analysis.
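The absolute bias figures quoted above correspond to the mean and standard deviation of the absolute difference between automated and manual CIMT measurements, expressed in micrometres. A minimal sketch of that computation follows; the measurement arrays are simulated placeholders, not data from the study.

```python
# Sketch: CIMT absolute bias as mean ± std of |automatic - manual| in micrometres.
# Measurement values are simulated placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(1)
manual_cimt = rng.normal(700, 120, size=500)             # manual CIMT, µm (one per image)
auto_cimt = manual_cimt + rng.normal(0, 110, size=500)   # simulated automatic readings

abs_error = np.abs(auto_cimt - manual_cimt)
print(f"CIMT absolute bias = {abs_error.mean():.0f} ± {abs_error.std():.0f} µm")
```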