2022
Authors
Rocha, J; Pereira, SC; Pedrosa, J; Campilho, A; Mendonca, AM;
Publication
2022 IEEE 35TH INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS)
Abstract
Backed by more powerful computational resources and optimized training routines, deep learning models have attained unprecedented performance in extracting information from chest X-ray data. Preceding other tasks, an automated abnormality detection stage can be useful to prioritize certain exams and enable a more efficient clinical workflow. However, the presence of image artifacts such as lettering often generates a harmful bias in the classifier, leading to an increase in false positive results. Consequently, healthcare would benefit from a system that selects the thoracic region of interest prior to deciding whether an image is possibly pathologic. The current work tackles this binary classification exercise using an attention-driven and spatially unsupervised Spatial Transformer Network (STN). The results indicate that the STN achieves similar results to using YOLO-cropped images, with fewer computational expenses and without the need for localization labels. More specifically, the system is able to distinguish between normal and abnormal CheXpert images with a mean AUC of 84.22%.
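The AUC reported above can be read as the probability that a randomly chosen abnormal image receives a higher classifier score than a randomly chosen normal one. A minimal sketch of that rank-based computation (the scores below are hypothetical, not from the paper):

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: fraction of (positive, negative) pairs where the
    positive outscores the negative; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical scores for abnormal (positive) and normal (negative) images
pos = [0.9, 0.8, 0.4]
neg = [0.5, 0.3, 0.2]
print(round(auc(pos, neg), 4))  # 0.8889 (8 of 9 pairs correctly ordered)
```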
2022
Authors
Zhao, D; Ferdian, E; Maso Talou, GD; Gilbert, K; Quill, GM; Wang, VY; Pedrosa, J; D'hooge, J; Sutton, T; Lowe, BS; Legget, ME; Ruygrok, PN; Doughty, RN; Young, AA; Nash, MP;
Publication
European Heart Journal - Cardiovascular Imaging
Abstract
2022
Authors
Maximino, J; Coimbra, MT; Pedrosa, J;
Publication
44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC 2022, Glasgow, Scotland, United Kingdom, July 11-15, 2022
Abstract
The coronavirus disease 2019 (COVID-19) evolved into a global pandemic, responsible for a significant number of infections and deaths. In this scenario, point-of-care ultrasound (POCUS) has emerged as a viable and safe imaging modality. Computer vision (CV) solutions have been proposed to aid clinicians in POCUS image interpretation, namely detection/segmentation of structures and image/patient classification, but relevant challenges still remain. As such, the aim of this study is to develop CV algorithms, using Deep Learning techniques, to create tools that can aid doctors in the diagnosis of viral and bacterial pneumonia (VP and BP) through POCUS exams. To do so, convolutional neural networks were designed to perform classification tasks. The architectures chosen to build these models were VGG16, ResNet50, DenseNet169 and MobileNetV2. Patients' images were divided into three classes: healthy (HE), BP and VP (which includes COVID-19). Through a comparative study, based on several performance metrics, the model built on the DenseNet169 architecture was designated as the best performing model, achieving an average accuracy of 78% over the five iterations of 5-Fold Cross-Validation. Given that the currently available POCUS datasets for COVID-19 are still limited, the training of the models was negatively affected by this limitation and the models were not tested on an independent dataset. Furthermore, it was also not possible to perform lesion detection tasks. Nonetheless, in order to provide explainability and understanding of the models, Gradient-weighted Class Activation Mapping (GradCAM) was used as a tool to highlight the regions most relevant to classification. Clinical relevance - Reveals the potential of POCUS to support COVID-19 screening. The results are very promising although the dataset is limited.
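The 78% figure above is an average over five folds. A minimal sketch of how k-fold cross-validation partitions the data and averages per-fold accuracy (contiguous, unshuffled folds for illustration; `train_and_score` is a hypothetical callback standing in for model training and evaluation):

```python
from statistics import mean

def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n_samples, train_and_score, k=5):
    """Mean accuracy over k folds; each fold serves once as the test set.
    train_and_score(train_idx, test_idx) -> accuracy on test_idx."""
    folds = kfold_indices(n_samples, k)
    accs = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        accs.append(train_and_score(train_idx, test_idx))
    return mean(accs)
```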
2022
Authors
Ali, Y; Beheshti, S; Janabi Sharifi, F; Rezaii, TY; Cheema, AN; Pedrosa, J;
Publication
SIGNAL IMAGE AND VIDEO PROCESSING
Abstract
Echocardiography-based cardiac boundary tracking provides valuable information about the heart condition for interventional procedures and intensive care applications. Nevertheless, echocardiographic images come with several issues, making it a challenging task to develop a tracking and segmentation algorithm that is robust to shadows, occlusions, and heart rate changes. We propose an autonomous tracking method to improve the robustness and efficiency of echocardiographic tracking. A method denoted hybrid Condensation and adaptive Kalman filter (HCAKF) is proposed to overcome tracking challenges of echocardiograms, such as variable heart rate and sensitivity to the initialization stage. The tracking process is initiated by utilizing an active shape model, which provides the tracking methods with a number of tracking features. The procedure tracks the endocardium borders, and it is able to adapt to changes in the velocity and visibility of the cardiac boundaries. HCAKF enables one to use a much smaller number of samples than is used in Condensation without sacrificing tracking accuracy. Furthermore, despite combining the two methods, our complexity analysis shows that HCAKF can produce results in real time. The obtained results demonstrate the robustness of the proposed method to changes in the heart rate, yielding a Hausdorff distance of 1.032 +/- 0.375 while providing adequate efficiency for real-time operations.
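The Kalman half of a hybrid tracker like the one described fuses a predicted state with each noisy measurement, weighted by their uncertainties. A minimal one-dimensional predict/update sketch (constant-position model, hypothetical noise values; not the authors' adaptive formulation, which also adjusts to boundary velocity):

```python
def kalman_step(x, P, z, q=1e-3, r=1e-2):
    """One predict+update cycle of a 1-D Kalman filter.
    x: state estimate, P: its variance, z: new measurement,
    q: process noise variance, r: measurement noise variance."""
    # Predict: state carries over, uncertainty grows by process noise
    x_pred, P_pred = x, P + q
    # Update: blend prediction and measurement via the Kalman gain
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

# Track a boundary coordinate from repeated noisy observations near 1.0
x, P = 0.0, 1.0
for z in [1.02, 0.98, 1.01, 0.99, 1.0]:
    x, P = kalman_step(x, P, z)
```

In the full HCAKF scheme, this filter runs alongside Condensation (particle filtering), which is what allows far fewer samples than Condensation alone.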
2022
Authors
Baeza, R; Santos, C; Nunes, F; Mancio, J; Carvalho, RF; Coimbra, MT; Renna, F; Pedrosa, J;
Publication
Wireless Mobile Communication and Healthcare - 11th EAI International Conference, MobiHealth 2022, Virtual Event, November 30 - December 2, 2022, Proceedings
Abstract
The pericardium is a thin membrane sac that covers the heart. As such, the segmentation of the pericardium in computed tomography (CT) can have several clinical applications, namely as a preprocessing step for extraction of different clinical parameters. However, manual segmentation of the pericardium can be challenging, time-consuming and subject to observer variability, which has motivated the development of automatic pericardial segmentation methods. In this study, a method to automatically segment the pericardium in CT using a U-Net framework is proposed. Two datasets were used in this study: the publicly available Cardiac Fat dataset and a private dataset acquired at the hospital centre of Vila Nova de Gaia e Espinho (CHVNGE). The Cardiac Fat database was used for training with two different input sizes - 512 × 512 and 256 × 256. A superior performance was obtained with the 256 × 256 image size, with a mean Dice similarity coefficient (DSC) of 0.871 ± 0.01 and 0.807 ± 0.06 on the Cardiac Fat test set and the CHVNGE dataset, respectively. Results show that reasonable performance can be achieved with a small number of patients for training and an off-the-shelf framework, with only a small decrease in performance on an external dataset. Nevertheless, additional data will increase the robustness of this approach for difficult cases, and future approaches must focus on the integration of 3D information for a more accurate segmentation of the lower pericardium.
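The Dice similarity coefficient reported above measures overlap between a predicted and a reference segmentation mask: twice the intersection over the sum of the two mask sizes. A minimal sketch on flat binary masks (illustrative only, not the evaluation code of the study):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as equal-length flat sequences of 0/1 values."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match
    return 2.0 * inter / total if total else 1.0

pred = [1, 1, 1, 0, 0]
ref  = [0, 1, 1, 1, 0]
print(dice(pred, ref))  # 0.666... (2 overlapping pixels, 6 total)
```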
2023
Authors
Ferraz, S; Coimbra, M; Pedrosa, J;
Publication
FRONTIERS IN CARDIOVASCULAR MEDICINE
Abstract
Echocardiography is the most frequently used imaging modality in cardiology. However, its acquisition is affected by inter-observer variability and is largely dependent on the operator's experience. In this context, artificial intelligence techniques could reduce these variabilities and provide a user-independent system. In recent years, machine learning (ML) algorithms have been used in echocardiography to automate echocardiographic acquisition. This review focuses on the state-of-the-art studies that use ML to automate tasks regarding the acquisition of echocardiograms, including quality assessment (QA), recognition of cardiac views and assisted probe guidance during the scanning process. The results indicate that the performance of automated acquisition was overall good, but most studies lack variability in their datasets. From our comprehensive review, we believe automated acquisition has the potential not only to improve accuracy of diagnosis, but also to help novice operators build expertise and facilitate point-of-care healthcare in medically underserved areas.