2024
Authors
Magalhães, B; Pedrosa, J; Renna, F; Paredes, H; Filipe, V;
Publication
IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2024, Lisbon, Portugal, December 3-6, 2024
Abstract
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, underscoring the need for accurate and reliable diagnostic tools. While AI-driven models have shown significant promise in identifying CAD through imaging techniques, their 'black box' nature often hinders clinical adoption due to a lack of interpretability. In response, this paper proposes a novel approach to image captioning specifically tailored for CAD diagnosis, aimed at enhancing the transparency and usability of AI systems. Utilizing the COCA dataset, which comprises gated coronary CT images along with Ground Truth (GT) segmentation annotations, we introduce a hybrid model architecture that combines a Vision Transformer (ViT) for feature extraction with a Generative Pretrained Transformer (GPT) for generating clinically relevant textual descriptions. This work builds on a previously developed 3D Convolutional Neural Network (CNN) for coronary artery segmentation, leveraging its accurate delineations of calcified regions as critical inputs to the captioning process. By incorporating these segmentation outputs, our approach not only focuses on accurately identifying and describing calcified regions within the coronary arteries but also ensures that the generated captions are clinically meaningful and reflective of key diagnostic features such as location, severity, and artery involvement. This methodology provides medical practitioners with clear, context-rich explanations of AI-generated findings, thereby bridging the gap between advanced AI technologies and practical clinical applications. Furthermore, our work underscores the critical role of Explainable AI (XAI) in fostering trust, improving decision-making, and enhancing the efficacy of AI-driven diagnostics, paving the way for future advancements in the field. © 2024 IEEE.
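As a rough illustration of the encoder-decoder pattern described above (a ViT feature extractor paired with a GPT-style text decoder), the following sketch wires together off-the-shelf Hugging Face components. The checkpoint names, image size and generation settings are assumptions made for illustration; the paper's model is trained on gated coronary CT with segmentation-derived inputs, which is not reproduced here.

```python
import torch
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, GPT2TokenizerFast

encoder_name = "google/vit-base-patch16-224-in21k"  # assumed ViT backbone
decoder_name = "gpt2"                               # assumed language decoder

model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_name, decoder_name)
processor = ViTImageProcessor.from_pretrained(encoder_name)
tokenizer = GPT2TokenizerFast.from_pretrained(decoder_name)

# GPT-2 has no dedicated padding/start tokens, so reuse its EOS/BOS tokens.
tokenizer.pad_token = tokenizer.eos_token
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# Placeholder input: one CT slice replicated to three channels, values in [0, 1].
dummy_slice = torch.rand(3, 224, 224)
pixel_values = processor(images=dummy_slice, return_tensors="pt", do_rescale=False).pixel_values

# After fine-tuning on image-caption pairs, a report sentence would be generated as:
generated_ids = model.generate(pixel_values, max_new_tokens=40)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```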
2024
Authors
Ferraz, S; Coimbra, MT; Pedrosa, J;
Publication
46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBC 2024, Orlando, FL, USA, July 15-19, 2024
Abstract
Motion estimation in echocardiography is critical when assessing heart function and calculating myocardial deformation indices. Nevertheless, there are limitations in clinical practice, particularly with regard to the accuracy and reliability of measurements retrieved from images. In this study, deep learning-based motion estimation architectures were used to determine left ventricular longitudinal strain in echocardiography. Three motion estimation approaches, pretrained on popular optical flow datasets, were applied to a simulated echocardiographic dataset. Results show that PWC-Net, RAFT and FlowFormer achieved an average end-point error of 0.20, 0.11 and 0.09 mm per frame, respectively. Additionally, global longitudinal strain was calculated from the FlowFormer outputs to assess strain correlation. Notably, strain accuracy varies among different vendors. Thus, optical flow-based motion estimation has the potential to facilitate the use of strain imaging in clinical practice.
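For context, the two quantities reported above can be computed as follows; this is a generic sketch (array shapes, pixel spacing and the toy contours are placeholders), not the study's evaluation code.

```python
import numpy as np

def average_epe(flow_est: np.ndarray, flow_gt: np.ndarray, mm_per_px: float = 0.3) -> float:
    """Mean end-point error between HxWx2 displacement fields, converted from pixels to mm."""
    err_px = np.linalg.norm(flow_est - flow_gt, axis=-1)
    return float(err_px.mean() * mm_per_px)

def global_longitudinal_strain(contour_ref: np.ndarray, contour_def: np.ndarray) -> float:
    """GLS (%) as the relative change in length of an ordered LV contour (Nx2 points)."""
    length = lambda c: np.sum(np.linalg.norm(np.diff(c, axis=0), axis=-1))
    l0, l1 = length(contour_ref), length(contour_def)
    return float((l1 - l0) / l0 * 100.0)

# Toy usage with synthetic flows and a contour shortened by ~15%.
rng = np.random.default_rng(0)
flow_gt = rng.normal(size=(64, 64, 2))
flow_est = flow_gt + rng.normal(scale=0.1, size=(64, 64, 2))
contour_ed = np.stack([np.linspace(0, 50, 20), np.linspace(0, 80, 20)], axis=1)
contour_es = contour_ed * 0.85
print(average_epe(flow_est, flow_gt), global_longitudinal_strain(contour_ed, contour_es))
```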
2024
Authors
Akbari, S; Tabassian, M; Pedrosa, J; Queirós, S; Papangelopoulou, K; D'hooge, J;
Publication
IEEE TRANSACTIONS ON ULTRASONICS FERROELECTRICS AND FREQUENCY CONTROL
Abstract
Left ventricle (LV) segmentation of 2-D echocardiography images is an essential step in the analysis of cardiac morphology and function and, more generally, in the diagnosis of cardiovascular diseases (CVD). Several deep learning (DL) algorithms have recently been proposed for automatic segmentation of the LV, showing significant performance improvements over traditional segmentation algorithms. However, unlike the traditional methods, prior information about the segmentation problem, e.g., anatomical shape information, is not usually incorporated when training the DL algorithms. This can degrade the generalization performance of DL models on unseen images whose characteristics differ from those of the training images, e.g., low-quality test images. In this study, a new shape-constrained deep convolutional neural network (CNN), called the B-spline explicit active surface network (BEAS-Net), is introduced for automatic LV segmentation. The BEAS-Net learns how to associate the image features, encoded by its convolutional layers, with anatomical shape-prior information derived from the BEAS algorithm to generate physiologically meaningful segmentation contours when dealing with artifactual or low-quality images. The performance of the proposed network was evaluated on three different in vivo datasets and compared with a deep segmentation algorithm based on the U-Net model. Both networks yielded comparable results when tested on images of acceptable quality, but the BEAS-Net outperformed the benchmark DL model on artifactual and low-quality images.
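The core idea, coupling a convolutional encoder with an explicit spline-based contour representation rather than a per-pixel mask, can be sketched as below. The tiny encoder, the number of radial coefficients and the use of a periodic cubic spline in place of the BEAS B-spline basis are all simplifications made for illustration, not the published architecture.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import CubicSpline

N_COEFF = 16  # number of radial shape coefficients predicted by the network

class SplineContourHead(nn.Module):
    """Toy stand-in for a CNN encoder whose head predicts a centre point and radial coefficients."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(8, 2 + N_COEFF)

    def forward(self, x):
        out = self.head(self.backbone(x))
        centre = out[:, :2]
        radii = nn.functional.softplus(out[:, 2:]) + 1.0  # keep radii strictly positive
        return centre, radii

def contour_from_coefficients(centre, radii, n_points=200):
    """Turn radial coefficients into a closed contour via a periodic spline over the angle."""
    angles_k = np.linspace(0.0, 2.0 * np.pi, len(radii) + 1)
    spline = CubicSpline(angles_k, np.append(radii, radii[0]), bc_type="periodic")
    theta = np.linspace(0.0, 2.0 * np.pi, n_points)
    r = spline(theta)
    return centre[0] + r * np.cos(theta), centre[1] + r * np.sin(theta)

model = SplineContourHead()
centre, radii = model(torch.rand(1, 1, 64, 64))
xs, ys = contour_from_coefficients(centre[0].detach().numpy(), radii[0].detach().numpy())
```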
2024
Authors
Mancio, J; Lopes, A; Sousa, I; Nunes, F; Xara, S; Carvalho, M; Ferreira, W; Ferreira, N; Barros, A; Fontes-Carvalho, R; Ribeiro, VG; Bettencourt, N; Pedrosa, J;
Publication
Abstract
Background: Subcutaneous (SAF) and visceral (VAF) abdominal fat have specific properties which global body fat and total abdominal fat (TAF) size metrics do not capture. Beyond size, radiomics allows deep tissue phenotyping and may capture fat dysfunction. We aimed to characterize the computed tomography (CT) radiomics of SAF and VAF and assess their incremental value over fat size in detecting coronary calcification. Methods: SAF, VAF and TAF area, signal distribution and texture were extracted from non-contrast CT of 1001 subjects (57% male, 57 ± 10 years) with no established cardiovascular disease who underwent CT for coronary calcium score (CCS) with an additional abdominal slice (L4/5-S1). XGBoost machine learning (ML) models were used to identify the best features that discriminate SAF from VAF and to train/test ML models to detect any coronary calcification (CCS > 0). Results: SAF and VAF appearance in non-contrast CT differs: SAF displays a brighter and finer texture than VAF. Compared with CCS = 0, SAF of CCS > 0 has higher signal and more homogeneous texture, while VAF of CCS > 0 has lower signal and more heterogeneous texture. SAF signal/texture improved the performance of SAF area in detecting CCS > 0. An ML model including SAF and VAF area performed better than TAF area in discriminating CCS > 0 from CCS = 0; however, a combined ML model of the best SAF and VAF features detected CCS > 0 as well as the best TAF features. Conclusion: In non-contrast CT, SAF and VAF appearance differs, and SAF radiomics improves the detection of CCS > 0 when added to fat area; TAF radiomics (but not TAF area) spares the need for separate SAF and VAF segmentations.
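The tabular modelling step described above follows a standard gradient-boosting workflow; a minimal sketch with synthetic placeholder features (the feature names, distributions and labels are invented for illustration, not the study's data) is given below.

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1001  # same cohort size as reported above, but synthetic values
features = pd.DataFrame({
    "saf_area_cm2": rng.normal(250, 80, n),
    "saf_mean_hu": rng.normal(-100, 15, n),
    "saf_glcm_homogeneity": rng.uniform(0.3, 0.9, n),
    "vaf_area_cm2": rng.normal(150, 60, n),
    "vaf_mean_hu": rng.normal(-90, 15, n),
    "vaf_glcm_homogeneity": rng.uniform(0.2, 0.8, n),
})
ccs_positive = (rng.uniform(size=n) < 0.5).astype(int)  # placeholder label: any coronary calcium

X_tr, X_te, y_tr, y_te = train_test_split(features, ccs_positive, test_size=0.25, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```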
2024
Authors
de C Araújo, A; Silva, C; Pedrosa, M; Silva, FS; Diniz, OB;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
One of the indicators of possible cardiovascular disease is the amount of coronary artery calcium. Recently, approaches using new technologies such as deep learning have been used to help identify these indicators. This work proposes a segmentation method for coronary artery calcification comprising three steps: (1) extraction of the ROI using a U-Net with batch normalization after the convolution layers, (2) segmentation of the calcifications and (3) removal of false positives using a Modified U-Net with EfficientNet. The method uses histogram matching as preprocessing in order to increase the contrast between tissue and calcification and to normalize the different types of exams. Multiple architectures were tested, and the best achieved 96.9% F1-score, 97.1% recall and 98.3% on the OrcaScore dataset. © ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering 2024.
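The histogram-matching preprocessing mentioned above is a standard intensity-normalisation step; a minimal sketch using scikit-image is shown below, with random arrays standing in for the reference and target CT slices.

```python
import numpy as np
from skimage.exposure import match_histograms

# Placeholder slices: a reference exam and an exam to be normalised towards it.
reference_ct = np.random.normal(40, 30, size=(512, 512)).astype(np.float32)
moving_ct = np.random.normal(20, 50, size=(512, 512)).astype(np.float32)

# Remap the intensities of moving_ct so its histogram matches the reference exam.
matched_ct = match_histograms(moving_ct, reference_ct)
print(matched_ct.mean(), reference_ct.mean())
```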
2024
Authors
Pereira, P; Rocha, J; Pedrosa, J; Mendonça, AM;
Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024
Abstract
Chest X-Ray (CXR) plays a vital role in diagnosing lung and heart conditions, but the high demand for CXR examinations poses challenges for radiologists. Automatic support systems can ease this burden by assisting radiologists in the image analysis process. While Deep Learning models have shown promise in this task, concerns persist regarding their complexity and decision-making opacity. To address this, various visual explanation techniques have been developed to elucidate the model reasoning, some of which, such as GradCAM, have received significant attention in the literature and are widely used. However, it is unclear how different explanation methods perform, how to quantitatively measure their performance, and how that performance may depend on the model architecture used and the dataset characteristics. In this work, two widely used deep classification networks - DenseNet121 and ResNet50 - are trained for multi-pathology classification on CXR, and visual explanations are then generated using GradCAM, GradCAM++, EigenGrad-CAM, Saliency maps, LRP and DeepLift. These explanation methods are then compared with radiologist annotations using previously proposed explainability evaluation metrics - intersection over union and hit rate. Furthermore, a novel method to convey visual explanations in the form of radiological written reports is proposed, allowing for a clinically-oriented explainability evaluation metric - the zones score. It is shown that GradCAM++ and Saliency methods offer the most accurate explanations and that the effectiveness of visual explanations varies based on the model and the corresponding input size. Additionally, explainability performance across different CXR datasets is evaluated, highlighting that explanation quality depends on the dataset's characteristics and annotations.
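For reference, simple versions of the two annotation-based metrics mentioned above (intersection over union and hit rate) can be written as follows; the binarisation threshold and toy arrays are assumptions of this sketch, not the paper's exact protocol.

```python
import numpy as np

def explanation_iou(heatmap: np.ndarray, annotation: np.ndarray, thresh: float = 0.5) -> float:
    """IoU between the binarised explanation heatmap and the binary annotation mask."""
    pred = heatmap >= thresh * heatmap.max()
    inter = np.logical_and(pred, annotation).sum()
    union = np.logical_or(pred, annotation).sum()
    return float(inter / union) if union else 0.0

def hit_rate(heatmap: np.ndarray, annotation: np.ndarray) -> float:
    """1.0 if the heatmap maximum falls inside the annotated region, else 0.0."""
    peak = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return float(annotation[peak])

# Toy example: a square heatmap blob mostly inside a larger annotated box.
heatmap = np.zeros((224, 224)); heatmap[100:140, 80:120] = 1.0
annotation = np.zeros((224, 224), dtype=bool); annotation[90:150, 70:130] = True
print(explanation_iou(heatmap, annotation), hit_rate(heatmap, annotation))
```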