2022
Authors
Renna, F; Martins, M; Neto, A; Cunha, A; Libanio, D; Dinis-Ribeiro, M; Coimbra, M;
Publication
DIAGNOSTICS
Abstract
Stomach cancer is the third deadliest type of cancer in the world (0.86 million deaths in 2017). By 2035, a 20% increase in both incidence and mortality is expected due to demographic effects if no interventions are made. Upper GI endoscopy (UGIE) plays a paramount role in early diagnosis and, therefore, in improved survival rates. On the other hand, human and technical factors can contribute to misdiagnosis during UGIE. In this scenario, artificial intelligence (AI) has recently shown its potential to compensate for the pitfalls of UGIE by leveraging deep learning architectures able to efficiently recognize endoscopic patterns from UGIE video data. This work presents a review of the current state-of-the-art algorithms in the application of AI to gastroscopy. It focuses specifically on the threefold task of assuring exam completeness (i.e., detecting the presence of blind spots) and assisting in the detection and characterization of clinical findings, both gastric precancerous conditions and neoplastic lesions. Early and promising results have already been obtained using well-known deep learning architectures for computer vision, but many algorithmic challenges remain in achieving the vision of AI-assisted UGIE. Future challenges in the roadmap for the effective integration of AI tools into UGIE clinical practice are discussed, namely the adoption of more robust deep learning architectures, methods able to embed domain knowledge into image/video classifiers, and the availability of large, annotated datasets.
2022
Authors
Cardoso, AS; Renna, F; Moreno-Llorca, R; Alcaraz-Segura, D; Tabik, S; Ladle, RJ; Vaz, AS;
Publication
ECOSYSTEM SERVICES
Abstract
Crowdsourced social media data have become popular for assessing cultural ecosystem services (CES). Nevertheless, social media data analyses in the context of CES can be time consuming and costly, particularly when based on the manual classification of images or texts shared by people. The potential of deep learning for automating the analysis of crowdsourced social media content is still being explored in CES research. Here, we use freely available deep learning models, i.e., Convolutional Neural Networks, to automate the classification of natural and human elements (e.g., species and human structures) relevant to CES from Flickr and Wikiloc images. Our approach is developed for Peneda-Gerês (Portugal) and then applied to Sierra Nevada (Spain). For Peneda-Gerês, image classification showed promising results (F1-score ca. 80%), highlighting a preference for aesthetics appreciation by social media users. In Sierra Nevada, even though model performance decreased, it was still satisfactory (F1-score ca. 60%), indicating a predominance of people's pursuit of cultural heritage and spiritual enrichment. Our study shows great potential for deep learning to assist in the automated classification of human-nature interactions and elements from social media content and, by extension, to support researchers and stakeholders in decoding CES distributions, benefits, and values.
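For reference, the F1-scores reported in this abstract are typically computed as an unweighted (macro) average over per-class F1. A minimal numpy sketch follows; the labels and class indices are illustrative, not data from the study:

```python
import numpy as np

def macro_f1(y_true, y_pred, n_classes):
    """Unweighted (macro) average of per-class F1-scores."""
    f1s = []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return float(np.mean(f1s))

# Illustrative predictions over three hypothetical CES-related classes
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(round(macro_f1(y_true, y_pred, 3), 3))
```

Macro averaging weights every class equally, which matters here because CES categories in social media imagery are usually imbalanced.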
2022
Authors
Lopes, I; Silva, A; Coimbra, MT; Ribeiro, MD; Libânio, D; Renna, F;
Publication
44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society, EMBC 2022, Glasgow, Scotland, United Kingdom, July 11-15, 2022
Abstract
This work focuses on the detection of upper gastrointestinal (GI) landmarks, important anatomical areas of the upper GI tract that should be photodocumented during endoscopy to guarantee a complete examination. The aim of this work was to test new automatic algorithms, specifically based on convolutional neural network (CNN) systems, able to detect upper GI landmarks and thereby help avoid blind spots during esophagogastroduodenoscopy. We tested pre-trained CNN architectures, such as ResNet-50 and VGG-16, in conjunction with different training approaches, including the use of class weights, batch normalization, dropout, and data augmentation. The ResNet-50 model trained with class weights was the best performing CNN, achieving an accuracy of 71.79% and a Matthews Correlation Coefficient (MCC) of 65.06%. The combination of supervised and unsupervised learning was also explored to increase classification performance. In particular, convolutional autoencoder architectures trained with unlabeled GI images were used to extract representative features. Such features were then concatenated with those extracted by the pre-trained ResNet-50 architecture. This approach achieved a classification accuracy of 72.45% and an MCC of 65.08%. Clinical relevance - Esophagogastroduodenoscopy (EGD) photodocumentation is essential to guarantee that all areas of the upper GI system are examined, avoiding blind spots. This work aims to support the monitoring of EGD photodocumentation by testing new CNN-based systems able to detect EGD landmarks.
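Class weighting, mentioned in this abstract as the best-performing training strategy, is commonly implemented as inverse-frequency weights applied to the loss. A minimal numpy sketch, assuming the common "balanced" weighting convention (the paper's exact recipe is not reproduced here):

```python
import numpy as np

def balanced_class_weights(labels, n_classes):
    """Inverse-frequency weights: n_samples / (n_classes * count_c).
    Rarer landmark classes receive proportionally larger loss weights."""
    counts = np.bincount(labels, minlength=n_classes)
    return len(labels) / (n_classes * counts.astype(float))

# Illustrative imbalanced landmark label distribution (hypothetical counts)
labels = np.array([0] * 60 + [1] * 30 + [2] * 10)
weights = balanced_class_weights(labels, 3)
print(np.round(weights, 3))  # rarer classes get larger weights
```

In frameworks such as Keras or PyTorch, these weights would be passed to the loss function (e.g., as per-class weights in a cross-entropy loss) so that under-represented landmarks contribute as much to training as frequent ones.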
2023
Authors
Elola, A; Aramendi, E; Oliveira, J; Renna, F; Coimbra, MT; Reyna, MA; Sameni, R; Clifford, GD; Rad, AB;
Publication
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
Abstract
Objective: Murmurs are abnormal heart sounds, identified by experts through cardiac auscultation. The murmur grade, a quantitative measure of the murmur intensity, is strongly correlated with the patient's clinical condition. This work aims to estimate each patient's murmur grade (i.e., absent, soft, loud) from multiple auscultation location phonocardiograms (PCGs) of a large population of pediatric patients from a low-resource rural area. Methods: The Mel spectrogram representation of each PCG recording is given to an ensemble of 15 convolutional residual neural networks with channel-wise attention mechanisms to classify each PCG recording. The final murmur grade for each patient is derived based on the proposed decision rule and considering all estimated labels for available recordings. The proposed method is cross-validated on a dataset consisting of 3456 PCG recordings from 1007 patients using a stratified ten-fold cross-validation. Additionally, the method was tested on a hidden test set comprised of 1538 PCG recordings from 442 patients. Results: The overall cross-validation performances for patient-level murmur gradings are 86.3% and 81.6% in terms of the unweighted average of sensitivities and F1-scores, respectively. The sensitivities (and F1-scores) for absent, soft, and loud murmurs are 90.7% (93.6%), 75.8% (66.8%), and 92.3% (84.2%), respectively. On the test set, the algorithm achieves an unweighted average of sensitivities of 80.4% and an F1-score of 75.8%. Conclusions: This study provides a potential approach for algorithmic pre-screening in low-resource settings with relatively high expert screening costs. Significance: The proposed method represents a significant step beyond detection of murmurs, providing characterization of intensity, which may provide an enhanced classification of clinical outcomes.
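The abstract derives a single patient-level grade from per-recording predictions via a decision rule whose details are not given here. One plausible rule, shown purely as a hypothetical sketch, is "most severe grade across auscultation locations wins":

```python
# Hypothetical patient-level aggregation; the paper's actual decision
# rule is not reproduced in the abstract, so this "most severe grade
# wins" rule is only one plausible way to merge recording-level labels.
GRADE_ORDER = {"absent": 0, "soft": 1, "loud": 2}

def patient_grade(recording_grades):
    """Return the most severe murmur grade across a patient's
    auscultation-location PCG recordings."""
    return max(recording_grades, key=lambda g: GRADE_ORDER[g])

print(patient_grade(["absent", "soft", "absent"]))  # -> soft
print(patient_grade(["soft", "loud", "absent"]))    # -> loud
```

A severity-maximizing rule is conservative for screening: a murmur audible at any single location is enough to flag the patient.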
2022
Authors
Reyna, MA; Kiarashi, Y; Elola, A; Oliveira, J; Renna, F; Gu, A; Perez Alday, EA; Sadr, N; Sharma, A; Silva Mattos, Sd; Coimbra, MT; Sameni, R; Rad, AB; Clifford, GD;
Publication
Computing in Cardiology, CinC 2022, Tampere, Finland, September 4-7, 2022
Abstract
The George B. Moody PhysioNet Challenge 2022 explored the detection of abnormal heart function from phonocardiogram (PCG) recordings. Although ultrasound imaging is becoming more common for investigating heart defects, the PCG still has the potential to assist with rapid and low-cost screening, and the automated annotation of PCG recordings has the potential to further improve access. Therefore, for this Challenge, we asked participants to design working, open-source algorithms that use PCG recordings to identify heart murmurs and clinical outcomes. This Challenge introduces several innovations. First, we sourced 5272 PCG recordings from 1568 patients in Brazil, providing high-quality data for an underrepresented population. Second, we required the Challenge teams to submit working code for training and running their models, improving the reproducibility and reusability of the algorithms. Third, we devised a cost-based evaluation metric that reflects the costs of screening, treatment, and diagnostic errors, facilitating the development of more clinically relevant algorithms. A total of 87 teams submitted 779 algorithms during the Challenge. These algorithms represent a diversity of approaches from both academia and industry for detecting abnormal cardiac function from PCG recordings.
2022
Authors
Baeza, R; Santos, C; Nunes, F; Mancio, J; Carvalho, RF; Coimbra, MT; Renna, F; Pedrosa, J;
Publication
Wireless Mobile Communication and Healthcare - 11th EAI International Conference, MobiHealth 2022, Virtual Event, November 30 - December 2, 2022, Proceedings
Abstract
The pericardium is a thin membrane sac that covers the heart. As such, the segmentation of the pericardium in computed tomography (CT) can have several clinical applications, namely as a preprocessing step for the extraction of different clinical parameters. However, manual segmentation of the pericardium can be challenging, time-consuming, and subject to observer variability, which has motivated the development of automatic pericardial segmentation methods. In this study, a method to automatically segment the pericardium in CT using a U-Net framework is proposed. Two datasets were used: the publicly available Cardiac Fat dataset and a private dataset acquired at the hospital centre of Vila Nova de Gaia e Espinho (CHVNGE). The Cardiac Fat database was used for training with two different input sizes - 512 × 512 and 256 × 256. Superior performance was obtained with the 256 × 256 image size, with a mean Dice similarity coefficient (DSC) of 0.871 ± 0.01 and 0.807 ± 0.06 on the Cardiac Fat test set and the CHVNGE dataset, respectively. Results show that reasonable performance can be achieved with a small number of patients for training and an off-the-shelf framework, with only a small decrease in performance on an external dataset. Nevertheless, additional data will increase the robustness of this approach for difficult cases, and future approaches must focus on the integration of 3D information for a more accurate segmentation of the lower pericardium.
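The Dice similarity coefficient used to evaluate the segmentations above is computed between a predicted and a reference binary mask. A minimal numpy sketch of the standard formula (not the authors' implementation; the example masks are illustrative):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|). The small eps avoids
    division by zero when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Illustrative 4x4 masks: prediction overlaps 4 of the target's 6 pixels
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1     # 4 pixels
target = np.zeros((4, 4), dtype=int); target[1:3, 1:4] = 1  # 6 pixels
print(round(dice_coefficient(pred, target), 3))  # 2*4 / (4+6) = 0.8
```

A soft, differentiable variant of this coefficient (replacing the binary counts with sums over predicted probabilities) is also commonly used directly as the training loss in U-Net segmentation pipelines.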