2024
Authors
Abay, SG; Lima, F; Geurts, L; Camara, J; Pedrosa, J; Cunha, A;
Publication
Procedia Computer Science
Abstract
Low-cost smartphone-compatible portable ophthalmoscopes can capture images of the patient's retina to screen for several ophthalmological diseases, such as glaucoma. The captured images have lower quality and resolution than those from standard retinography devices, but are sufficient for glaucoma screening. Short videos are captured to improve the chance of inspecting the eye properly; however, those videos may not always have sufficient quality for glaucoma screening, in which case the patient must repeat the examination later. In this paper, a method for automatic assessment of the quality of videos captured using the D-Eye lens is proposed and evaluated on a private dataset of 539 videos. Based on two methods developed for retina localization in the images/frames, the Circle Hough Transform method with a precision of 78.12% and the YOLOv7 method with a precision of 99.78%, the quality assessment method automatically decides on the quality of a video by measuring the number of good-quality frames it contains, according to a chosen threshold. © 2024 Elsevier B.V. All rights reserved.
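The thresholding decision described above can be illustrated with a short sketch: run a per-frame retina detector and accept the video when the fraction of good-quality frames clears the threshold. The sketch below uses OpenCV's Circle Hough Transform as the detector; the blur settings, Hough parameters, and the 0.5 threshold are illustrative assumptions, not the paper's actual values.

import cv2

def frame_has_retina(frame):
    """Return True if a circular retina region is found in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=100, param1=100, param2=30,
                               minRadius=30, maxRadius=200)
    return circles is not None

def video_is_good(path, threshold=0.5):
    """Accept the video if the fraction of good-quality frames meets the threshold."""
    cap = cv2.VideoCapture(path)
    good, total = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        total += 1
        good += frame_has_retina(frame)  # bool counts as 0/1
    cap.release()
    return total > 0 and good / total >= threshold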
2024
Authors
Couto, D; Davies, S; Sousa, J; Cunha, A;
Publication
Procedia Computer Science
Abstract
Interferometric Synthetic Aperture Radar (InSAR) revolutionizes surface studies by measuring precise ground surface changes. Phase unwrapping, a key challenge in InSAR, involves removing the ambiguity in the measured phase. Deep learning algorithms such as Generative Adversarial Networks (GANs) offer a potential solution for simplifying the unwrapping process. This work evaluates GANs for InSAR phase unwrapping, replacing SNAPHU in the processing chain. GANs achieve significantly faster processing times (2.38 interferograms per minute compared to SNAPHU's 0.78 interferograms per minute) with minimal quality degradation. A comparison of SBAS results shows that approximately 84% of the GAN-derived points are within 3 millimeters of the SNAPHU results. These results represent a significant advancement in phase unwrapping methods. While this experiment does not declare a definitive winner, it demonstrates that GANs are a viable alternative in certain scenarios and may replace SNAPHU as the preferred unwrapping method. © 2024 The Author(s). Published by Elsevier B.V.
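For readers unfamiliar with the problem, the toy sketch below shows the unwrapping relation on a simulated 1-D profile: the interferometric phase is only measured modulo 2π, and unwrapping restores the integer-cycle offsets. This is purely illustrative NumPy code; SNAPHU and the GAN tackle the much harder 2-D case on real interferograms.

import numpy as np

true_phase = np.linspace(0, 6 * np.pi, 200)  # simulated smooth deformation signal
wrapped = np.angle(np.exp(1j * true_phase))  # what InSAR actually measures, in (-pi, pi]

# Classical 1-D unwrapping (numpy.unwrap) adds the right multiple of 2*pi
# at each jump; the 2-D problem solved by SNAPHU or the GAN is far harder.
unwrapped = np.unwrap(wrapped)
assert np.allclose(unwrapped, true_phase, atol=1e-6)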
2024
Authors
Teixeira, I; Sousa, J; Cunha, A;
Publication
Procedia Computer Science
Abstract
Port wine plays a crucial role in the Douro region in Portugal, providing significant economic support and international recognition. The efficient and sustainable management of the wine sector is of utmost importance, which includes the verification of abandoned vineyard plots in the region, covering an area of approximately 250,000 hectares. The manual analysis of aerial images for this purpose is a laborious and resource-intensive task; however, several artificial intelligence (AI) methods are available to assist in this process. This paper presents the development of AI models, specifically deep learning models, for the automatic detection of abandoned vineyards using aerial images. A private image database was expanded into a larger collection of images of both abandoned and non-abandoned vineyards. Multiple AI algorithms, including Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), were explored for classification. The results demonstrated the effectiveness of automatic detection, particularly with the ViT approach, whose models achieved an accuracy of 99.37% and an F1-score of 98.92%. The proposed AI models provide valuable tools for monitoring and decision-making related to vineyard abandonment. © 2024 The Author(s). Published by Elsevier B.V.
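As a rough illustration of this classification setup, the sketch below fine-tunes a torchvision ViT-B/16 backbone for the two classes (abandoned vs. non-abandoned plots). The backbone choice, learning rate, and head replacement are assumptions for illustration; the paper's exact models and hyperparameters are not reproduced here.

import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Start from ImageNet weights and replace the classification head
# for 2 classes (abandoned vs. non-abandoned vineyard plot).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step on a batch of 224x224 aerial image crops."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()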
2024
Authors
Fernandes, R; Pessoa, A; Nogueira, J; Paiva, A; Pacal, I; Salgado, M; Cunha, A;
Publication
Procedia Computer Science
Abstract
Wireless capsule endoscopy (WCE) has revolutionized the field of gastrointestinal examinations, with the Medtronic™ capsule being one of the most used in clinics. In those WCE videos, medical experts use the RAPID READER™ tool to annotate findings. However, the frame annotations are not available in an open format and, when exported, they have different resolutions and annotated artefacts that make their localization in the original videos difficult. This hinders the use of WCE medical experts' annotations in the research of new computer-aided diagnosis (CAD) methods. In this paper, we propose a methodology to compare image similarities and evaluate it on a private Medtronic™ WCE SB3 video dataset to automatically identify the annotated frames in the videos. We used state-of-the-art pre-trained convolutional neural network (CNN) models, including MobileNet, InceptionResNetv2, ResNet50v2, VGG19, VGG16, ResNet101v2, ResNet152v2, and DenseNet121, as frame feature extractors and compared the extracted features with the Euclidean distance. We evaluated the methodology's performance on a private dataset consisting of 100 WCE videos, totalling 905 frames. The experimental results showed promising performance. The MobileNet model achieved an accuracy of 94% for identifying the first match, while the top 5, top 10, and top 20 matches were identified with accuracies of 94%, 94%, and 98%, respectively. The VGG16 and ResNet50v2 models also demonstrated strong performance, achieving accuracies ranging from 88% to 93% for various match positions. These results highlight the effectiveness of our proposed methodology in localizing target frames and even identifying similar frames, which is very useful for training data-driven models in CAD research. The code utilized in this experiment is available on GitHub† © 2024 The Author(s). Published by Elsevier B.V.
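A minimal sketch of the retrieval step, assuming the Keras pretrained MobileNet as the feature extractor: each frame is reduced to one pooled feature vector, and candidate frames are ranked by Euclidean distance to the annotated frame. Preprocessing details and input sizes are illustrative; the paper's exact pipeline may differ.

import numpy as np
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input

# Headless MobileNet with global average pooling: one vector per frame.
extractor = MobileNet(weights="imagenet", include_top=False, pooling="avg")

def features(frames):
    """frames: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(frames), verbose=0)

def top_k_matches(annotated_frame, video_frames, k=5):
    """Rank video frames by Euclidean distance to the annotated frame."""
    query = features(annotated_frame[np.newaxis])[0]
    candidates = features(video_frames)
    dists = np.linalg.norm(candidates - query, axis=1)
    return np.argsort(dists)[:k]  # indices of the k closest frames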
2023
Authors
Neto, A; Libânio, D; Ribeiro, MD; Coimbra, MT; Cunha, A;
Publication
CENTERIS 2023 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2023, Porto, Portugal, November 8-10, 2023.
Abstract
Metaplasia detection in upper gastrointestinal endoscopy is crucial to identify patients at higher risk of gastric cancer. Deep learning algorithms can be useful for detecting and localising these lesions during an endoscopy exam. However, training these types of models requires large amounts of annotated data, which can be a problem in the medical field. To overcome this, data augmentation techniques are commonly applied to increase the dataset's variability, but they need to be adapted to the specificities of the application scenario. In this study, we discuss the potential benefits and identify four key research challenges of a promising data augmentation approach, namely image combination methodologies such as CutMix, for metaplasia detection and localisation in gastric endoscopy imaging modalities. © 2024 The Author(s).
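For reference, the sketch below implements the standard CutMix operation (Yun et al., 2019) that the paper discusses: a random patch from one image is pasted into another, and the labels are mixed in proportion to the patch area. The Beta-distribution box sampling follows the original formulation; parameter values are illustrative.

import numpy as np

def cutmix(img_a, label_a, img_b, label_b, alpha=1.0, rng=np.random):
    """img_*: (H, W, C) arrays; label_*: one-hot label vectors."""
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)                       # sampled mixing ratio
    cut_h = int(h * np.sqrt(1 - lam))                  # patch size so that
    cut_w = int(w * np.sqrt(1 - lam))                  # area fraction is 1 - lam
    cy, cx = rng.randint(h), rng.randint(w)            # random patch centre
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]          # paste patch from img_b
    lam = 1 - (y2 - y1) * (x2 - x1) / (h * w)          # actual area ratio after clipping
    return mixed, lam * label_a + (1 - lam) * label_b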
2024
Authors
Carneiro, GA; Cunha, A; Sousa, J;
Publication