Publications

Publications by António Cunha

2024

Quality assessment of Low-cost retinal Videos for Glaucoma screening

Authors
Abay, SG; Lima, F; Geurts, L; Camara, J; Pedrosa, J; Cunha, A;

Publication
Procedia Computer Science

Abstract
Low-cost smartphone-compatible portable ophthalmoscopes can capture visuals of the patient's retina to screen for several ophthalmological diseases, such as glaucoma. The images captured have lower quality and resolution than those from standard retinography devices, but are sufficient for glaucoma screening. Short videos are captured to improve the chance of inspecting the eye properly; however, those videos may not always have enough quality for glaucoma screening, and the patient then needs to repeat the examination later. In this paper, a method for automatically assessing the quality of videos captured with the D-Eye lens is proposed and evaluated on a private dataset of 539 videos. Based on two methods developed for retina localization in the images/frames, the Circle Hough Transform method with a precision of 78.12% and the YOLOv7 method with a precision of 99.78%, the quality assessment method automatically decides on the quality of a video by measuring the number of good-quality frames it contains against a chosen threshold. © 2024 Elsevier B.V. All rights reserved.
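The abstract's final decision step (accept a video when it contains enough good-quality frames relative to a threshold) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score threshold and minimum good-frame fraction are assumed values, and the per-frame scores would in practice come from the retina-localization detector.

```python
def assess_video_quality(frame_scores, good_threshold=0.5, min_good_fraction=0.6):
    """Decide whether a retinal video is usable for screening.

    frame_scores: per-frame quality scores in [0, 1] (e.g. detector
    confidence that the retina is well localized in the frame).
    The video is accepted when the fraction of frames scoring at
    least good_threshold reaches min_good_fraction.
    """
    good = sum(1 for s in frame_scores if s >= good_threshold)
    return good / len(frame_scores) >= min_good_fraction

# 7 of 10 frames are good: fraction 0.7 >= 0.6, so the video is accepted
assess_video_quality([0.9] * 7 + [0.1] * 3)
```

In the paper's setting, the same counting logic would simply be driven by the YOLOv7 or Circle Hough Transform localization output per frame.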

2024

Phase Unwrapping using ML methods

Authors
Couto, D; Davies, S; Sousa, J; Cunha, A;

Publication
Procedia Computer Science

Abstract
Interferometric Synthetic Aperture Radar (InSAR) revolutionizes surface studies by precisely measuring ground surface changes. Phase unwrapping, a key challenge in InSAR, involves resolving the 2π ambiguity in the measured phase. Deep learning algorithms such as Generative Adversarial Networks (GANs) offer a potential way to simplify the unwrapping process. This work evaluates GANs for InSAR phase unwrapping, replacing SNAPHU with GANs. GANs achieve significantly faster processing times (2.38 interferograms per minute compared to SNAPHU's 0.78) with minimal quality degradation. A comparison of SBAS results shows that approximately 84% of the GAN-derived points are within 3 millimetres of the SNAPHU results. These results represent a significant advancement in phase unwrapping methods. While this experiment does not declare a definitive winner, it demonstrates that GANs are a viable alternative in certain scenarios and may replace SNAPHU as the preferred unwrapping method. © 2024 The Author(s). Published by Elsevier B.V.
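The 2π ambiguity that phase unwrapping removes can be illustrated with the classic 1D Itoh algorithm: wherever the jump between consecutive wrapped samples exceeds π, a multiple of 2π is restored. This is only a didactic sketch of the problem the paper addresses; the paper itself tackles the much harder 2D case with GANs in place of SNAPHU.

```python
import math

def unwrap_1d(wrapped):
    """Itoh's 1D phase unwrapping: re-wrap each first difference into
    (-pi, pi] and accumulate, restoring the continuous phase up to a
    constant offset."""
    out = [wrapped[0]]
    for prev, cur in zip(wrapped, wrapped[1:]):
        delta = cur - prev
        # Wrap the difference back into (-pi, pi]
        delta = (delta + math.pi) % (2 * math.pi) - math.pi
        out.append(out[-1] + delta)
    return out

# A linear phase ramp, wrapped into (-pi, pi], is recovered exactly
true_phase = [0.1 * i for i in range(100)]
wrapped = [(p + math.pi) % (2 * math.pi) - math.pi for p in true_phase]
recovered = unwrap_1d(wrapped)
```

In 2D interferograms the residues make this simple accumulation fail, which is why global optimizers like SNAPHU, or learned models as evaluated here, are needed.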

2024

Automatic classification of abandonment in Douro's vineyard parcels

Authors
Teixeira, I; Sousa, J; Cunha, A;

Publication
Procedia Computer Science

Abstract
Port wine plays a crucial role in the Douro region in Portugal, providing significant economic support and international recognition. The efficient and sustainable management of the wine sector is of utmost importance, which includes the verification of abandoned vineyard plots in the region, covering an area of approximately 250,000 hectares. The manual analysis of aerial images for this purpose is a laborious and resource-intensive task. However, several artificial intelligence (AI) methods are available to assist in this process. This paper presents the development of AI models, specifically deep learning models, for the automatic detection of abandoned vineyards using aerial images. A private image database was expanded, containing a larger collection of images with both abandoned and non-abandoned vineyards. Multiple AI algorithms, including Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs), were explored for classification. The results demonstrated the effectiveness of automatic detection, particularly with the ViT approach, which achieved an accuracy of 99.37% and an F1-score of 98.92%. The proposed AI models provide valuable tools for monitoring and decision-making related to vineyard abandonment. © 2024 The Author(s). Published by Elsevier B.V.
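The abstract reports both accuracy and F1-score, which weigh errors differently on an imbalanced task like abandonment detection. A minimal sketch of how these two metrics are computed for a binary classifier (positive class = abandoned parcel; the labels and data here are illustrative, not from the paper):

```python
def accuracy_and_f1(y_true, y_pred, positive=1):
    """Binary accuracy and F1-score for the given positive class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    acc = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1
```

Reporting F1 alongside accuracy matters here because abandoned parcels are likely the minority class, where accuracy alone can be misleading.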

2024

Evaluation of Deep Learning Models in Search by Example using Capsule Endoscopy Images

Authors
Fernandes, R; Pessoa, A; Nogueira, J; Paiva, A; Pacal, I; Salgado, M; Cunha, A;

Publication
Procedia Computer Science

Abstract
Wireless capsule endoscopy (WCE) has revolutionized the field of gastrointestinal examinations, with MedtronicTM WCE being one of the most used in clinics. In those WCE videos, medical experts use the RAPID READERTM tool to annotate findings. However, the frame annotations are not available in an open format and, when exported, they have different resolutions and some annotated artefacts that make their localization in the original videos difficult. This hinders the use of WCE medical experts' annotations in the research of new computer-aided diagnostic (CAD) methods. In this paper, we propose a methodology to compare image similarities and evaluate it on a private MedtronicTM WCE SB3 video dataset to automatically identify the annotated frames in the videos. We used state-of-the-art pre-trained convolutional neural network (CNN) models, including MobileNet, InceptionResNetv2, ResNet50v2, VGG19, VGG16, ResNet101v2, ResNet152v2, and DenseNet121, as frame feature extractors and compared the extracted features using Euclidean distance. We evaluated the methodology's performance on a private dataset consisting of 100 WCE videos, totalling 905 frames. The experimental results showed promising performance. The MobileNet model achieved an accuracy of 94% for identifying the first match, while the top 5, top 10, and top 20 matches were identified with accuracies of 94%, 94%, and 98%, respectively. The VGG16 and ResNet50v2 models also demonstrated strong performance, achieving accuracies ranging from 88% to 93% for various match positions. These results highlight the effectiveness of our proposed methodology in localizing target frames and even identifying similar frames, which is very useful for training data-driven models in CAD research. The code utilized in this experiment is available on the Github† © 2024 The Author(s). Published by Elsevier B.V.
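The core retrieval step described above, ranking video frames by Euclidean distance between feature vectors, can be sketched in a few lines. This is a simplified illustration: in the paper the feature vectors come from pre-trained CNN backbones such as MobileNet, whereas here they are plain lists of numbers.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def top_k_matches(query_feat, video_feats, k=5):
    """Rank video frames by Euclidean distance to the query frame's
    feature vector and return the indices of the k closest frames."""
    ranked = sorted(range(len(video_feats)),
                    key=lambda i: euclidean(query_feat, video_feats[i]))
    return ranked[:k]
```

With real CNN embeddings, `video_feats` would hold one vector per video frame and `query_feat` the embedding of the exported annotated image; the top-k indices are the candidate locations of that annotation in the original video.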

2023

Research Challenges for Augmenting Endoscopy Image Datasets using Image Combination Methodologies

Authors
Neto, A; Libânio, D; Ribeiro, MD; Coimbra, MT; Cunha, A;

Publication
CENTERIS 2023 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2023, Porto, Portugal, November 8-10, 2023.

Abstract
Metaplasia detection in upper gastrointestinal endoscopy is crucial to identify patients at higher risk of gastric cancer. Deep learning algorithms can be useful for detecting and localising these lesions during an endoscopy exam. However, to train these types of models, a lot of annotated data is needed, which can be a problem in the medical field. To overcome this, data augmentation techniques are commonly applied to increase the dataset's variability but need to be adapted to the specificities of the application scenario. In this study, we discuss the potential benefits and identify four key research challenges of a promising data augmentation approach, namely image combination methodologies, such as CutMix, for metaplasia detection and localisation in gastric endoscopy imaging modalities. © 2024 The Author(s).
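The image combination methodology the study discusses, CutMix, pastes a rectangular region from one training image into another and mixes the labels in proportion to the pasted area. A minimal sketch on single-channel images represented as lists of rows (the original CutMix samples the mixing ratio from a Beta distribution; this simplified version just samples a random rectangle):

```python
import random

def cutmix(img_a, img_b, label_a, label_b, rng=None):
    """CutMix for 2D single-channel images of equal size: paste a random
    rectangle from img_b into a copy of img_a and weight the labels by
    the fraction of area each source contributes."""
    rng = rng or random.Random(0)
    h, w = len(img_a), len(img_a[0])
    rh, rw = rng.randint(1, h), rng.randint(1, w)      # rectangle size
    top, left = rng.randint(0, h - rh), rng.randint(0, w - rw)
    mixed = [row[:] for row in img_a]
    for r in range(top, top + rh):
        mixed[r][left:left + rw] = img_b[r][left:left + rw]
    lam = 1 - (rh * rw) / (h * w)                      # fraction of img_a kept
    return mixed, {label_a: lam, label_b: 1 - lam}
```

The research challenges the paper identifies stem from exactly this operation: a pasted rectangle can cut through a metaplasia lesion or land on clinically irrelevant tissue, so the mixed label may no longer describe the resulting endoscopy image.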

2024

Deep Learning and Machine Learning for Automatic Grapevine Varieties Identification: A Brief Review

Authors
Carneiro, GA; Cunha, A; Sousa, J;

Publication

Abstract
The Eurasian grapevine (Vitis vinifera L.) is the most widely grown horticultural crop in the world and is important for the economy of many countries. In the wine production chain, grape varieties play an important role as they directly influence the authenticity and classification of the product. Identifying the different grape varieties is therefore fundamental for quality control and inspection activities, as well as for regulating production. Currently, ampelography and molecular analysis are the main approaches to identifying grape varieties. However, both methods have limitations. Ampelography is subjective and prone to errors and is experiencing enormous difficulties as ampelographers are increasingly scarce. On the other hand, molecular analyses are very demanding in terms of cost and time. In this scenario, Deep Learning (DL) and Machine Learning (ML) methods have emerged as a classification alternative to deal with the scarcity of ampelographers and avoid molecular analyses. In this study, the most recent methods for identifying grapevine varieties using DL classification-based approaches are presented through a systematic literature review. The classification pipelines of the 31 studies found in the literature are described, highlighting their pros and cons. Most of the studies used DL-based models trained with leaf images acquired in a controlled environment at a maximum distance of 1.2 metres to classify grape varieties. In addition, there is a large gap between practical applications and the datasets used: many varieties are missing, little data is acquired in the field, and plants under adverse conditions are rarely tested. Potential directions for improving this area of research are also presented.
