
Publications by Ana Cláudia Teixeira

2021

Optic disc and cup segmentations for glaucoma assessment using cup-to-disc ratio

Authors
Neto, A; Camera, J; Oliveira, S; Cláudia, A; Cunha, A;

Publication
Procedia Computer Science

Abstract
Glaucoma is a silent disease that only shows symptoms when severe, leading to partial vision loss or irreversible blindness. Early screening makes it possible to treat patients in time. Retinal images are very important for glaucoma screening, since they enable the observation of initial glaucoma lesions, which typically begin with cupping of the optic disc (OD). In clinical settings, practical indicators such as the Cup-to-Disc Ratio (CDR) are frequently used to evaluate the presence and stage of glaucoma. The ratio between the cup and the optic disc can be measured using the vertical diameter, the horizontal diameter, or the area of the two structures. Mass screening programmes are limited by the high cost of specialised teams and equipment. Current deep learning (DL) methods can assist glaucoma mass screening, lower its cost, and allow it to be extended to larger populations. With DL methods for OD and optic cup (OC) segmentation, it is possible to evaluate the presence of glaucoma more quickly, based on cupping of the OD measured through the CDR. In this work, the contribution of multi-class and single-class segmentation methods to glaucoma screening is assessed using the three types of CDR. A U-Net architecture is trained using transfer-learning backbones (Inception V3 and Inception ResNet V2) to segment the OD and OC, and glaucoma prediction is then evaluated based on the different CDR indicators. The models were trained and evaluated on well-known public databases (REFUGE, RIM-ONE r3 and DRISHTI-GS). The segmentation of both OD and OC reaches Dice scores above 0.8 and IoU above 0.7. The CDRs were computed for glaucoma assessment, reaching sensitivity above 0.8, specificity of 0.7, F1-Score around 0.7 and AUC above 0.85. Finally, we conclude that the segmentation methods show adequate performance to be used in practical glaucoma screening.
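As a rough illustration of the indicators described above, the sketch below computes the vertical, horizontal, and area CDR from binary OD and OC segmentation masks; the NumPy mask representation and the 0.6 screening threshold are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def cdr_indicators(od_mask: np.ndarray, oc_mask: np.ndarray) -> dict:
    """Vertical, horizontal, and area cup-to-disc ratios from binary masks."""
    def extent(mask: np.ndarray, axis: int) -> int:
        # Span of the structure along one image axis, in pixels.
        idx = np.flatnonzero(mask.any(axis=axis))
        return int(idx[-1] - idx[0] + 1) if idx.size else 0

    return {
        "vertical": extent(oc_mask, axis=1) / max(extent(od_mask, axis=1), 1),
        "horizontal": extent(oc_mask, axis=0) / max(extent(od_mask, axis=0), 1),
        "area": oc_mask.sum() / max(od_mask.sum(), 1),
    }

def is_glaucoma_suspect(od_mask: np.ndarray, oc_mask: np.ndarray,
                        threshold: float = 0.6) -> bool:
    # Illustrative rule: flag the eye when the vertical CDR exceeds a threshold
    # (the 0.6 value is an assumption, not the paper's operating point).
    return cdr_indicators(od_mask, oc_mask)["vertical"] > threshold
```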

2022

Using deep learning for detection and classification of insects on traps

Authors
Teixeira, AC; Ribeiro, J; Neto, A; Morais, R; Sousa, JJ; Cunha, A;

Publication
2022 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2022)

Abstract
Insect pests are the main cause of loss of productivity and quality in crops worldwide. Insect monitoring is therefore necessary for the early detection of pests, avoiding the excessive use of pesticides. Automatic detection of insects attracted to traps is one form of monitoring. Modern data-driven methods achieve great results for object detection when representative datasets are available, but public datasets for insect detection are few and small. The public Pest24 dataset is extensive but noisy, resulting in a poor detection rate. In this work, we aim to improve insect detection on the Pest24 dataset. We propose creating three sub-datasets by selecting, respectively, the most represented classes, the classes with the highest colour discrepancy, and those with the highest relative scale. Several Faster R-CNN and YOLOv5 architectures are explored, and the best results are achieved with YOLOv5, with an mAP of 95.5%.
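As a rough sketch of how such a class-filtered sub-dataset could be built from YOLO-format annotations, the snippet below keeps only a chosen set of class ids; the paths and the class-id list are hypothetical and not taken from the paper.

```python
from pathlib import Path

KEEP_CLASSES = {0, 3, 7, 12}              # assumed ids of the best-represented classes
SRC_LABELS = Path("pest24/labels")        # YOLO format: one "class x y w h" row per object
DST_LABELS = Path("pest24_subset/labels")
DST_LABELS.mkdir(parents=True, exist_ok=True)

for label_file in SRC_LABELS.glob("*.txt"):
    rows = [line for line in label_file.read_text().splitlines()
            if line.strip() and int(line.split()[0]) in KEEP_CLASSES]
    if rows:  # keep the image only if at least one annotation survives the filter
        (DST_LABELS / label_file.name).write_text("\n".join(rows) + "\n")
```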

2022

A deep learning approach for automatic counting of bedbugs and grape moth

Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;

Publication
CENTERIS 2022 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2022, Hybrid Event / Lisbon, Portugal, November 9-11, 2022.

Abstract

2023

A Systematic Review on Automatic Insect Detection Using Deep Learning

Authors
Teixeira, AC; Ribeiro, J; Morais, R; Sousa, JJ; Cunha, A;

Publication
Agriculture (Basel)

Abstract
Globally, insect pests are the primary reason for reduced crop yield and quality. Although pesticides are commonly used to control and eliminate these pests, they can have adverse effects on the environment, human health, and natural resources. As an alternative, integrated pest management has been devised to improve insect pest control, decrease the excessive use of pesticides, and enhance the output and quality of crops. With the improvements in artificial intelligence technologies, several applications have emerged in the agricultural context, including automatic detection, monitoring, and identification of insects. The purpose of this article is to outline the leading techniques for the automated detection of insects, highlighting the most successful approaches and methodologies while also drawing attention to the remaining challenges and gaps in this area. The aim is to furnish the reader with an overview of the major developments in this field. This study analysed 92 studies published between 2016 and 2022 on the automatic detection of insects in traps using deep learning techniques. The search was conducted on six electronic databases, and 36 articles met the inclusion criteria. The inclusion criteria were studies that applied deep learning techniques for insect classification, counting, and detection, written in English. The selection process involved analysing the title, keywords, and abstract of each study, resulting in the exclusion of 33 articles. The remaining 36 articles included 12 for the classification task and 24 for the detection task. Two main approaches to insect detection, standard and adaptable, were identified, with various architectures and detectors. The accuracy of classification was found to be most influenced by dataset size, while detection was significantly affected by the number of classes and dataset size. The study also highlights two groups of challenges and recommendations, namely dataset characteristics (such as unbalanced classes and incomplete annotation) and methodologies (such as the limitations of algorithms for small objects and the lack of information about small insects). To overcome these challenges, further research is recommended to improve insect pest management practices. This research should focus on addressing the limitations and challenges identified in this article to ensure more effective insect pest management.

2022

Using deep learning for automatic detection of insects in traps

Authors
Teixeira, AC; Morais, R; Sousa, JJ; Peres, E; Cunha, A;

Publication
CENTERIS 2022 - International Conference on ENTERprise Information Systems / ProjMAN - International Conference on Project MANagement / HCist - International Conference on Health and Social Care Information Systems and Technologies 2022, Hybrid Event / Lisbon, Portugal, November 9-11, 2022.

Abstract

2023

Segmentation as a Pre-processing for Automatic Grape Moths Detection

Authors
Teixeira, AC; Carneiro, GA; Morais, R; Sousa, JJ; Cunha, A;

Publication
Progress in Artificial Intelligence, EPIA 2023, Part II

Abstract
Grape moths are a significant pest in vineyards, causing damage and losses in wine production. Pheromone traps are used to monitor grape moth populations and determine their developmental status to make informed decisions regarding pest control. Smart pest monitoring systems that employ sensors, cameras, and artificial intelligence algorithms are becoming increasingly popular due to their ability to streamline the monitoring process. In this study, we investigate the effectiveness of using segmentation as a pre-processing step to improve the detection of grape moths in trap images using deep learning models. We train two segmentation models, the U-Net architecture with ResNet18 and InceptionV3 backbones, and use the segmented and non-segmented images in the YOLOv5s and YOLOv8s detectors to evaluate the impact of segmentation on detection. Our results show that segmentation pre-processing can significantly improve detection, by 3% for YOLOv5 and 1.2% for YOLOv8. These findings highlight the potential of segmentation pre-processing for enhancing insect detection in smart pest monitoring systems, paving the way for further exploration of different training methods.
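A minimal sketch of the general masking idea, assuming a trained segmentation model that returns a binary insect/background mask (the segment helper and paths are placeholders, not the authors' code): the background is zeroed out before the image is handed to the detector.

```python
import cv2
import numpy as np

def segment(image: np.ndarray) -> np.ndarray:
    """Placeholder for a U-Net forward pass returning a 0/1 mask of insect pixels."""
    raise NotImplementedError  # a trained U-Net (e.g. ResNet18 backbone) would be called here

def mask_background(image_path: str) -> np.ndarray:
    image = cv2.imread(image_path)
    mask = segment(image)                               # same height/width as the image
    return image * mask[..., None].astype(image.dtype)  # zero out non-insect pixels

# The masked image would then be passed to a detector such as YOLOv5s or YOLOv8s
# in place of the raw trap photo.
```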
