2023
Authors
Teixeira, AC; Batista, L; Carneiro, G; Cunha, A; Sousa, JJ;
Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM
Abstract
Public lighting is crucial for the safety and well-being of communities. Current inspection methods involve examining the luminaires during the day, an approach with several drawbacks, including wasted energy, delays in detecting issues, and high cost and time investment. Deep-learning-based automatic detection is an advanced alternative for identifying and locating such issues. This study uses deep learning to automatically detect burnt-out street lights, with Seville (Spain) as a case study. High-resolution nighttime imagery from the JL1-3B satellite is used to create a dataset called NLight, which is divided into three subsets: NL1, NL2, and NT. The NL1 and NL2 subsets are used to train and evaluate YOLOv5 and YOLOv7 segmentation models for instance segmentation of streets; distance outliers among the detections are then flagged to find the lights that are off. Finally, the NT subset is used to evaluate the effectiveness of the proposed methodology. YOLOv5 achieved a mask mAP of 57.7%, and the proposed methodology reached a precision of 30.8% and a recall of 28.3%. The main goal of this work is accomplished, but there remains room for future work to improve the methodology.
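A minimal sketch of one way the distance-outlier step could work, assuming point locations extracted from the segmentation output; the nearest-neighbour distances and the mean-plus-k-sigma threshold are our assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical sketch: flag unusually large gaps between detected lights
# as candidate burnt-out lamps. Point coordinates would come from the
# instance-segmentation masks; the threshold rule is an assumption.
import numpy as np
from scipy.spatial import cKDTree

def find_outlier_gaps(light_xy: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Return indices of lights whose nearest-neighbour distance is an outlier.

    light_xy: (N, 2) array of detected light coordinates in image/map space.
    k: number of standard deviations above the mean gap to flag.
    """
    tree = cKDTree(light_xy)
    # distance to the nearest other light (first neighbour is the point itself)
    dists, _ = tree.query(light_xy, k=2)
    nn = dists[:, 1]
    threshold = nn.mean() + k * nn.std()
    return np.where(nn > threshold)[0]

lights = np.random.rand(200, 2) * 1000  # stand-in for detected coordinates
print(find_outlier_gaps(lights))
```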
2023
Authors
Teixeira, AC; Carneiro, G; Filipe, V; Cunha, A; Sousa, JJ;
Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM
Abstract
Public lighting plays a very important role in society's safety and quality of life. Identifying faults in public lighting is essential for maintenance and for preserving safety. Traditionally, this task depends on human inspection during the day, which wastes both money and energy. Automatic detection with deep learning is an innovative solution for locating and identifying this kind of problem. In this study, we present a first, multi-step approach to segmenting public lighting, using Seville (Spain) as a case study. A dataset called NLight was created from a nighttime image taken by the JL1-3B satellite, and four U-Net and FPN models with different backbones were trained to segment part of NLight. The U-Net with an InceptionResNetV2 backbone performed best, correctly locating 761 of 815 lamps (93.4%). This model was then used to predict the segmentation of the remaining dataset. This study provides the locations of lamps so that patterns and possible lighting failures can be identified in the future.
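A minimal sketch of the best-performing configuration, a U-Net with an InceptionResNetV2 encoder, built here with the segmentation_models_pytorch library; the framework, ImageNet pre-training, input channels, and tile size are our assumptions, as the abstract does not specify them.

```python
# Sketch of a U-Net with an InceptionResNetV2 encoder for lamp segmentation.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="inceptionresnetv2",   # backbone reported as best-performing
    encoder_weights="imagenet",         # ImageNet pre-training (assumption)
    in_channels=3,                      # RGB nighttime tile (assumption)
    classes=1,                          # binary lamp / background mask
)

x = torch.randn(1, 3, 256, 256)         # one 256x256 tile (size is an assumption)
with torch.no_grad():
    mask_logits = model(x)              # (1, 1, 256, 256) raw logits
print(mask_logits.shape)
```

An smp.FPN model with the same encoder arguments would cover the paper's other architecture family.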
2023
Authors
Teixeira, AC; Carneiro, G; Morais, R; Sousa, JJ; Cunha, A;
Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM
Abstract
The grape moth is a common pest that affects grapevines by consuming both fruit and foliage, rendering grapes deformed and unsellable. Integrated pest management for the grape moth relies heavily on pheromone traps, which serve a crucial function by identifying and tracking adult moth populations. This information is then used to determine the most appropriate time and method for applying other control techniques. This study aims to find the best method for detecting small insects: we evaluate recent YOLO models (v5, v6, v7, and v8) for detecting and counting grape moths in insect traps. The best performance was achieved by YOLOv8, with an average precision of 92.4% and a counting error of 8.1%.
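A short sketch of trap-image counting with Ultralytics YOLOv8, the best-performing model; the weights file and image path are placeholders, and the relative-error formula is our assumption for how the counting error could be computed.

```python
# Sketch: count grape moths in a trap image with a YOLOv8 detector.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")              # would be fine-tuned on trap images
results = model.predict("trap_image.jpg", conf=0.25)

predicted = len(results[0].boxes)       # one box per detected moth
actual = 37                             # ground-truth count (placeholder)
counting_error = abs(predicted - actual) / actual
print(f"predicted={predicted}, counting error={counting_error:.1%}")
```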
2023
Authors
Carneiro, G; Teixeira, A; Cunha, A; Sousa, J;
Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM
Abstract
In this study, we evaluated the use of small pre-trained 3D Convolutional Neural Networks (CNNs) for sliding-window land use and land cover (LULC) classification. We pre-trained the small models on a dataset derived from the EuroSAT dataset and evaluated the benefits of transfer learning plus fine-tuning for four different regions using Sentinel-2 L1C imagery (bands at 10 and 20 m spatial resolution), comparing pre-trained models against models trained from scratch. The models achieved F1 scores between 0.69 and 0.80, without significant change when pre-training. However, for small datasets, pre-training improved classification by up to 3%.
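A minimal sketch of the pre-train-then-fine-tune idea with a small 3D CNN, treating the Sentinel-2 bands as the depth axis; the layer sizes, patch shape (10 bands, 15x15 pixels), class counts, and freeze-the-trunk recipe are all illustrative assumptions.

```python
# Sketch: small 3D CNN for sliding-window LULC classification, with a
# hypothetical fine-tuning step that freezes the convolutional trunk.
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                   # x: (B, 1, bands, H, W)
        f = self.features(x).flatten(1)
        return self.classifier(f)

model = Small3DCNN()
# model.load_state_dict(torch.load("eurosat_pretrained.pt"))  # hypothetical weights
for p in model.features.parameters():       # freeze trunk, retrain head only
    p.requires_grad = False
model.classifier = nn.Linear(32, 8)         # new head for an 8-class target region
out = model(torch.randn(2, 1, 10, 15, 15))  # 10 bands, 15x15 window
print(out.shape)                            # -> torch.Size([2, 8])
```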
2023
Authors
Carneiro, GA; Teixeira, A; Morais, R; Sousa, JJ; Cunha, A;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT II
Abstract
Grape varieties play an important role in the wine production chain, and their identification is crucial for controlling and regulating production. Nowadays, two techniques are widely used: ampelography and molecular analysis. However, both have problems. In this scenario, deep learning classifiers have emerged as a tool to automatically classify grape varieties. A problem with classifying images acquired in the field is that they contain much information unrelated to the target classification. In this study, we analyzed the use of segmentation before classification to remove such unrelated information. We used two grape variety identification datasets to fine-tune a pre-trained EfficientNetV2S. Our results showed that segmentation can slightly improve classification performance if only unrelated information is removed.
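A minimal sketch of the segment-then-classify idea: zero out pixels outside a foreground mask before feeding an ImageNet-pretrained EfficientNetV2-S; the mask source, class count, and input size are illustrative assumptions.

```python
# Sketch: mask out background, then classify with a fine-tuned EfficientNetV2-S.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

model = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.IMAGENET1K_V1)
n_varieties = 12                                   # placeholder class count
model.classifier[1] = nn.Linear(model.classifier[1].in_features, n_varieties)

image = torch.rand(1, 3, 384, 384)                 # in-field photo (stand-in)
mask = (torch.rand(1, 1, 384, 384) > 0.5).float()  # stand-in segmentation mask
masked = image * mask                              # remove unrelated background
logits = model(masked)
print(logits.shape)                                # -> torch.Size([1, 12])
```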
2023
Authors
Carneiro, G; Neto, A; Teixeira, A; Cunha, A; Sousa, J;
Publication
IGARSS 2023 - 2023 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM
Abstract
Grapevine variety identification is important in the wine production chain since it relates to quality, authenticity, and singularity. In this study, we addressed data augmentation approaches for identifying grape varieties from images acquired in the field. We tested static transformations, RandAugment, and CutMix. Our results showed that the best result was achieved by the static method generating 5 images per sample (F1 = 0.89), though without a significant difference compared with RandAugment generating 2 images. The worst performance was achieved by CutMix (F1 = 0.86).
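A short sketch of the three augmentation settings compared, using torchvision's transforms; the particular transforms in the static list and the class count are our assumptions, as the abstract does not enumerate them.

```python
# Sketch: static transforms vs. RandAugment vs. CutMix with torchvision.
import torch
from torchvision.transforms import v2

static_aug = v2.Compose([                 # fixed, hand-picked transforms (assumed list)
    v2.RandomHorizontalFlip(),
    v2.ColorJitter(brightness=0.2, contrast=0.2),
    v2.RandomRotation(15),
])
randaugment = v2.RandAugment()            # policy-based augmentation
cutmix = v2.CutMix(num_classes=12)        # mixes patches between batch samples

images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 12, (8,))
augmented = static_aug(images)            # per-image static augmentation
mixed_images, mixed_labels = cutmix(images, labels)  # labels become soft
print(mixed_images.shape, mixed_labels.shape)        # (8,3,224,224) (8,12)
```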