2022
Authors
Mendes, J; Peres, E; dos Santos, FN; Silva, N; Silva, R; Sousa, JJ; Cortez, I; Morais, R;
Publication
AGRICULTURE-BASEL
Abstract
Proximity sensing approaches with a wide array of sensors available for use in precision viticulture contexts can nowadays be considered both well-known and mature technologies. Still, several in-field practices performed throughout different crops rely on direct visual observation, supported by gained experience, to assess aspects of plants' phenological development, as well as indicators relating to the onset of common plagues and diseases. Aiming to mimic in-field direct observation, this paper presents VineInspector: a low-cost, self-contained and easy-to-install system, which is able to measure microclimatic parameters and also to acquire images using multiple cameras. It is built upon a stake structure, rendering it suitable for deployment across a vineyard. The approach through which distinguishable attributes are detected, classified and tallied in the periodically acquired images makes use of artificial intelligence approaches. Furthermore, it is made available through an IoT cloud-based support system. VineInspector was field-tested under real operating conditions to assess not only the robustness and the operating functionality of the hardware solution, but also the accuracy of the AI approaches. Two applications were developed to evaluate VineInspector's consistency as a viticulturist's assistant in everyday practices. One was intended to determine the size of the very first grapevine shoots, one of the required parameters of the well-known 3-10 rule to predict primary downy mildew infection. The other was developed to tally grapevine moth males captured in sex traps. Results show that VineInspector is a logical step in smart proximity monitoring, mimicking the direct visual observation of experienced viticulturists. While the latter are traditionally responsible for a set of everyday practices in the field, these are time- and resource-consuming. VineInspector proved effective in two of these practices, performing them automatically.
Therefore, it enables the continuous monitoring and assessment of a vineyard's phenological development in a more efficient manner, paving the way to more assertive and timely practices against pests and diseases.
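As an illustration of the decision logic behind the first application, the "3-10" rule for primary downy mildew infection risk is commonly stated as: air temperature of at least 10 °C, shoot length of at least 10 cm, and at least 10 mm of rainfall within 24-48 hours. The sketch below encodes that common formulation; the thresholds follow the textbook rule, not necessarily the paper's exact parameterisation.

```python
# Minimal sketch of the commonly cited "3-10" rule for primary downy
# mildew infection risk. Threshold values follow the rule's usual
# formulation (10 degC, 10 cm, 10 mm), not the paper's implementation.
def three_ten_rule(temp_c, shoot_cm, rain_mm):
    """Return True when all three risk conditions are simultaneously met."""
    return temp_c >= 10 and shoot_cm >= 10 and rain_mm >= 10

print(three_ten_rule(12.5, 11.0, 14.0))  # all conditions met -> True
print(three_ten_rule(12.5, 6.0, 14.0))   # shoots too short -> False
```

In VineInspector, the shoot-length input to such a check is what the camera-based measurement application provides automatically.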
2022
Authors
Padua, L; Antao Geraldes, AM; Sousa, JJ; Rodrigues, MA; Oliveira, V; Santos, D; Miguens, MFP; Castro, JP;
Publication
DRONES
Abstract
Efficient detection and monitoring procedures for invasive plant species are required. It is of crucial importance to deal with such plants in aquatic ecosystems, since they can affect biodiversity and, ultimately, ecosystem function and services. In this study, it is intended to detect water hyacinth (Eichhornia crassipes) using multispectral data with different spatial resolutions. For this purpose, high-resolution data (<0.1 m) acquired from an unmanned aerial vehicle (UAV) and coarse-resolution data (10 m) from Sentinel-2 MSI were used. Three areas with a high incidence of water hyacinth located in the Lower Mondego region (Portugal) were surveyed. Different classifiers were used to perform a pixel-based detection of this invasive species in both datasets. Among the different classifiers used, the best results were achieved by the random forest classifier, which stood out (overall accuracy (OA): 0.94). On the other hand, support vector machine performed worst (OA: 0.87), followed by Gaussian naive Bayes (OA: 0.88), k-nearest neighbours (OA: 0.90), and artificial neural networks (OA: 0.91). The higher spatial resolution of the UAV-based data enabled us to detect small amounts of water hyacinth, which could not be detected in Sentinel-2 data. However, despite the coarser resolution, satellite data analysis enabled us to identify water hyacinth coverage, which compared well with the UAV-based survey. Combining both datasets, even considering the different resolutions, it was possible to observe the temporal and spatial evolution of water hyacinth. This approach proved to be an effective way to assess the effects of the mitigation/control measures taken in the study areas. Thus, this approach can be applied to detect invasive species in aquatic environments and to monitor their changes over time.
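The pixel-based classification compared above can be sketched as follows with scikit-learn. The band values and labels here are synthetic stand-ins, not the UAV or Sentinel-2 rasters used in the paper; the toy labelling rule only mimics a vegetation-like spectral response.

```python
# Illustrative sketch of pixel-based random forest classification of
# multispectral pixels, in the spirit of the comparison described above.
# Data are synthetic; the paper's inputs are UAV and Sentinel-2 rasters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 1000
# Four synthetic "bands" per pixel (e.g. green, red, red-edge, NIR)
X = rng.random((n_pixels, 4))
# Toy rule: high NIR relative to red mimics dense vegetation (class 1)
y = ((X[:, 3] - X[:, 1]) > 0.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
oa = accuracy_score(y_te, clf.predict(X_te))  # overall accuracy (OA)
print(round(oa, 2))
```

With real imagery, each row of `X` would hold the band reflectances of one pixel and `y` its ground-truth class from the field survey.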
2022
Authors
Jurado, JM; Jimenez-Perez, JR; Padua, L; Feito, FR; Sousa, JJ;
Publication
COMPUTERS & GRAPHICS-UK
Abstract
Modelling of material appearance from reflectance measurements has become increasingly prevalent due to the development of novel methodologies in Computer Graphics. In the last few years, some advances have been made in measuring light-material interactions by employing goniometers/reflectometers under specific laboratory constraints. A wide range of applications benefit from data-driven appearance modelling techniques and material databases to create photorealistic scenarios and physically based simulations. However, important limitations arise from the current material scanning process, mostly related to the high diversity of existing materials in the real world, the tedious process for material scanning and the spectral characterisation behaviour. Consequently, new approaches are required both for the automatic material acquisition process and for the generation of measured material databases. In this study, a novel approach for material appearance acquisition using hyperspectral data is proposed. A dense 3D point cloud filled with spectral data was generated from the images obtained by an unmanned aerial vehicle (UAV) equipped with an RGB camera and a hyperspectral sensor. The observed hyperspectral signatures were used to recognise natural and artificial materials in the 3D point cloud according to spectral similarity. Then, a parametrisation of the Bidirectional Reflectance Distribution Function (BRDF) was carried out by sampling the BRDF space for each material. Consequently, each material is characterised by multiple samples with different incoming and outgoing angles. Finally, an analysis of BRDF sample completeness is performed considering four sunlight positions and a 16x16 resolution for each material. The results demonstrated the capability of the used technology and the effectiveness of our method to be used in applications such as spectral rendering and real-world material acquisition and classification. (C) 2021 The Authors.
Published by Elsevier Ltd.
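The BRDF tabulation described above, with its 16x16 angular resolution, can be sketched minimally as follows. A constant Lambertian BRDF (albedo divided by pi) stands in for the measured spectral data; the grid layout is illustrative, not the paper's exact parameterisation.

```python
# Hedged sketch: tabulating BRDF samples over incoming/outgoing zenith
# angles on a 16x16 grid, as in the completeness analysis described above.
# A Lambertian BRDF (albedo / pi) replaces the measured hyperspectral data.
import numpy as np

albedo = 0.5
n = 16
theta_in = np.linspace(0, np.pi / 2, n)    # incoming zenith angles
theta_out = np.linspace(0, np.pi / 2, n)   # outgoing zenith angles
# Lambertian reflectance is direction-independent: f_r = albedo / pi
brdf_table = np.full((n, n), albedo / np.pi)
```

For a measured material, each table cell would instead hold the reflectance observed for that incoming/outgoing angle pair, and completeness amounts to counting which cells received at least one sample.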
2022
Authors
Sousa, JJ; Toscano, P; Matese, A; Di Gennaro, SF; Berton, A; Gatti, M; Poni, S; Padua, L; Hruska, J; Morais, R; Peres, E;
Publication
SENSORS
Abstract
Hyperspectral aerial imagery is becoming increasingly available due to both technology evolution and a somewhat affordable price tag. However, selecting a proper UAV + hyperspectral sensor combo to use in specific contexts is still challenging and lacks proper documental support. While selecting a UAV is more straightforward, as it mostly relates to sensor compatibility, autonomy, reliability and cost, a hyperspectral sensor has much more to be considered. This note provides an assessment of two hyperspectral sensors (push-broom and snapshot) regarding practicality and suitability, within a precision viticulture context. The aim is to provide researchers, agronomists, winegrowers and UAV pilots with dependable data collection protocols and methods, enabling them to achieve faster processing techniques and helping to integrate multiple data sources. Furthermore, both the benefits and drawbacks of using each technology within a precision viticulture context are also highlighted. Hyperspectral sensors, UAVs, flight operations, and the processing methodology for each imaging type's datasets are presented through a qualitative and quantitative analysis. For this purpose, four vineyards in two countries were selected as case studies. This supports the extrapolation of both advantages and issues related with the two types of hyperspectral sensors used, in different contexts. Sensors' performance was compared through the evaluation of field operations complexity, processing time and qualitative accuracy of the results, namely the quality of the generated hyperspectral mosaics. The results showed an overall excellent geometrical quality, with no distortions or overlapping faults for both technologies, using the proposed mosaicking process and reconstruction. By resorting to the multi-site assessment, the qualitative and quantitative exchange of information throughout the UAV hyperspectral community is facilitated.
In addition, all the major benefits and drawbacks of each hyperspectral sensor regarding its operation and data features are identified. Lastly, the operational complexity in the context of precision agriculture is also presented.
2022
Authors
Carneiro, GA; Padua, L; Peres, E; Morais, R; Sousa, JJ; Cunha, A;
Publication
2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022)
Abstract
The grapevine variety plays an important role in the wine production chain, thus identifying it is crucial for control activities. However, the specialists responsible for identifying the different varieties, mainly through visual analysis, are disappearing. In this scenario, Deep Learning (DL) classification techniques become a possible solution to handle this professional scarcity. Nevertheless, previous experiments show that trained classification models use background information to make decisions, which should be avoided. In this paper, we present a study assessing whether removing background regions from grapevine images improves classification with DL models. The Xception model is trained with a normal dataset and its segmented version. The Local Interpretable Model-Agnostic Explanations (LIME), Grad-CAM, and Grad-CAM++ approaches are used to visualize the impact of segmentation on classification decisions. F1-scores of 0.92 and 0.94 were achieved, respectively, for the segmented-dataset and normal-dataset trained models. Although the model trained with the segmented dataset achieved worse performance, the Explainable Artificial Intelligence (XAI) approaches showed that it looks at more reliable regions when making decisions.
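The Grad-CAM visualization used above reduces to a short computation: the class-activation map is the ReLU of the gradient-weighted sum of a convolutional layer's feature maps. The sketch below implements that formula in plain numpy; the activations and gradients are random stand-ins for what a trained Xception layer would produce.

```python
# Minimal numpy sketch of the Grad-CAM computation referenced above:
# heatmap = ReLU( sum_k alpha_k * A_k ), where alpha_k is the global
# average of the class-score gradient over feature map A_k.
# Random arrays stand in for a real CNN layer's output and gradients.
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: (K, H, W) feature maps and the gradients
    of the class score w.r.t. them. Returns an (H, W) heatmap in [0, 1]."""
    weights = gradients.mean(axis=(1, 2))        # alpha_k: GAP of gradients
    cam = np.tensordot(weights, activations, 1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0)                     # ReLU keeps positive evidence
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(1)
acts = rng.random((8, 7, 7))            # 8 feature maps of size 7x7
grads = rng.standard_normal((8, 7, 7))  # gradients w.r.t. the class score
heatmap = grad_cam(acts, grads)         # upsampled over the image in practice
```

Overlaying such a heatmap on the input image is what reveals whether the model attends to the leaf or to the background.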
2022
Authors
Carneiro, GA; Padua, L; Peres, E; Morais, R; Sousa, JJ; Cunha, A;
Publication
2022 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM (IGARSS 2022)
Abstract
The grape variety plays an important role in the wine production chain, thus identifying it is crucial for production control. Ampelographers, professionals who identify grape varieties through plant visual analysis, are scarce, and molecular markers are expensive for identifying grape varieties on a large scale. In this context, Deep Learning models become an effective way to handle ampelographers' scarcity. In this work, we explore the benefit of using the deep learning vision transformer architecture relative to conventional CNNs to identify 12 grapevine varieties using leaf-centred RGB images acquired in the field. We train an Xception model as a baseline and four different configurations of the ViT_B model. The best model achieved an F1-score of 0.96, outperforming the state-of-the-art convolutional-based model on the used dataset.
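The defining first step of the vision transformer architecture compared above is patch embedding: the image is split into fixed-size patches, each flattened and linearly projected into a token. The sketch below shows that step in numpy with the standard 224x224 input and 16x16 patches; the random projection matrix stands in for the learned weights, and none of this reproduces the paper's ViT_B training setup.

```python
# Hedged sketch of the ViT patch-embedding step: split an image into
# 16x16 patches, flatten each, and linearly project it into a token.
# The projection is random here; in a trained ViT it is learned.
import numpy as np

def patchify(img, patch=16):
    """img: (H, W, C) array with H, W divisible by `patch`.
    Returns (num_patches, patch * patch * C) flattened patches."""
    H, W, C = img.shape
    grid = img.reshape(H // patch, patch, W // patch, patch, C)
    return grid.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)

rng = np.random.default_rng(0)
img = rng.random((224, 224, 3))
tokens = patchify(img)                  # (196, 768): 14x14 patches of dim 768
proj = rng.standard_normal((768, 768))  # stand-in for the learned projection
embeddings = tokens @ proj              # token embeddings fed to the ViT
```

The 196 resulting tokens (plus a class token and position embeddings, omitted here) are what the transformer's self-attention layers then operate on.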