
Publications by Joaquim João Sousa

2019

3D Surface velocity retrieval of mountain glacier using an offset tracking technique applied to ascending and descending SAR constellation data: a case study of the Yiga Glacier

Authors
Wang, Q; Fan, JH; Zhou, W; Tong, LQ; Guo, ZC; Liu, G; Yuan, WL; Sousa, JJ; Perski, Z;

Publication
INTERNATIONAL JOURNAL OF DIGITAL EARTH

Abstract
COSMO-SkyMed is a constellation of four X-band high-resolution radar satellites with a minimum revisit period of 12 hours. These satellites can obtain ascending and descending synthetic aperture radar (SAR) images over very similar periods for use in the three-dimensional (3D) inversion of glacier velocities. In this paper, based on ascending and descending COSMO-SkyMed data acquired at nearly the same time, the surface velocity of the Yiga Glacier, located in Jiali County, Tibet, China, is estimated in four directions using an offset tracking technique during the periods of 16 January to 3 February 2017 and 1 February to 19 February 2017. Through the geometric relationships between the measurements and the SAR images, the least squares method is used to retrieve the 3D components of the glacier surface velocity in the eastward, northward and upward directions. The results show that the offset tracking technique applied to COSMO-SkyMed images can be used to derive the true 3D velocity of a glacier's surface. During the two periods, the Yiga Glacier had a stable velocity, and the maximum surface velocity, 2.4 m/d, was observed in the middle portion of the glacier, which corresponds to the location of the steepest slope.
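
The 3D inversion described in this abstract combines four offset-tracking measurements (line-of-sight and azimuth offsets from the ascending and descending passes) into three velocity components via least squares. The sketch below illustrates only that step; the unit vectors and displacement values are hypothetical and are not taken from the paper.

```python
# A minimal sketch (not the authors' code) of the least-squares inversion that
# combines ascending/descending offset-tracking measurements into a 3D velocity.
import numpy as np

# Each row is the unit vector (east, north, up) along which one measurement is
# sensitive: ascending LOS, ascending azimuth, descending LOS, descending azimuth.
# The incidence/heading geometry here is hypothetical, not taken from the paper.
A = np.array([
    [-0.62,  0.11, 0.78],   # ascending line-of-sight
    [-0.11,  0.99, 0.00],   # ascending azimuth (along-track)
    [ 0.62,  0.11, 0.78],   # descending line-of-sight
    [-0.11, -0.99, 0.00],   # descending azimuth (along-track)
])

# Offset-tracking displacements measured along those four directions (m/day); dummy values.
d = np.array([0.9, -1.6, -0.8, 1.7])

# Solve A @ v = d for v = (v_east, v_north, v_up) in the least-squares sense.
v, residuals, rank, _ = np.linalg.lstsq(A, d, rcond=None)
v_east, v_north, v_up = v
print(f"east {v_east:+.2f}, north {v_north:+.2f}, up {v_up:+.2f} m/day "
      f"(magnitude {np.linalg.norm(v):.2f} m/day)")
```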

2018

Deformation monitoring of dam infrastructures via spaceborne MT-InSAR. The case of La Viñuela (Málaga, southern Spain)

Authors
Ruiz Armenteros, AM; Lazecky, M; Hlavácová, I; Bakon, M; Manuel Delgado, J; Sousa, JJ; Lamas Fernández, F; Marchamalo, M; Caro Cuenca, M; Papco, J; Perissin, D;

Publication
Procedia Computer Science

Abstract
Dams require continuous safety and monitoring programs, integrated with visual inspection and testing within dam surveillance programs. New approaches to dam monitoring focus on multi-sensor integration, taking into account emerging technologies such as GNSS, optical fiber, TLS, InSAR techniques, GBInSAR, and GPR, which can provide complementary data in dam monitoring, revealing causes of dam deformation that cannot be assessed with traditional techniques. This paper presents a Multi-temporal InSAR (MT-InSAR) monitoring of La Viñuela dam (Málaga, Spain), a 96 m high earth-fill dam built from 1982 to 1989. The presented MT-InSAR monitoring system comprises three C-band radar (~5.7 cm wavelength) datasets from the European satellites ERS-1/2 (1992-2000), Envisat (2003-2008), and Sentinel-1A/B (2014-2018). The ERS-1/2 and Envisat datasets were processed using StaMPS. In the case of Sentinel-1A/B, two different algorithms were applied, SARPROZ and ISCE-SALSIT, allowing comparison of the estimated LOS velocity patterns. The obtained results confirm that La Viñuela dam has been deforming since its construction, as expected for an earth-fill dam. Maximum deformation rates were measured in the initial period (1992-2000), at around -7 mm/yr (LOS direction) on the crest of the dam. In the period covered by the Envisat dataset (2003-2008), the average deformation rate was lower, of the order of -4 mm/yr. Sentinel-1A/B monitoring confirms that the deformation was still active in the period 2014-2018 in the central-upper part of the dam, with maximum velocities reaching -6 mm/yr. The SARPROZ and ISCE-SALSIT algorithms provide similar results. It was concluded that MT-InSAR techniques can support the development of new and more effective means of monitoring and analyzing the health of dams, complementing current dam surveillance systems. © 2018 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license.
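
The deformation rates quoted above (e.g., -7 mm/yr in the LOS direction) are linear velocities fitted to LOS displacement time series. The sketch below shows that fitting step in isolation, independent of StaMPS, SARPROZ or ISCE-SALSIT; the acquisition dates and displacement values are invented for illustration.

```python
# A minimal sketch: estimating a mean LOS deformation rate (mm/yr) from a
# displacement time series by least-squares line fitting. Values are hypothetical.
import numpy as np

# Hypothetical acquisition times (decimal years) and LOS displacements (mm).
t = np.array([1992.1, 1993.0, 1994.2, 1995.5, 1997.0, 1998.4, 1999.8])
d_los = np.array([0.0, -6.5, -14.8, -24.0, -34.9, -44.7, -54.5])

# Fit d = v * t + c; the slope v is the mean LOS velocity.
v, c = np.polyfit(t, d_los, deg=1)
print(f"mean LOS velocity: {v:.1f} mm/yr")
```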

2019

UAV-Based Automatic Detection and Monitoring of Chestnut Trees

Authors
Marques, P; Padua, L; Adao, T; Hruska, J; Peres, E; Sousa, A; Sousa, JJ;

Publication
REMOTE SENSING

Abstract
Unmanned aerial vehicles have become a popular remote sensing platform for agricultural applications, with an emphasis on crop monitoring. Although there are several methods to detect vegetation through aerial imagery, these remain dependent on the manual extraction of vegetation parameters. This article presents an automatic method that allows for individual tree detection and multi-temporal analysis, which is crucial for detecting missing and new trees and for monitoring their health conditions over time. The proposed method is based on the computation of vegetation indices (VIs), using visible (RGB) and near-infrared (NIR) band combinations together with the canopy height model. An overall segmentation accuracy above 95% was reached, even when RGB-based VIs were used. The proposed method is divided into three major steps: (1) segmentation and first clustering; (2) cluster isolation; and (3) feature extraction. This approach was applied to several chestnut plantations, and some parameters, such as the number of trees present in a plantation (accuracy above 97%), the canopy coverage (93% to 99% accuracy), the tree height (RMSE of 0.33 m and R² = 0.86), and the crown diameter (RMSE of 0.44 m and R² = 0.96), were automatically extracted. Therefore, by enabling the substitution of time-consuming and costly field campaigns, the proposed method represents a valuable contribution to managing chestnut plantations in a quicker and more sustainable way.
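
As a rough illustration of the RGB-based vegetation index plus canopy height model (CHM) masking mentioned above, the sketch below uses the Excess Green index and simple thresholds; the index choice and threshold values are assumptions for the example, not the parameters used in the article.

```python
# Illustrative only: RGB vegetation index (Excess Green) combined with a CHM mask.
import numpy as np

def segment_trees(rgb: np.ndarray, chm: np.ndarray,
                  exg_thresh: float = 0.05, height_thresh: float = 1.5) -> np.ndarray:
    """Return a boolean mask of likely tree-canopy pixels.

    rgb : (H, W, 3) array with values in [0, 1]
    chm : (H, W) canopy height model in metres
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-6                      # avoid division by zero
    # Excess Green on chromatic coordinates: ExG = 2g - r - b
    exg = 2 * (g / total) - (r / total) - (b / total)
    return (exg > exg_thresh) & (chm > height_thresh)

# Tiny synthetic example: a 4x4 patch with a "green and tall" corner.
rgb = np.full((4, 4, 3), 0.3)
rgb[:2, :2, 1] = 0.6
chm = np.zeros((4, 4))
chm[:2, :2] = 3.0
print(segment_trees(rgb, chm))
```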

2019

mySense: A comprehensive data management environment to improve precision agriculture practices

Authors
Morais, R; Silva, N; Mendes, J; Adao, T; Padua, L; Lopez Riquelme, J; Pavon Pulido, N; Sousa, JJ; Peres, E;

Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE

Abstract
Over the last few years, an extensive set of technologies has been systematically included in precision agriculture (PA) and precision viticulture (PV) practices, as tools that allow efficient monitoring of nearly any parameter to achieve sustainable crop management practices and to increase both crop yield and quality. However, many technologies and standards are not yet included in those practices. Therefore, the potential benefits that may result from putting together agronomic knowledge with electronics and computer technologies are still not fully realized. Both emergent and established paradigms, such as the Internet of Everything (IoE), the Internet of Things (IoT), cloud and fog computing, together with increasingly cheaper computing technologies - with very low power requirements and a diversity of wireless technologies available to exchange data with increased efficiency - and intelligent systems, have evolved to a level where it is virtually possible to expeditiously create and deploy any required monitoring solution. Pushed by all of these technological trends and recent developments, data integration has emerged as the layer between crops and the knowledge needed to efficiently manage them. In this paper, the mySense environment is presented, aimed at systematizing data acquisition procedures to address common PA/PV issues. mySense builds on a 4-layer technological structure: sensors and sensor nodes, crop field and sensor networks, cloud services, and support for front-end applications. It makes available a set of free tools based on the Do-It-Yourself (DIY) concept and enables the use of Arduino® and Raspberry Pi® low-cost platforms to quickly prototype a complete monitoring application. Field experiments provide compelling evidence that the mySense environment represents an important step forward towards Smart Farming by enabling the use of low-cost, fast-deployment, integrated and transparent technologies to increase the adoption of PA/PV monitoring applications.
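
As a purely generic illustration of the sensor-node-to-cloud layer in a 4-layer structure like the one described above, the sketch below posts a single JSON-encoded reading over HTTP; the endpoint, payload fields and node identifier are invented for the example and are not part of the mySense API.

```python
# Generic, hypothetical sensor-node -> cloud upload; NOT the mySense API.
import json
import time
import urllib.request

def post_reading(endpoint: str, node_id: str, sensor: str, value: float, unit: str) -> int:
    """Send one sensor reading as JSON over HTTP and return the response status."""
    payload = {
        "node": node_id,
        "sensor": sensor,
        "value": value,
        "unit": unit,
        "timestamp": time.time(),
    }
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example call (hypothetical endpoint and node):
# post_reading("https://example.org/api/readings", "vineyard-node-01",
#              "soil_moisture", 23.4, "%VWC")
```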

2019

Procedural Modeling of Buildings Composed of Arbitrarily-Shaped Floor-Plans: Background, Progress, Contributions and Challenges of a Methodology Oriented to Cultural Heritage

Authors
Adao, T; Padua, L; Marques, P; Sousa, JJ; Peres, E; Magalhaes, L;

Publication
COMPUTERS

Abstract
The production of virtual models is highly relevant in research and business fields such as architecture, archeology, and video games, whose requirements might range from expeditious virtual building generation for extensively populating computer-synthesized environments to hypothesis testing through digital reconstructions. There are some known approaches to achieve the production/reconstruction of virtual models, namely digital settlements and buildings. Manual modeling requires highly skilled manpower and a considerable amount of time to achieve the desired digital contents, in a process composed of many stages that are typically repeated over time. Both image-based and range-scanning approaches are more suitable for the digital preservation of well-conserved structures. However, they usually require trained human resources to prepare field operations and manipulate expensive equipment (e.g., 3D scanners) and advanced software tools (e.g., photogrammetric applications). To tackle the issues presented by the previous approaches, a class of cost-effective, efficient, and scarce-data-tolerant techniques/methods, known as procedural modeling, has been developed, aiming at the semi- or fully-automatic production of virtual environments composed of hollow buildings exclusively represented by outer facades or of traversable buildings with interiors, either for expeditious generation or for reconstruction. Despite the many achievements of existing procedural modeling approaches, the production of virtual buildings with both interiors and exteriors composed of non-rectangular shapes (convex or concave n-gons) at the floor-plan level is still seldom addressed. Therefore, a methodology (and respective system) capable of semi-automatically producing ontology-based traversable buildings composed of arbitrarily-shaped floor-plans has been proposed and continuously developed, and is analyzed in this paper, along with its contributions towards other virtual reality (VR) and augmented reality (AR) projects/works oriented to digital applications for cultural heritage. Recent roof-production enhancements based on the well-established straight skeleton approach are also addressed, as well as forthcoming challenges. The aim is to consolidate this procedural modeling methodology as a valuable computer graphics contribution and to discuss its future directions.
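
To make the notion of an arbitrarily-shaped (convex or concave) floor-plan concrete, the sketch below shows the most basic floor-plan-driven operation, extruding polygon edges into wall quads; it is unrelated to the authors' ontology-based system and the coordinates are illustrative.

```python
# Minimal sketch: extrude an arbitrary polygonal floor plan into vertical wall quads.
from typing import List, Tuple

Point = Tuple[float, float]
Quad = List[Tuple[float, float, float]]

def extrude_walls(floor_plan: List[Point], height: float) -> List[Quad]:
    """Turn each floor-plan edge into a vertical wall quad (4 XYZ vertices)."""
    walls = []
    n = len(floor_plan)
    for i in range(n):
        (x0, y0), (x1, y1) = floor_plan[i], floor_plan[(i + 1) % n]
        walls.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return walls

# L-shaped (concave) floor plan with 3 m high walls.
plan = [(0, 0), (6, 0), (6, 3), (3, 3), (3, 5), (0, 5)]
print(len(extrude_walls(plan, 3.0)), "wall quads")
```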

2019

Using virtual scenarios to produce machine learnable environments for wildfire detection and segmentation

Authors
Adão, T; Pinho, TM; Pádua, L; Santos, N; Sousa, A; Sousa, JJ; Peres, E;

Publication
International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives

Abstract
Today's climatic proneness to extreme conditions, together with human activity, has been triggering a series of wildfire-related events that put ecosystems at risk, as well as animal and plant heritage, while threatening dwellers in nearby rural or urban areas. By the time intervention teams (firefighters, civil protection, police) become aware of these events, they have usually already escalated to barely controllable proportions, mainly due to wind gusts, fuel-like soil conditions, and other conditions that favor fire spreading. Currently, there is a wide range of camera-equipped sensing systems that can be complemented with useful location data (for example, cameras and IMU/GPS sensors integrated in unmanned aerial systems (UAS), or stationary surveillance systems) and processing components capable of supporting wildfire detection and monitoring, thus providing accurate and faithful data for decision support. Regarding detection and monitoring in particular, Deep Learning (DL) has been successfully applied to perform classification and/or segmentation of objects of interest in several fields, such as agriculture, forestry and other similar areas. Usually, an effective DL application, more specifically one based on imagery, requires heavy and burdensome logistics to gather a dataset that represents the problem. What if dataset creation could be supported by customizable virtual environments representing faithful situations to train machines, as already happens for human training in particular tasks (rescue operations, surgeries, industrial assembly, etc.)? This work proposes not only a system to produce faithful virtual environments that complement and/or even supplant the need for dataset-gathering logistics, while also accommodating hypothetical scenarios related to climate change events, but also tools for synthesizing wildfire environments for DL application. It will therefore enable existing fire datasets to be extended with new data generated through human interaction and supervision, suitable for training a computational entity. To that end, a study is presented to assess to what extent virtually generated data can contribute to an effective DL system aiming to identify and segment fire, bearing in mind future developments of active monitoring systems to detect fire events in a timely manner and, hopefully, provide decision support to operational teams. © 2019 International Society for Photogrammetry and Remote Sensing.
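
One way to assess how much virtually generated data contributes to a fire-segmentation model is to score its predictions against real, hand-labelled masks. The sketch below computes intersection-over-union (IoU) for that purpose; it is independent of the authors' pipeline and uses dummy masks.

```python
# Minimal sketch: intersection-over-union between predicted and reference fire masks.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU of two boolean masks; returns 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return np.logical_and(pred, truth).sum() / union

# Dummy 3x3 masks standing in for model output and hand-labelled ground truth.
pred  = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
print(f"IoU = {iou(pred, truth):.2f}")
```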
