2022
Authors
Jurado Rodriguez, D; Jurado, JM; Padua, L; Neto, A; Munoz Salinas, R; Sousa, JJ;
Publication
COMPUTERS & GRAPHICS-UK
Abstract
Environment understanding in real-world scenarios has gained increasing interest in research and industry. Advances in data capture and processing allow highly detailed reconstruction from a set of multi-view images, generating meshes and point clouds. Likewise, deep learning architectures, along with the broad availability of image datasets, bring new opportunities for segmenting 3D models into several classes. Among the areas that can benefit from 3D semantic segmentation is the automotive industry. However, there is a lack of labeled 3D models that could serve as training data and ground truth for deep learning-based methods. In this work, we propose an automatic procedure for the generation and semantic segmentation of 3D cars obtained from the photogrammetric processing of UAV-based imagery. In total, sixteen car parts are identified in the point cloud. To this end, a convolutional neural network based on the U-Net architecture with an Inception V3 encoder was trained on a publicly available dataset of car parts. The trained model is then applied to the UAV-based images, and the resulting segmentations are mapped onto the photogrammetric point clouds. Starting from this preliminary image-based segmentation, an optimization method is developed to obtain a fully labeled point cloud, taking advantage of the geometric and spatial features of the 3D model. The results demonstrate the method's capabilities for the semantic segmentation of car models. Moreover, the proposed methodology has the potential to be extended or adapted to other applications that benefit from 3D segmented models.
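The mapping step described above implies that each 3D point receives one candidate label per UAV image in which it is visible, and these per-view labels must be fused into a single class. A minimal sketch of such a fusion by majority vote is shown below; the data layout and function name are illustrative assumptions, not the paper's actual implementation, which additionally exploits geometric and spatial features.

```python
from collections import Counter

def fuse_point_labels(votes_per_point):
    """Fuse per-view labels into one label per 3D point.

    votes_per_point: list with one entry per 3D point; each entry is the
    list of labels that point received across the UAV images it appears in.
    Returns the majority label for each point (ties broken by the label
    seen first, since Counter preserves insertion order).
    """
    fused = []
    for votes in votes_per_point:
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused
```

For example, a point labeled "door" in two views and "wheel" in one would be fused to "door". A refinement in the spirit of the paper's optimization step would then smooth these labels using neighboring points in the cloud.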
2022
Authors
Karacsony, T; Loesch-Biffar, AM; Vollmar, C; Remi, J; Noachtar, S; Cunha, JPS;
Publication
SCIENTIFIC REPORTS
Abstract
Seizure semiology is a well-established method to classify epileptic seizure types, but it requires significant resources, as long-term video-EEG monitoring needs to be visually analyzed. Therefore, computer vision-based diagnosis support tools are a promising approach. In this article, we utilize infrared (IR) and depth (3D) videos to show the feasibility of a novel 24/7 object- and action-recognition-based deep learning (DL) monitoring system to differentiate between epileptic seizures in frontal lobe epilepsy (FLE), temporal lobe epilepsy (TLE) and non-epileptic events. Based on the largest 3D video-EEG database in the world (115 seizures / over 680,000 video frames / 427 GB), we achieved a promising cross-subject validation F1-score of 0.833 +/- 0.061 for the 2-class (FLE vs. TLE) case and 0.763 +/- 0.083 for the 3-class (FLE vs. TLE vs. non-epileptic) case, from 2 s samples, with an automated semi-specialized depth-based (Acc. 95.65%) and Mask R-CNN-based (Acc. 96.52%) cropping pipeline to pre-process the videos, enabling a near-real-time seizure type detection and classification tool. Our results demonstrate the feasibility of our novel DL approach to support 24/7 epilepsy monitoring, outperforming all previously published methods.
2022
Authors
Morgado, L;
Publication
Video Journal of Social and Human Research
Abstract
2022
Authors
Padua, L; Duarte, L; Antao Geraldes, AM; Sousa, JJ; Castro, JP;
Publication
PLANTS-BASEL
Abstract
Monitoring invasive plant species is a crucial task to assess their presence in affected ecosystems. However, it is a laborious and complex task, as vast and often hard-to-access areas must be surveyed. Remotely sensed data can be a great contribution to such operations, especially for clearly visible and predominant species. In the scope of this study, water hyacinth (Eichhornia crassipes) was monitored in the Lower Mondego region (Portugal). For this purpose, Sentinel-2 satellite data were explored, enabling us to follow spatial patterns in three water channels from 2018 to 2021. By applying a straightforward and effective methodology, it was possible to estimate areas that could contain water hyacinth and to obtain the total surface area occupied by this invasive species. The normalized difference vegetation index (NDVI) was used for this purpose. It was verified that the area occupied by this invasive species over the study area increases exponentially from May to October. However, this increase was not observed in 2021, which could be a consequence of the adopted mitigation measures. To provide the results of this study, the methodology was applied through a semi-automatic geographic information system (GIS) application. This tool enables researchers and ecologists to apply the same approach in monitoring water hyacinth or any other invasive plant species in similar or different contexts. This methodology proved to be more effective than machine learning approaches when applied to multispectral data acquired with an unmanned aerial vehicle: a global accuracy greater than 97% was achieved using the NDVI-based approach, versus above 93% with the machine learning approach.
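The NDVI-based approach above reduces to computing (NIR - Red) / (NIR + Red) per pixel and thresholding the result. A minimal sketch is given below; the 0.4 threshold and the toy reflectance values are illustrative assumptions, not values from the study (for Sentinel-2, NIR and Red correspond to bands B8 and B4).

```python
def ndvi(nir, red):
    """Normalized difference vegetation index for one pixel's reflectances."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark/no-data pixels
    return (nir - red) / (nir + red)

def flag_vegetation(nir_band, red_band, threshold=0.4):
    """Boolean mask of pixels whose NDVI exceeds the threshold.

    nir_band, red_band: 2D lists (rows of reflectance values) of equal shape.
    """
    return [[ndvi(n, r) > threshold for n, r in zip(nir_row, red_row)]
            for nir_row, red_row in zip(nir_band, red_band)]
```

Summing the mask over a scene, scaled by the pixel area (100 m2 for 10 m Sentinel-2 pixels), yields the occupied-surface estimate tracked across months in the study.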
2022
Authors
Losada, N; Jorge, F; Teixeira, MS; Sousa, N; Melo, M; Bessa, M;
Publication
MARKETING AND SMART TECHNOLOGIES, VOL 1
Abstract
Immersive technologies, such as virtual reality, could be effective tools for destination marketing, namely in creating place attachment prior to experiencing the destination. Place attachment plays a significant role in behavioural intentions to visit and to recommend a destination. However, place attachment research is relatively new in the tourism context. This study seeks to empirically examine the effectiveness of Virtual Reality in creating place attachment to destinations by exploring changes in place attachment at two moments: first, between watching a video and having an experience in the Virtual Reality environment; second, between the Virtual Reality experience and the 'real' visit to a representative viewpoint in the Douro region. Students belonging to Gen Z were sampled. Findings indicate that Virtual Reality has potential for marketing destinations.
2022
Authors
Bonfim, C; Lacet, D; Morgado, L; Pedrosa, D;
Publication
8th International Conference of the Immersive Learning Research Network, iLRN 2022, Vienna, Austria, May 30 - June 4, 2022
Abstract
A critical factor in immersive educational narratives is students' identification with the characters. This work-in-progress analyzes the process of rendering characters from textual narratives into visual form by non-artists (i.e., instructors). We tried to match archetypes with their visual representation through three platforms: Pixton and Powtoon (both 2D) and The Sims 4 (3D). The limitations of characterization can impact students' narrative immersion. As future work, we intend to test with the target group and observe the improvements needed to increase identification and the sense of immersion in the narrative.