2020
Authors
Mukherjee, R; Melo, M; Filipe, V; Chalmers, A; Bessa, M;
Publication
IEEE ACCESS
Abstract
Convolutional Neural Network (CNN)-based object detection models have achieved unprecedented accuracy in challenging detection tasks. However, existing detection models (detection heads) trained on 8-bit/pixel/channel low dynamic range (LDR) images are unable to detect relevant objects under lighting conditions where a portion of the image is either under-exposed or over-exposed. Although this issue can be addressed by introducing High Dynamic Range (HDR) content and training existing detection heads on it, there are several major challenges, such as the lack of real-life annotated HDR datasets and the extensive computational resources required for training and hyper-parameter search. In this paper, we introduce an alternative, backwards-compatible methodology for detecting objects in challenging lighting conditions using existing CNN-based detection heads. This approach facilitates the use of HDR imaging without the immediate need for creating annotated HDR datasets and the associated expensive retraining procedure. The proposed approach uses HDR imaging to capture relevant details in high-contrast scenarios. Subsequently, the scene dynamic range and wider colour gamut are compressed using HDR-to-LDR mapping techniques such that the salient highlight, shadow, and chroma details are preserved. The mapped LDR image can then be used by existing pre-trained models to extract the features required to detect objects in both the under-exposed and over-exposed regions of a scene. In addition, we conduct an evaluation to study the feasibility of using existing HDR-to-LDR mapping techniques with existing detection heads trained on standard detection datasets such as PASCAL VOC and MSCOCO. Results show that the images obtained from the mapping techniques are suitable for object detection, and some of them significantly outperform traditional LDR images.
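As an illustration of the kind of pipeline the abstract describes, the sketch below tone-maps an HDR capture to an 8-bit LDR image with OpenCV's Drago operator and feeds it to an off-the-shelf pre-trained detector from torchvision. The file path, the choice of tone-mapping operator, and the detector are assumptions for illustration; the paper evaluates several HDR-to-LDR mapping techniques rather than this specific combination.

```python
# Hedged sketch: tone-map an HDR frame and run an existing pre-trained detector.
# The .hdr path, the Drago operator, and Faster R-CNN are illustrative choices only.
import cv2
import numpy as np
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Load a floating-point HDR radiance map (path is hypothetical).
hdr = cv2.imread("scene.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)

# Compress the dynamic range; any HDR-to-LDR operator could be substituted here.
tonemap = cv2.createTonemapDrago(gamma=2.2, saturation=1.0)
ldr = tonemap.process(hdr)                      # float32 values in [0, 1]
ldr_8bit = np.clip(ldr * 255, 0, 255).astype(np.uint8)

# Run an unmodified, LDR-trained detection head on the mapped image.
model = fasterrcnn_resnet50_fpn(pretrained=True).eval()
tensor = torch.from_numpy(cv2.cvtColor(ldr_8bit, cv2.COLOR_BGR2RGB)).permute(2, 0, 1).float() / 255.0
with torch.no_grad():
    detections = model([tensor])[0]             # boxes, labels, scores
print(detections["labels"], detections["scores"])
```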
2020
Authors
Marto, A; Melo, M; Goncalves, A; Bessa, M;
Publication
IEEE ACCESS
Abstract
Little is known about the impact of the addition of each stimulus in multisensory augmented reality experiences in cultural heritage contexts. This paper investigates the impact of different sensory conditions on a user's sense of presence, enjoyment, knowledge about the cultural site, and value of the experience. Five conditions, namely Visual, Visual + Audio, Visual + Smell, Visual + Audio + Smell, and a regular visit referred to as the None condition, were evaluated by a total of 60 random visitors distributed across the specified conditions. According to the results, the addition of particular types of stimuli had different impacts on the sense of presence subscale scores, namely on spatial presence, involvement, and experienced realism, but did not influence the overall presence score. Overall, the results revealed that the addition of stimuli improved enjoyment and knowledge scores and did not affect the value of the experience scores. We concluded that each stimulus has a differential impact on the studied variables, demonstrating that its usage should depend on the goal of the experience: smell should be used to privilege realism and spatial presence, while audio should be adopted when the goal is to elicit involvement.
2020
Authors
Narciso, D; Melo, M; Rodrigues, S; Cunha, JPS; Bessa, M;
Publication
2020 IEEE 20TH INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOENGINEERING (BIBE 2020)
Abstract
Training firefighters using Virtual Reality (VR) technology brings several benefits over traditional training methods, including the reduction of costs and risks. The ability of VR to cause the same level of stress as a real situation, so that firefighters can learn how to deal with stress, was investigated. An experiment was developed to study the influence that additional stimuli (heat, weight, smell, and wearing personal protective equipment, PPE) have on users' stress levels while experiencing a Virtual Environment (VE) designed to train firefighters. Participants' stress and Heart Rate Variability (HRV) were obtained from electrocardiograms recorded during the experiment. The results suggest that wearing the PPE has the largest impact on users' stress levels. The results also showed that HRV was able to reveal differences between the two phases of the experiment, which suggests that it can be used to monitor users' quantified reaction to VEs.
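The abstract does not specify which HRV metric was derived from the electrocardiograms; as a hedged illustration, the sketch below computes RMSSD, a common time-domain HRV measure, from RR intervals extracted from an ECG. The sample interval values and the choice of RMSSD are assumptions for illustration only.

```python
# Hedged sketch: RMSSD, one common time-domain HRV metric, from RR intervals (ms).
# The sample intervals and the choice of metric are illustrative assumptions;
# the paper does not state which HRV measure was used.
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between adjacent RR intervals."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                 # successive differences between beats
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical RR intervals (milliseconds) extracted from an ECG recording.
print(rmssd([812, 790, 805, 776, 830, 798]))
```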
2020
Authors
Coelho, H; Melo, M; Martins, J; Bessa, M;
Publication
Multim. Tools Appl.
Abstract
2020
Authors
Lago, AS; Dias, JP; Ferreira, HS;
Publication
Computational Science - ICCS 2020 - 20th International Conference, Amsterdam, The Netherlands, June 3-5, 2020, Proceedings, Part V
Abstract
Internet-of-Things has reshaped the way people interact with their surroundings. In a smart home, controlling the lights is as simple as speaking to a conversational assistant, since everything is now Internet-connected. But despite their pervasiveness, most of the existent IoT systems provide limited out-of-the-box customization capabilities. Several solutions try to address this issue by leveraging end-user programming features that allow users to define rules for their systems, at the cost of discarding the ease of voice interaction. However, as the number of devices increases, along with the number of household members, the complexity of managing such systems becomes a problem, including finding out why something has happened. In this work we present Jarvis, a conversational interface to manage IoT systems that attempts to address these issues by allowing users to specify time-based rules, use contextual awareness for more natural interactions, provide event management, and support causality queries. A proof-of-concept was used to carry out a quasi-experiment with non-technical participants that provides evidence that such an approach is intuitive enough to be used by common end-users.
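To make the capabilities described above concrete, the sketch below shows a minimal, hypothetical event log and time-based rule structure that could support a causality query such as "why did the light turn on?". The class names, fields, and query logic are illustrative assumptions, not Jarvis's actual design.

```python
# Hedged sketch: a minimal event log with time-based rules and a "why" query.
# All names and structures are hypothetical; they are not Jarvis's actual design.
from dataclasses import dataclass, field
from datetime import datetime, time
from typing import List, Optional

@dataclass
class Rule:
    trigger_time: time          # time-based rule: fire at this time of day
    action: str                 # e.g. "turn on living room light"

@dataclass
class Event:
    timestamp: datetime
    action: str
    caused_by: Optional[Rule] = None   # link back to the rule that fired, if any

@dataclass
class EventLog:
    events: List[Event] = field(default_factory=list)

    def record(self, event: Event) -> None:
        self.events.append(event)

    def why(self, action: str) -> str:
        """Answer a causality query: explain the most recent occurrence of an action."""
        for event in reversed(self.events):
            if event.action == action:
                if event.caused_by is not None:
                    return (f"'{action}' happened at {event.timestamp:%H:%M} because of the "
                            f"rule scheduled for {event.caused_by.trigger_time:%H:%M}.")
                return f"'{action}' happened at {event.timestamp:%H:%M} by direct user request."
        return f"No record of '{action}'."

# Usage: a time-based rule fires and the user later asks why the light turned on.
rule = Rule(trigger_time=time(19, 0), action="turn on living room light")
log = EventLog()
log.record(Event(datetime(2020, 6, 3, 19, 0), "turn on living room light", caused_by=rule))
print(log.why("turn on living room light"))
```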
2020
Authors
Matias, T; Correia, FF; Fritzsch, J; Bogner, J; Ferreira, HS; Restivo, A;
Publication
SOFTWARE ARCHITECTURE (ECSA 2020)
Abstract
A number of approaches have been proposed to identify service boundaries when decomposing a monolith to microservices. However, only a few use systematic methods and have been demonstrated with replicable empirical studies. We describe a systematic approach for refactoring systems to microservice architectures that uses static analysis to determine the system's structure and dynamic analysis to understand its actual behavior. A prototype tool (MonoBreaker) was built using this approach and used to conduct a case study on a real-world software project. The goal was to assess the feasibility and benefits of a systematic approach to decomposition that combines static and dynamic analysis. The three study participants regarded the decomposition proposed by our tool as positive and considered that it improved on approaches that rely only on static analysis.
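As a hedged illustration of combining static and dynamic analysis for decomposition, the sketch below merges static dependency edges with dynamically observed call counts into a weighted graph and clusters it to suggest candidate service boundaries. The edge data, weighting scheme, and modularity-based clustering are assumptions for illustration; they do not reproduce MonoBreaker's actual algorithm.

```python
# Hedged sketch: combine static dependencies with dynamic call counts, then
# cluster the weighted graph to suggest candidate microservice boundaries.
# The data and the modularity-based clustering are illustrative assumptions only.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Static analysis: module-to-module dependencies (hypothetical).
static_edges = [("orders", "payments"), ("orders", "catalog"),
                ("catalog", "inventory"), ("payments", "billing")]

# Dynamic analysis: observed runtime call counts between modules (hypothetical).
dynamic_calls = {("orders", "payments"): 120, ("catalog", "inventory"): 300,
                 ("orders", "catalog"): 15, ("payments", "billing"): 80}

graph = nx.Graph()
for src, dst in static_edges:
    # Weight each static edge by how often it is exercised at runtime.
    graph.add_edge(src, dst, weight=1 + dynamic_calls.get((src, dst), 0))

# Modules that are tightly coupled (statically and dynamically) end up together.
for i, community in enumerate(greedy_modularity_communities(graph, weight="weight")):
    print(f"candidate service {i}: {sorted(community)}")
```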