2019
Authors
Pires, G; Mendes, D; Goncalves, D;
Publication
PROCEEDINGS OF THE 2019 INTERNATIONAL CONFERENCE ON GRAPHICS AND INTERACTION (ICGI 2019)
Abstract
The rapid increase of connected devices causes more and more data to be generated and, in some cases, this data needs to be analyzed as it is received. As such, the challenge of presenting streaming data in such a way that changes in the regular flow can be detected needs to be tackled, so that timely and informed decisions can be made. This requires users to be able to perceive the information being received in the moment in detail, while maintaining the context. In this paper, we propose VisMillion, a visualization technique for large amounts of streaming data, following the concept of graceful degradation. It comprises several different modules positioned side by side, corresponding to different contiguous time spans, from the last few seconds to a historical view of all data received in the stream so far. Data flows through each one from right to left and, the more recent the data, the greater the detail in which it is presented. To this end, each module uses a different technique to aggregate and process information, with special care to ensure visual continuity between modules to facilitate the analysis. VisMillion was validated through a usability evaluation with 21 participants, as well as performance tests. Results show that it fulfills its objective, successfully aiding users to detect changes, patterns and anomalies in the information being received.
2015
Authors
Sousa, M; Mendes, D; Ferreira, A; Pereira, JM; Jorge, J;
Publication
HUMAN-COMPUTER INTERACTION - INTERACT 2015, PT III
Abstract
Virtual meetings have become increasingly common with modern video-conference and collaborative software. While they allow obvious savings in time and resources, current technologies add unproductive layers of protocol to the flow of communication between participants, rendering the interactions far from seamless. In this work we introduce Remote Proxemics, an extension of proxemics aimed at bringing the syntax of co-located proximal interactions to virtual meetings. We propose Eery Space, a shared virtual locus that results from merging multiple remote areas, where meeting participants are located side-by-side as if they shared the same physical location. Eery Space promotes collaborative content creation and seamless mediation of communication channels based on virtual proximity. Results from user evaluation suggest that our approach is sufficient to initiate proximal exchanges regardless of geolocation, while promoting smooth interactions between local and remote people alike.
2014
Authors
Henriques, D; Trancoso, I; Mendes, D; Ferreira, A;
Publication
INTERSPEECH 2014, 15th Annual Conference of the International Speech Communication Association, Singapore, September 14-18, 2014
Abstract
Query specification for 3D object retrieval still relies on traditional interaction paradigms. The goal of our study was to identify the most natural methods to describe 3D objects, focusing on verbal and gestural expressions. Our case study uses LEGO® blocks. We started by collecting a corpus involving ten pairs of subjects, in which one participant requests blocks for building a model from another participant. This small corpus suggests that users prefer to describe 3D objects verbally, rarely resorting to gestures, and using them only as a complement. The paper describes this corpus, addressing the challenges that such verbal descriptions create for a speech understanding system, namely the long, complex verbal descriptions, involving dimensions, shapes, colors, metaphors, and diminutives. The latter connote small size, endearment or insignificance, and are common only in informal language. In this corpus, they occurred in one out of seven requests. This experiment was the first step in the development of a prototype for searching LEGO® blocks combining speech and stereoscopic 3D. Although the verbal interaction in the first version is limited to relatively simple queries, its combination with immersive visualization allows the user to explore query results in a dataset with virtual blocks.
2017
Authors
Mendes, D; Medeiros, D; Sousa, M; Cordeiro, E; Ferreira, A; Jorge, JA;
Publication
Proceedings of the 33rd Spring Conference on Computer Graphics, SCCG 2017, Mikulov, Czech Republic, May 15-17, 2017
Abstract
In Virtual Reality (VR), the action of selecting virtual objects outside arm's reach still poses significant challenges. In this work, after classifying existing solutions with a new taxonomy and analyzing them, we propose a novel technique to perform out-of-reach selections in VR. It uses natural pointing gestures, a modifiable cone as selection volume, and an iterative progressive refinement strategy. This can be considered a VR implementation of a discrete zoom approach, although we modify the users' position instead of the field-of-view. When the cone intersects several objects, users can either activate the refinement process or trigger a multiple object selection. We compared our technique against two techniques from the literature. Our results show that, while not the fastest, it is a versatile approach due to its lack of errors and uniform completion times. © 2017 Copyright held by the owner/author(s).
2017
Authors
Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Ferreira, A; Raposo, A; Pereira, JM; Jorge, JA;
Publication
Proceedings of the Interactive Surfaces and Spaces, ISS 2017, Brighton, United Kingdom, October 17 - 20, 2017
Abstract
Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information regarding people and devices. However, when developing novel user experiences, researchers are left building foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner. It also keeps the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations combining both depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices. Furthermore, implemented representative scenarios show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences. © 2017 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
2019
Authors
dos Anjos, RK; Sousa, M; Mendes, D; Medeiros, D; Billinghurst, M; Anslow, C; Jorge, J;
Publication
25TH ACM SYMPOSIUM ON VIRTUAL REALITY SOFTWARE AND TECHNOLOGY (VRST 2019)
Abstract
Modern volumetric projection-based telepresence approaches are capable of providing realistic full-size virtual representations of remote people. Interacting with full-size people may not be desirable, however, due to the spatial constraints of the physical environment, application context, or display technology. Yet the miniaturization of remote people is known to create an eye-gaze matching problem. Eye contact is essential to communication, as it allows people to use natural nonverbal cues and improves the sense of "being there". In this paper we discuss the design space for interacting with volumetric representations of people and present an approach for dynamically manipulating the scale, orientation, and position of holograms that guarantees eye contact. We created a working augmented reality-based prototype and validated it with 14 participants.