2017
Authors
Monteiro, J; Morais, C; Carvalhais, M;
Publication
INTERACTIVE STORYTELLING, ICIDS 2017
Abstract
The web tools that now empower common users with content-production skills make it possible to preserve perspectives of reality through spontaneous creations, highlighting numerous opportunities for the present and future of cultural identity maintenance. Our research approaches digital storytelling during intergenerational dynamics as a stage for a participatory contribution to the maintenance of cultural identity. Through an ethnographic approach and partnerships with existing senior movements, we seek to (a) understand the storytelling processes during intergenerational dynamics, (b) develop a framework for the participative creation of narratives in the context of intergenerational cultural identity maintenance, (c) support the participatory maintenance of cultural identity through a set of workshops for intergenerational storytelling, and (d) understand the challenges and opportunities promoted by digital affinity spaces for the maintenance of cultural identity. Our contribution proposes to develop the understanding of the role that interactive narratives can have in the context of cultural identity maintenance, by developing new usage strategies to enhance cultural mediation through social and ubiquitous storytelling strategies.
2017
Authors
Ribas, L; Rangel, A; Verdicchio, M; Carvalhais, M;
Publication
JOURNAL OF SCIENCE AND TECHNOLOGY OF THE ARTS
Abstract
2017
Authors
Sousa, M; Mendes, D; Paulo, S; Matela, N; Jorge, J; Lopes, DS;
Publication
PROCEEDINGS OF THE 2017 ACM SIGCHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI'17)
Abstract
Reading room conditions, such as illumination, ambient light, human factors, and display luminance, play an important role in how radiologists analyze and interpret images. Indeed, serious diagnostic errors can occur when observing images on everyday monitors. Typically, these occur whenever professionals are ill-positioned with respect to the display or visualize images under improper light and luminance conditions. In this work, we show that virtual reality can assist radiodiagnostics by considerably diminishing or canceling out the effects of unsuitable ambient conditions. Our approach combines immersive head-mounted displays with interactive surfaces to support professional radiologists in analyzing medical images and formulating diagnostics. We evaluated our prototype with two senior medical doctors and four seasoned radiology fellows. Results indicate that our approach constitutes a viable, flexible, portable and cost-efficient option to traditional radiology reading rooms.
2017
Authors
Mendes, D; Medeiros, D; Sousa, M; Cordeiro, E; Ferreira, A; Jorge, JA;
Publication
Proceedings of the 33rd Spring Conference on Computer Graphics, SCCG 2017, Mikulov, Czech Republic, May 15-17, 2017
Abstract
In Virtual Reality (VR), the action of selecting virtual objects outside arms' reach still poses significant challenges. In this work, after classifying existing solutions with a new taxonomy and analyzing them, we propose a novel technique to perform out-of-reach selections in VR. It uses natural pointing gestures, a modifiable cone as selection volume, and an iterative progressive refinement strategy. This can be considered a VR implementation of a discrete zoom approach, although we modify users' position instead of the field-of-view. When the cone intersects several objects, users can either activate the refinement process or trigger a multiple-object selection. We compared our technique against two techniques from the literature. Our results show that, although not the fastest, it is a versatile approach due to its lack of errors and uniform completion times. © 2017 Copyright held by the owner/author(s).
2017
Authors
Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Ferreira, A; Raposo, A; Pereira, JM; Jorge, JA;
Publication
Proceedings of the Interactive Surfaces and Spaces, ISS 2017, Brighton, United Kingdom, October 17 - 20, 2017
Abstract
Context-aware pervasive applications can improve user experiences by tracking people in their surroundings. Such systems use multiple sensors to gather information regarding people and devices. However, when developing novel user experiences, researchers are left building foundation code to support multiple network-connected sensors, a major hurdle to rapidly developing and testing new ideas. We introduce Creepy Tracker, an open-source toolkit to ease prototyping with multiple commodity depth cameras. It automatically selects the best sensor to follow each person, handling occlusions and maximizing interaction space, while providing full-body tracking in a scalable and extensible manner. It also keeps track of the position and orientation of stationary interactive surfaces while offering continuously updated point-cloud user representations combining both depth and color data. Our performance evaluation shows that, although slightly less precise than marker-based optical systems, Creepy Tracker provides reliable multi-joint tracking without any wearable markers or special devices. Furthermore, implemented representative scenarios show that Creepy Tracker is well suited for deploying spatial and context-aware interactive experiences. © 2017 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
2017
Authors
Mendes, D; Sousa, M; Lorena, R; Ferreira, A; Jorge, JA;
Publication
Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology, VRST 2017, Gothenburg, Sweden, November 8-10, 2017
Abstract
Virtual Reality environments are able to offer natural interaction metaphors. However, it is difficult to accurately place virtual objects in the desired position and orientation using gestures in mid-air. Previous research concluded that the separation of degrees-of-freedom (DOF) can lead to better results, but these benefits come with an increase in time when performing complex tasks, due to the additional number of transformations required. In this work, we assess whether custom transformation axes can be used to achieve the accuracy of DOF separation without sacrificing completion time. For this, we developed a new manipulation technique, MAiOR, which offers translation and rotation separation, supporting both 3-DOF and 1-DOF manipulations, using personalized axes for the latter. Additionally, it also has direct 6-DOF manipulation for coarse transformations, and scaled object translation for increased placement precision. We compared MAiOR against an exclusively 6-DOF approach and a widget-based approach with explicit DOF separation. Results show that, contrary to previous research suggestions, single-DOF manipulations are not appealing to users. Instead, users favored 3-DOF manipulations above all, while keeping translation and rotation independent. © 2017 Copyright held by the owner/author(s).