Details

  • Name

    Daniel Mendes
  • Position

    Senior Researcher
  • Since

    01 April 2020
Publications

2024

Incidental graphical perception: How marks and display time influence accuracy

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
INFORMATION VISUALIZATION

Abstract
Incidental visualizations are meant to be perceived at-a-glance, on-the-go, and during short exposure times, but are not seen on demand. Instead, they appear in people's fields of view during an ongoing primary task. They differ from glanceable visualizations because the information is not received on demand, and they differ from ambient visualizations because the information is not continuously embedded in the environment. However, current graphical perception guidelines do not consider situations where information is presented at specific moments, during brief exposure times, without being the user's primary focus. Therefore, we conducted a crowdsourced user study with 99 participants to understand how accurate people's incidental graphical perception is. Each participant was tested on one of three conditions: position of dots, length of lines, and angle of lines. We varied the number of elements for each combination, as well as the display time. During the study, participants performed reproduction tasks, in each of which they had to recreate a previously shown stimulus. Our results indicate that incidental graphical perception can be accurate when using position, length, and angles. Furthermore, we argue that incidental visualizations should be designed for low exposure times (between 300 and 1000 ms).

2023

Impact of incidental visualizations on primary tasks

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
INFORMATION VISUALIZATION

Abstract
Incidental visualizations are meant to be seen at-a-glance, on-the-go, and during short exposure times. They always appear side-by-side with an ongoing primary task, providing ancillary information relevant to that task. They differ from glanceable visualizations because looking at them is never the major focus, and they differ from ambient visualizations because they are not embedded in the environment but appear when needed. However, unlike glanceable and ambient visualizations, which have been studied in the past, incidental visualizations have yet to be explored in depth. In particular, it is still not clear what their impact is on users' performance of primary tasks. Therefore, we conducted an empirical online between-subjects user study in which participants had to play a maze game as their primary task. Their goal was to complete several mazes as quickly as possible to maximize their score. This game was chosen to be a cognitively demanding task, bound to be significantly affected if incidental visualizations have a meaningful impact. At the same time, participants had to answer a question, appearing while they played, about the path followed so far. For half of the participants, an incidental visualization containing information useful for answering the question was shown for a short period during play. We analyzed various metrics to understand how maze performance was affected by the incidental visualization. Additionally, we aimed to understand whether working memory would influence how the maze was played and how the visualizations were perceived. We concluded that incidental visualizations of the type used in this study do not disrupt people while playing the maze as their primary task. Furthermore, our results strongly suggest that the information conveyed by the visualization improved performance in answering the question. Finally, working memory had no impact on the participants' results.

2023

MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

Authors
Fidalgo, CG; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Singh, K; Jorge, J;

Publication
2023 IEEE CONFERENCE VIRTUAL REALITY AND 3D USER INTERFACES, VR

Abstract
Remote collaborative work has become pervasive in many settings, ranging from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example, to assist in the design of new devices such as customized prosthetics, vehicles, or buildings. Discussing such shared 3D content face-to-face, however, has a variety of challenges, such as ambiguities, occlusions, and different viewpoints, that all decrease mutual awareness, which in turn leads to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. To measure what two users perceive in common when using pointing gestures in a shared 3D space, we introduce a novel metric called pointing agreement. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness.
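
As a rough illustration of the kind of gesture retargeting the abstract describes (not the paper's actual algorithm), the Python sketch below mirrors a remote user's hand across the plane separating the two face-to-face workspaces, but re-aims the pointing ray at the original shared target so both users see the same object indicated. The function names, coordinate setup, and plane choice are all assumptions for the example.

```python
import numpy as np

def mirror_across_plane(point, plane_point, plane_normal):
    # Reflect a 3D point across the plane separating the two
    # face-to-face users' workspaces.
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

def retarget_pointing(remote_hand, remote_target, plane_point, plane_normal):
    # Place the avatar's hand at the mirrored position, but aim the ray
    # at the ORIGINAL target, so both users perceive the gesture as
    # indicating the same shared object.
    avatar_hand = mirror_across_plane(remote_hand, plane_point, plane_normal)
    ray = remote_target - avatar_hand
    return avatar_hand, ray / np.linalg.norm(ray)

# Example: the two users face each other across the y = 0 plane.
hand = np.array([0.2, 0.8, 1.4])    # remote user's hand in shared coordinates
target = np.array([0.0, 0.0, 1.0])  # object under discussion
print(retarget_pointing(hand, target, np.zeros(3), np.array([0.0, 1.0, 0.0])))
```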

2023

CIDER: Collaborative Interior Design in Extended Reality

Authors
Pintani, D; Caputo, A; Mendes, D; Giachetti, A;

Publication
Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, CHItaly 2023, Torino, Italy, September 20-22, 2023

Abstract
Despite significant efforts dedicated to exploring the potential applications of collaborative mixed reality, existing work focuses mostly on creating shared virtual/mixed environments and resolving concurrent manipulation issues, rather than on supporting an effective collaboration strategy for the design procedure. For this reason, we present CIDER, a system for the collaborative editing of 3D augmented scenes that allows two or more users to manipulate the virtual scene elements independently and without unexpected changes. CIDER is based on the use of "layers" encapsulating the state of the environment, with private layers that can be edited independently and a global one collaboratively updated with "commit" operations. Using this system, implemented for HoloLens 2 headsets and supporting multiple users, we performed a user test on a realistic interior design task, evaluating the general usability and comparing two different approaches for the management of the atomic commit, forced (single-phase) and voting (requiring consensus), and analyzing the effects of this choice on collaborative behavior.
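
The layer model lends itself to a compact sketch. The Python below is a minimal, assumed-API illustration of private layers merged into a global layer via forced or voting commits; it is not CIDER's HoloLens 2 implementation, and all names and merge semantics are hypothetical.

```python
class LayeredScene:
    """Minimal layer-based editing model: each user edits a private
    layer; a commit merges it into the shared global layer."""

    def __init__(self):
        self.global_layer = {}  # object id -> state visible to everyone
        self.private = {}       # user -> {object id -> locally edited state}

    def edit(self, user, obj_id, state):
        # Private edits stay invisible to other users until committed.
        self.private.setdefault(user, {})[obj_id] = state

    def view(self, user):
        # Each user sees the global scene with their own edits on top.
        return {**self.global_layer, **self.private.get(user, {})}

    def commit(self, user, votes=None):
        # Forced commit (votes is None) applies immediately; a voting
        # commit requires consensus from the other users first.
        if votes is not None and not all(votes.values()):
            return False
        self.global_layer.update(self.private.pop(user, {}))
        return True

# Example: Alice moves a chair privately, then commits by unanimous vote.
scene = LayeredScene()
scene.edit("alice", "chair", {"pos": (1.0, 0.0, 2.0)})
assert "chair" not in scene.view("bob")
scene.commit("alice", votes={"bob": True})
assert "chair" in scene.view("bob")
```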

2023

Shape-A-Getti: A haptic device for getting multiple shapes using a simple actuator

Authors
Barbosa, F; Mendes, D; Rodrigues, R;

Publication
COMPUTERS & GRAPHICS-UK

Abstract
Haptic feedback in Virtual Reality is commonly provided through wearable or grounded devices adapted to specific scenarios and situations. Shape-changing devices allow for the physical representation of different virtual objects, but they are still a minority, complex, and usually have long transformation times. We present Shape-a-getti, a novel ungrounded, non-wearable, graspable haptic device that can quickly change between different radially symmetrical shapes. It uses a single actuator to rotate several identical poles distributed along a radius to render the different shapes. The format of the poles defines the possible shapes; in our prototype, we used a format that can render concave, straight, and convex shapes with different radii. We conducted a user evaluation with 21 participants, asking them to recognize virtual objects by grasping the Shape-a-getti. Despite having difficulties distinguishing between some objects with very similar shapes, participants could successfully identify virtual objects with different shapes rendered by our device.
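
Since a single actuator drives all the mechanically linked poles, shape selection reduces to commanding one rotation angle. The sketch below illustrates that idea with a hypothetical calibration table; the angle values and function names are assumptions for the example, not taken from the paper.

```python
# Hypothetical calibration: one actuator angle exposes the pole
# cross-section that produces each rendered surface shape.
SHAPE_TO_ANGLE_DEG = {
    "concave": 0.0,
    "straight": 60.0,
    "convex": 120.0,
}

def render_shape(shape: str, set_actuator_angle) -> None:
    """Rotate every linked pole to the calibrated angle for `shape`."""
    set_actuator_angle(SHAPE_TO_ANGLE_DEG[shape])

# Example with a stand-in for the servo driver:
render_shape("convex", lambda angle: print(f"servo -> {angle} deg"))
```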

Supervised Theses

2023

Material Changing Haptics for VR

Author
Henrique Melo Ribeiro

Institution
UM

2023

Accountability in Immersive Content Creation Platforms

Author
Luís Guilherme da Costa Castro Neves

Institution
UM

2023

Immersive and collaborative web-based 3D design review

Author
Rodrigo Assaf

Institution
UM

2023

Exploring Pseudo-Haptics for object compliance in VR

Author
Carlos Daniel Rodrigues Lousada

Institution
UM

2023

Improving Absolute Inputs for Interactive Surfaces in VR

Author
Diogo Guimarães do Rosário

Institution
UM