
About

Daniel Mendes is an Assistant Professor at the Faculty of Engineering of the University of Porto, Portugal, and a researcher at INESC TEC. He received his Ph.D. (2018), MSc (2011), and BSc (2008) degrees in Computer Science and Engineering from Instituto Superior Técnico, University of Lisbon. His main interest areas are Human-Computer Interaction, 3D User Interfaces, Virtual and Augmented Reality, Multimodal Interfaces, and Touch/Gesture-based Interactions. He has been involved in several national research projects funded by the Portuguese Foundation for Science and Technology (FCT). He co-authored over 60 papers published in peer-reviewed scientific journals, conferences, and meetings. He is a member of ACM, IEEE, Eurographics, and the Portuguese Group for Computer Graphics.


Details

  • Name

    Daniel Mendes
  • Role

    Senior Researcher
  • Since

    1 April 2020
Publications

2024

Incidental graphical perception: How marks and display time influence accuracy

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
INFORMATION VISUALIZATION

Abstract
Incidental visualizations are meant to be perceived at-a-glance, on-the-go, and during short exposure times, but are not seen on demand. Instead, they appear in people's fields of view during an ongoing primary task. They differ from glanceable visualizations because the information is not received on demand, and they differ from ambient visualizations because the information is not continuously embedded in the environment. However, current graphical perception guidelines do not consider situations where information is presented at specific moments during brief exposure times without being the user's primary focus. Therefore, we conducted a crowdsourced user study with 99 participants to understand how accurate people's incidental graphical perception is. Each participant was tested on one of the three conditions: position of dots, length of lines, and angle of lines. We varied the number of elements for each combination and the display time. During the study, participants were asked to perform reproduction tasks, where they had to recreate a previously shown stimulus in each. Our results indicate that incidental graphical perception can be accurate when using position, length, and angles. Furthermore, we argue that incidental visualizations should be designed for low exposure times (between 300 and 1000 ms).

2024

Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems

Authors
Assaf, R; Mendes, D; Rodrigues, R;

Publication
COMPUTER GRAPHICS FORUM

Abstract
Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper delves into the intricate realm of group awareness, focusing specifically on workspace awareness and the innovative visual cues designed to enhance user comprehension. The research begins by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. Then, we describe and introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research endeavors. Lastly, we present the key findings and shine a spotlight on promising yet unexplored topics. This work not only serves as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications but also extends a welcoming hand to newcomers in this dynamic field.

2023

Impact of incidental visualizations on primary tasks

Authors
Moreira, J; Mendes, D; Gonçalves, D;

Publication
INFORMATION VISUALIZATION

Abstract
Incidental visualizations are meant to be seen at-a-glance, on-the-go, and during short exposure times. They will always appear side-by-side with an ongoing primary task while providing ancillary information relevant to those tasks. They differ from glanceable visualizations because looking at them is never the user's major focus, and they differ from ambient visualizations because they are not embedded in the environment, but appear when needed. However, unlike glanceable and ambient visualizations, which have been studied in the past, incidental visualizations have yet to be explored in depth. In particular, their impact on users' performance of primary tasks is still unclear. Therefore, we conducted an empirical online between-subjects user study where participants had to play a maze game as their primary task. Their goal was to complete several mazes as quickly as possible to maximize their score. This game was chosen to be a cognitively demanding task, bound to be significantly affected if incidental visualizations have a meaningful impact. At the same time, they had to answer a question that appeared while playing, regarding the path followed so far. Then, for half the participants, an incidental visualization was shown for a short period while playing, containing information useful for answering the question. We analyzed various metrics to understand how the maze performance was impacted by the incidental visualization. Additionally, we aimed to understand if working memory would influence how the maze was played and how visualizations were perceived. We concluded that incidental visualizations of the type used in this study do not disrupt people while they played the maze as their primary task. Furthermore, our results strongly suggested that the information conveyed by the visualization improved their performance in answering the question. Finally, working memory had no impact on the participants' results.

2023

MAGIC: Manipulating Avatars and Gestures to Improve Remote Collaboration

Authors
Fidalgo, CG; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Singh, K; Jorge, J;

Publication
2023 IEEE CONFERENCE VIRTUAL REALITY AND 3D USER INTERFACES, VR

Abstract
Remote collaborative work has become pervasive in many settings, ranging from engineering to medical professions. Users are immersed in virtual environments and communicate through life-sized avatars that enable face-to-face collaboration. Within this context, users often collaboratively view and interact with virtual 3D models, for example to assist in the design of new devices such as customized prosthetics, vehicles or buildings. Discussing such shared 3D content face-to-face, however, has a variety of challenges such as ambiguities, occlusions, and different viewpoints that all decrease mutual awareness, which in turn leads to decreased task performance and increased errors. To address this challenge, we introduce MAGIC, a novel approach for understanding pointing gestures in a face-to-face shared 3D space, improving mutual understanding and awareness. Our approach distorts the remote user's gestures to correctly reflect them in the local user's reference space when face-to-face. To measure what two users perceive in common when using pointing gestures in a shared 3D space, we introduce a novel metric called pointing agreement. Results from a user study suggest that MAGIC significantly improves pointing agreement in face-to-face collaboration settings, improving co-presence and awareness of interactions performed in the shared space. We believe that MAGIC improves remote collaboration by enabling simpler communication mechanisms and better mutual awareness.

2023

CIDER: Collaborative Interior Design in Extended Reality

Authors
Pintani, D; Caputo, A; Mendes, D; Giachetti, A;

Publication
Proceedings of the 15th Biannual Conference of the Italian SIGCHI Chapter, CHItaly 2023, Torino, Italy, September 20-22, 2023

Abstract
Despite significant efforts dedicated to exploring the potential applications of collaborative mixed reality, the focus of the existing works is mostly related to the creation of shared virtual/mixed environments resolving concurrent manipulation issues rather than supporting an effective collaboration strategy for the design procedure. For this reason, we present CIDER, a system for the collaborative editing of 3D augmented scenes allowing two or more users to manipulate the virtual scene elements independently and without unexpected changes. CIDER is based on the use of "layers" encapsulating the state of the environment, with private layers that can be edited independently and a global one collaboratively updated with "commit" operations. Using this system, implemented for the HoloLens 2 headsets and supporting multiple users, we performed a user test on a realistic interior design task, evaluating the general usability and comparing two different approaches for the management of the atomic commit: forced (single-phase) and voting (requiring consensus), analyzing the effects of this choice on the collaborative behavior.

Supervised Theses

2023

Object Manipulation in Desk VR

Author
Diogo Henrique Pinto de Almeida

Institution
UM

2023

Material Changing Haptics for VR

Author
Henrique Melo Ribeiro

Institution
UM

2023

Accountability in Immersive Content Creation Platforms

Author
Luís Guilherme da Costa Castro Neves

Institution
UM

2023

Immersive and collaborative web-based 3D design review

Author
Rodrigo Assaf

Institution
UM

2023

Exploring Pseudo-Haptics for object compliance in VR

Author
Carlos Daniel Rodrigues Lousada

Institution
UM