Details
Name
Daniel Mendes
Role
Senior Researcher
Since
01 April 2020
Nationality
Portugal
Centre
Human-Centered Computing and Information Science
Contacts
+351 222 094 000
daniel.mendes@inesctec.pt
2025
Authors
Pintani, D; Caputo, A; Mendes, D; Giachetti, A;
Publication
BEHAVIOUR & INFORMATION TECHNOLOGY
Abstract
We present CIDER, a novel framework for the collaborative editing of 3D augmented scenes. The framework allows multiple users to manipulate the virtual elements added to the real environment independently and without unexpected changes, comparing the different editing proposals and finalising a collaborative result. CIDER relies on 'layers' encapsulating the state of the environment: private layers can be edited independently by different users, and a global one can be collaboratively updated with 'commit' operations. In this paper, we describe in detail the system architecture and the implementation as a prototype for the HoloLens 2 headsets, as well as the motivations behind the interaction design. The system has been validated with a user study on a realistic interior design task. The study not only evaluated general usability but also compared two approaches to managing the atomic commit: forced (single-phase) and voting (requiring consensus), analysing the effects of this choice on collaborative behaviour. Based on users' comments, we improved the interface and further tested the effectiveness of these changes.
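As a rough illustration of the commit model described in this abstract, the following Python sketch shows how per-user private layers and the two atomic-commit policies (forced single-phase vs. voting with consensus) might interact. All names here (Layer, Scene, edit, commit) are hypothetical, inferred from the abstract alone; the actual CIDER prototype targets HoloLens 2 headsets and is not reproduced here.

    # Hypothetical sketch of a layer/commit model; illustrative only, not CIDER's API.
    from dataclasses import dataclass, field
    from typing import Dict, Optional, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class Layer:
        # One entry per virtual object: object id -> position in the real environment.
        objects: Dict[str, Vec3] = field(default_factory=dict)

        def copy(self) -> "Layer":
            return Layer(dict(self.objects))

    class Scene:
        def __init__(self, users, policy="voting"):
            self.global_layer = Layer()                 # shared, collaboratively updated state
            self.private = {u: Layer() for u in users}  # independent per-user editing layers
            self.policy = policy                        # "forced" (single-phase) or "voting" (consensus)

        def edit(self, user: str, obj: str, pos: Vec3) -> None:
            # Private edits never cause unexpected changes for other users.
            self.private[user].objects[obj] = pos

        def commit(self, proposer: str, votes: Optional[Dict[str, bool]] = None) -> bool:
            # Promote the proposer's private layer to the shared global layer.
            if self.policy == "voting":
                others = [u for u in self.private if u != proposer]
                if votes is None or not all(votes.get(u, False) for u in others):
                    return False                        # no consensus: abort, global layer unchanged
            self.global_layer = self.private[proposer].copy()
            for u in self.private:                      # rebase everyone onto the new global state
                self.private[u] = self.global_layer.copy()
            return True

Under the voting policy a proposal only reaches the global layer once every other user approves, whereas the forced policy applies it unconditionally; for example, with users "ana" and "rui", scene.commit("ana", votes={"rui": False}) returns False and leaves the global layer untouched, while votes={"rui": True} promotes ana's layer and rebases both private layers onto it.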
2024
Authors
Moreira, J; Mendes, D; Gonçalves, D;
Publication
INFORMATION VISUALIZATION
Abstract
Incidental visualizations are meant to be perceived at-a-glance, on-the-go, and during short exposure times, but they are not seen on demand. Instead, they appear in people's fields of view during an ongoing primary task. They differ from glanceable visualizations because the information is not received on demand, and from ambient visualizations because the information is not continuously embedded in the environment. However, current graphical perception guidelines do not consider situations where information is presented at specific moments, during brief exposure times, without being the user's primary focus. We therefore conducted a crowdsourced user study with 99 participants to understand how accurate people's incidental graphical perception is. Each participant was tested on one of three conditions: position of dots, length of lines, or angle of lines. For each condition, we varied the number of elements and the display time. Participants performed reproduction tasks in which they recreated a previously shown stimulus. Our results indicate that incidental graphical perception can be accurate when using position, length, and angle. Furthermore, we argue that incidental visualizations should be designed for low exposure times (between 300 and 1000 ms).
2024
Authors
Assaf, R; Mendes, D; Rodrigues, R;
Publication
COMPUTER GRAPHICS FORUM
Abstract
Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper examines group awareness, focusing specifically on workspace awareness and the visual cues designed to enhance user comprehension. We begin by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. We then introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research. Lastly, we present the key findings and highlight promising yet unexplored topics. This work serves both as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications and as an introduction for newcomers to this dynamic field.
2024
Authors
Moreira, J; Mendes, D; Gonçalves, D;
Publication
VISUAL INFORMATICS
Abstract
Incidental visualizations convey information to a person during an ongoing primary task, without the person consciously searching for or requesting that information. They differ from glanceable visualizations by not being people's main focus, and from ambient visualizations by not being embedded in the environment. Instead, they are presented as secondary information that can be observed without a person losing focus on their current task. However, despite extensive research on glanceable and ambient visualizations, incidental visualizations remain a novel topic. To bridge this gap, we conducted an empirical user study presenting participants with an incidental visualization while they performed a primary task. We aimed to understand how contributory complexity factors (task complexity, output complexity, and pressure) affected primary task performance and incidental visualization accuracy. Our findings showed that incidental visualizations effectively conveyed information without disrupting the primary task, but working memory limitations should be considered. Additionally, output complexity and pressure significantly influenced primary task results. In conclusion, our study provides insights into the perception accuracy and performance impact of incidental visualizations in relation to complexity factors.
2024
Authors
Fonseca, F; Sousa, M; Mendes, D; Ferreira, A; Jorge, JA;
Publication
CoRR
Supervised theses
2023
Author
Carlos Daniel Rodrigues Lousada
Institution
UM
2023
Author
Diogo Guimarães do Rosário
Institution
UM
2023
Author
Diogo Henrique Pinto de Almeida
Institution
UM
2023
Author
Henrique Melo Ribeiro
Institution
UM
2023
Author
Luís Guilherme da Costa Castro Neves
Institution
UM