2020
Authors
Mendes, D; Reis, S; Guerreiro, J; Nicolau, H;
Publication
Proc. ACM Hum. Comput. Interact.
Abstract
Interactive tabletops offer unique collaborative features, particularly their size, geometry, orientation and, more importantly, the ability to support multi-user interaction. Although previous efforts were made to make interactive tabletops accessible to blind people, the potential to use them in collaborative activities remains unexplored. In this paper, we present the design and implementation of a multi-user auditory display for interactive tabletops, supporting three feedback modes that vary in how much information about the partners' actions is conveyed. We conducted a user study with ten blind people to assess the effect of feedback modes on workspace awareness and task performance. Furthermore, we analyze the type of awareness information exchanged and the emergent collaboration strategies. Finally, we provide implications for the design of future tabletop collaborative tools for blind users.
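To make concrete how feedback modes of increasing verbosity could gate workspace-awareness information, here is a minimal sketch. The mode names, message wording, and action structure are illustrative assumptions only, not the paper's actual design.

```python
# Hypothetical sketch: gating how much of a partner's action is announced
# under three auditory feedback modes of increasing verbosity.
from dataclasses import dataclass
from enum import Enum

class FeedbackMode(Enum):
    MINIMAL = 1   # only signal that the partner acted
    LOCATION = 2  # also say where on the tabletop the action happened
    FULL = 3      # also say which object was manipulated and how

@dataclass
class PartnerAction:
    partner: str
    verb: str        # e.g. "moved"
    target: str      # e.g. "blue token"
    region: str      # e.g. "top-left quadrant"

def awareness_message(action: PartnerAction, mode: FeedbackMode) -> str:
    """Build the auditory message announcing a partner's action."""
    if mode is FeedbackMode.MINIMAL:
        return f"{action.partner} is active."
    if mode is FeedbackMode.LOCATION:
        return f"{action.partner} acted in the {action.region}."
    return (f"{action.partner} {action.verb} the {action.target} "
            f"in the {action.region}.")

if __name__ == "__main__":
    act = PartnerAction("Ana", "moved", "blue token", "top-left quadrant")
    for mode in FeedbackMode:
        print(mode.name, "->", awareness_message(act, mode))
```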
2020
Authors
Pereira, T; Moreira, J; Mendes, D; Goncalves, D;
Publication
2020 IEEE VISUALIZATION CONFERENCE - SHORT PAPERS (VIS 2020)
Abstract
An approach to analyzing streaming big data as it arrives, while maintaining the context of past events, is to employ contiguous visualizations with increasingly aggressive degrees of aggregation. This allows the most recent data to be displayed in detail, while older data is shown in an aggregated form according to how long ago it was received. However, the transitions applied between visualizations with different aggregations must not compromise the understandability of the data flow. In particular, new data should be perceived within the context established by older data, and the visualizations should not be perceived as independent or unconnected. In this paper, we present the first study on transitions between two contiguous visualizations, focusing on time-series data. We developed several animated transitions between a scatter plot, where all data points are represented individually as they arrive, and other visualizations where data is displayed in an aggregated form. We then conducted a user evaluation to identify, for each visualization pair, the most appealing and effective transitions, allowing the best comprehension of the displayed data.
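As an illustration of the underlying idea of time-based aggregation, the sketch below splits a stream into a detailed recent window and coarser bins for older data. The window size, bin width, and mean-based aggregation are assumptions for illustration, not the paper's pipeline.

```python
# Illustrative sketch: recent samples are kept point by point, while older
# samples are averaged into fixed-width time bins for aggregated display.
from statistics import mean

def split_for_display(samples, now, detail_window=60, bin_width=300):
    """samples: list of (timestamp, value); returns (recent, aggregated).

    Points newer than `detail_window` seconds are kept individually;
    older points are averaged per `bin_width`-second bin.
    """
    recent = [(t, v) for t, v in samples if now - t <= detail_window]
    old = [(t, v) for t, v in samples if now - t > detail_window]

    bins = {}
    for t, v in old:
        bins.setdefault(int(t // bin_width) * bin_width, []).append(v)
    aggregated = [(start, mean(vs)) for start, vs in sorted(bins.items())]
    return recent, aggregated

if __name__ == "__main__":
    stream = [(t, (t % 7) * 0.5) for t in range(0, 1000, 10)]
    recent, aggregated = split_for_display(stream, now=1000)
    print(len(recent), "detailed points;", len(aggregated), "aggregated bins")
```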
2020
Authors
Moreira, J; Mendes, D; Goncalves, D;
Publication
PROCEEDINGS OF THE WORKING CONFERENCE ON ADVANCED VISUAL INTERFACES AVI 2020
Abstract
In InfoVis design, visualizations make use of pre-attentive features to highlight visual artifacts and guide users' perception toward relevant information during primitive visual tasks. These are supported by visual marks such as dots, lines, and areas. However, research has assumed that our pre-attentive processing only allows us to detect specific features in charts. We argue that a visualization can be perceived completely pre-attentively and still convey relevant information. In this work, combining cognitive perception and psychophysics, we conducted a user study with six primitive visual tasks to verify whether they could be performed pre-attentively. The tasks were to find: horizontal and vertical positions, length and slope of lines, size of areas, and color luminance intensity. Users were presented with very simple visualizations, with one encoded value at a time, allowing us to assess accuracy and response time. Our results showed that horizontal position identification is the most accurate and fastest task, while color luminance intensity identification performs worst. We believe our study is a first step toward a new field we call Incidental Visualizations, where visualizations are meant to be seen at a glance and with little effort.
2021
Authors
Cassola, F; Pinto, M; Mendes, D; Morgado, L; Coelho, A; Paredes, H;
Publication
2021 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW 2021)
Abstract
Training in VR can reduce risks and costs while allowing frequent and diversified experiential learning activities. We present a novel immersive VR authoring tool for experiential learning courses with industrial machinery. A trainer can create a course from scratch, defining all of its components (structure, models, tools, and settings). The actions that trainees should perform can be specified by demonstration. After trainees complete the course, their actions are matched against the trainer's.
2021
Authors
Roberto Zorzal, E; Sousa, M; Mendes, D; Figueiredo Paulo, S; Rodrigues, P; Jorge, J; Lopes, DS;
Publication
Human–Computer Interaction Series - Digital Anatomy
Abstract
2021
Authors
Cassola, F; Pinto, M; Mendes, D; Morgado, L; Coelho, A; Paredes, H;
Publication
2021 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW 2021)
Abstract
The use of VR in industrial training helps reduce costs and risks, supporting more frequent and diversified experiential learning activities, an approach with proven results. In this work, we present an innovative immersive authoring tool for experiential learning in VR-based training. It enables a trainer to structure an entire VR training course within an immersive environment, defining its sub-components, models, tools, and settings, as well as specifying by demonstration the actions to be performed by trainees. Trainees taking the immersive training course have their actions recorded and matched against those specified by the trainer.
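To illustrate the general idea of matching a trainee's recorded actions against a demonstrated sequence, the sketch below scores the overlap between two action sequences using a standard-library sequence matcher. The action labels and the similarity measure are assumptions for illustration; the tool's actual matching logic is not described in the abstract.

```python
# Hypothetical sketch: score how closely a trainee's recorded action sequence
# follows the trainer's demonstrated sequence.
from difflib import SequenceMatcher

def match_score(demonstrated, performed):
    """Return a similarity ratio in [0, 1] between the two action sequences."""
    return SequenceMatcher(None, demonstrated, performed).ratio()

if __name__ == "__main__":
    trainer = ["pick_wrench", "loosen_bolt", "open_panel", "replace_fuse"]
    trainee = ["pick_wrench", "open_panel", "replace_fuse"]  # skipped one step
    print(f"match: {match_score(trainer, trainee):.0%}")
```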