
Publications by HumanISE

2019

Anatomy Studio: A tool for virtual dissection through augmented 3D reconstruction

Authors
Zorzal, ER; Sousa, M; Mendes, D; dos Anjos, RK; Medeiros, D; Paulo, SF; Rodrigues, P; Mendes, JJ; Delmas, V; Uhl, JF; Mogorron, J; Jorge, JA; Lopes, DS;

Publication
COMPUTERS & GRAPHICS-UK

Abstract
3D reconstruction from anatomical slices allows anatomists to create three-dimensional depictions of real structures by tracing organs from sequences of cryosections. However, conventional user interfaces rely on single-user experiences and mouse-based input to create content for education or training purposes. In this work, we present Anatomy Studio, a collaborative Mixed Reality tool for virtual dissection that combines tablets with styli and see-through head-mounted displays to assist anatomists by easing manual tracing and the exploration of cryosection images. We contribute novel interaction techniques intended to promote spatial understanding and expedite manual segmentation. By using mid-air interactions and interactive surfaces, anatomists can easily access any cryosection and edit contours while following other users' contributions. A user study including experienced anatomists and medical professionals, conducted in real working sessions, demonstrates that Anatomy Studio is appropriate and useful for 3D reconstruction. Results indicate that Anatomy Studio encourages closely coupled collaboration and group discussion to achieve deeper insights.
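The article does not publish its reconstruction code; purely as an illustration of the idea behind slice-based reconstruction described above (stacking contours traced on successive cryosections into a 3D point set), here is a minimal Python sketch in which the contour format and the slice spacing are assumptions, not values from the paper.

```python
import numpy as np

def stack_contours(contours_2d, slice_spacing_mm=1.0):
    """Stack per-slice 2D contours into a 3D point cloud.

    contours_2d: list (one entry per cryosection, in slice order) of
                 (N_i, 2) arrays holding traced (x, y) points.
    slice_spacing_mm: assumed distance between consecutive cryosections.
    Returns an (M, 3) array of (x, y, z) points.
    """
    points = []
    for slice_index, contour in enumerate(contours_2d):
        contour = np.asarray(contour, dtype=float)
        z = np.full((contour.shape[0], 1), slice_index * slice_spacing_mm)
        points.append(np.hstack([contour, z]))
    return np.vstack(points)

# Toy example: the same circular contour traced on three consecutive slices.
theta = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
cloud = stack_contours([circle, circle, circle], slice_spacing_mm=0.5)
print(cloud.shape)  # (96, 3)
```

A surface mesh would then typically be obtained from such a point set with a standard contour-interpolation or surface-reconstruction step, which is outside the scope of this sketch.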

2019

A Survey on 3D Virtual Object Manipulation: From the Desktop to Immersive Virtual Environments

Authors
Mendes, D; Caputo, FM; Giachetti, A; Ferreira, A; Jorge, J;

Publication
COMPUTER GRAPHICS FORUM

Abstract
Interactions within virtual environments often require manipulating 3D virtual objects. To this end, researchers have endeavoured to find efficient solutions using either traditional input devices or different input modalities, such as touch and mid-air gestures. Different virtual environments and diverse input modalities present specific issues for controlling object position, orientation and scaling: traditional mouse input, for example, presents non-trivial challenges because of the need to map between 2D input and 3D actions. While interactive surfaces enable more natural approaches, they still require smart mappings. Mid-air gestures can be exploited to offer natural manipulations mimicking interactions with physical objects, but these approaches often lack precision and control. All these issues and many others have been addressed in a large body of work. In this article, we survey the state of the art in 3D object manipulation for interaction in diverse virtual environments, ranging from traditional desktop approaches to touch and mid-air interfaces. We propose a new taxonomy to better classify manipulation properties. Using our taxonomy, we discuss the techniques presented in the surveyed literature, highlighting trends, guidelines and open challenges that can be useful both to future research and to developers of 3D user interfaces.
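As a concrete instance of the 2D-to-3D mapping problem the survey mentions for mouse input, the classic arcball technique maps a 2D screen drag onto a 3D rotation. The sketch below is a generic textbook formulation given for illustration only, not code from the survey or the works it covers.

```python
import numpy as np

def to_sphere(x, y):
    """Project normalized screen coordinates (x, y in [-1, 1]) onto a unit arcball."""
    d2 = x * x + y * y
    if d2 <= 1.0:
        return np.array([x, y, np.sqrt(1.0 - d2)])
    v = np.array([x, y, 0.0])        # outside the ball: clamp to the equator
    return v / np.linalg.norm(v)

def arcball_rotation(p_start, p_end):
    """Rotation matrix taking the drag-start sphere point to the drag-end point."""
    a = to_sphere(*p_start)
    b = to_sphere(*p_end)
    axis = np.cross(a, b)
    s = np.linalg.norm(axis)          # sin(theta), since a and b are unit vectors
    c = np.clip(np.dot(a, b), -1.0, 1.0)   # cos(theta)
    if s < 1e-9:                      # negligible drag: identity rotation
        return np.eye(3)
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    # Rodrigues' rotation formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

R = arcball_rotation((0.0, 0.0), (0.3, 0.1))
print(np.allclose(R @ R.T, np.eye(3)))  # True: R is a proper rotation
```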

2019

Safe Walking In VR using Augmented Virtuality

Authors
Sousa, M; Mendes, D; Jorge, JA;

Publication
CoRR

Abstract

2019

ISVC - Digital Platform for Detection and Prevention of Computer Vision Syndrome

Authors
Vieira, F; Oliveira, E; Rodrigues, N;

Publication
2019 IEEE 7th International Conference on Serious Games and Applications for Health, SeGAH 2019

Abstract
This paper describes the research, development and evaluation of a computer-vision-based solution for the detection and prevention of Computer Vision Syndrome, a type of eye fatigue characterized by the appearance of ocular symptoms during or after prolonged periods of watching digital screens. The system targets users of computers and mobile devices, detecting eye fatigue situations, warning users of their occurrence, and suggesting corrective behaviours to prevent more serious health consequences. The implementation relies on machine learning techniques, using eye image datasets to train the eye-state detection algorithm; the OpenCV library was used for eye segmentation and the subsequent fatigue analysis. The final goal of the system is to provide users and health professionals with high-quality analysis of eye fatigue levels, in order to raise awareness of accumulated stress and promote behaviour change.
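The paper trains its own eye-state model on eye image datasets; the sketch below is only a rough stand-in showing how OpenCV's bundled Haar eye cascade could feed a simple blink-rate heuristic. The blink proxy and the fatigue thresholds here are assumptions for illustration, not the authors' method.

```python
import time
import cv2

# Haar cascade shipped with OpenCV; the paper's trained eye-state model is
# replaced here by a crude "eyes visible / not visible" proxy.
EYE_CASCADE = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_regions(frame_bgr):
    """Return bounding boxes of candidate eye regions in a BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return EYE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

cap = cv2.VideoCapture(0)                       # default webcam
blink_times, eyes_were_visible = [], True
start = time.time()
WINDOW_S, MIN_BLINKS = 60.0, 4                  # assumed thresholds

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    eyes_visible = len(eye_regions(frame)) > 0
    if eyes_were_visible and not eyes_visible:  # crude blink proxy: eyes vanish for a frame
        blink_times.append(time.time())
    eyes_were_visible = eyes_visible

    now = time.time()
    blink_times = [t for t in blink_times if now - t < WINDOW_S]
    if now - start > WINDOW_S and len(blink_times) < MIN_BLINKS:
        print("Low blink rate detected - consider taking a screen break.")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```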

2019

Top-Down Human Pose Estimation with Depth Images and Domain Adaptation

Authors
Rodrigues, N; Torres, H; Oliveira, B; Borges, J; Queiros, S; Mendes, J; Fonseca, J; Coelho, V; Brito, JH;

Publication
PROCEEDINGS OF THE 14TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5

Abstract
In this paper, a method for human pose estimation using ToF (Time-of-Flight) cameras is proposed. It follows a top-down approach built around a YOLO-based object detector. In the first stage, a network detects people in the image; in the second stage, a second network estimates the joints of each detected person, using the output of the first stage. We show that a deep learning network trained from scratch on ToF images yields better results than taking a deep neural network pretrained on RGB data and retraining it with ToF data. We also show that a top-down detector, combining a person detector and a joint detector, works better than detecting body joints over the entire image.
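The trained networks themselves are not part of the abstract; the following sketch only illustrates the two-stage top-down structure described above, with stub functions standing in for the YOLO-based person detector and the joint-estimation network.

```python
import numpy as np

def detect_people(depth_image):
    """Stage 1 stand-in: a YOLO-style person detector would return bounding
    boxes (x, y, w, h); this stub reports one box covering the whole frame."""
    h, w = depth_image.shape
    return [(0, 0, w, h)]

def estimate_joints(person_crop, num_joints=15):
    """Stage 2 stand-in: a joint-regression network would predict per-joint
    (x, y) positions inside the crop; this stub returns the crop centre."""
    h, w = person_crop.shape
    return np.tile([w / 2.0, h / 2.0], (num_joints, 1))

def top_down_pose(depth_image):
    """Top-down pipeline: detect each person first, then run the joint
    estimator only on that person's crop, mapping joints back to image space."""
    poses = []
    for (x, y, w, h) in detect_people(depth_image):
        joints = estimate_joints(depth_image[y:y + h, x:x + w])
        joints[:, 0] += x            # crop coordinates -> full-image coordinates
        joints[:, 1] += y
        poses.append(joints)
    return poses

frame = np.random.rand(240, 320).astype(np.float32)   # fake ToF depth frame
poses = top_down_pose(frame)
print(len(poses), poses[0].shape)  # 1 (15, 2)
```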

2019

Automatic left ventricular segmentation in 4D interventional ultrasound data using a patient-specific temporal synchronized shape prior

Authors
Morais, P; Queiros, S; Pereira, C; Moreira, AHJ; Baptista, MJ; Rodrigues, NF; D'hooge, J; Barbosa, D; Vilaca, JL;

Publication
MEDICAL IMAGING 2019: IMAGE PROCESSING

Abstract
The fusion of pre-operative 3D magnetic resonance (MR) images with real-time 3D ultrasound (US) images can be the most beneficial way to guide minimally invasive cardiovascular interventions without radiation. Previously, we addressed this topic with a strategy to segment the left ventricle (LV) on interventional 3D US data using a personalized shape prior obtained from a pre-operative MR scan. Nevertheless, that approach was semi-automatic, requiring a manual alignment between the US and MR image coordinate systems. In this paper, we present a novel solution to automate the aforementioned pipeline. To this end, a method to automatically detect the right ventricular (RV) insertion point on the US data was developed, which is subsequently combined with pre-operative annotations of the RV position in the MR volume, thereby allowing an automatic alignment of their coordinate systems. Moreover, a novel strategy to ensure a correct temporal synchronization of the US and MR models is applied. Finally, a full evaluation of the proposed automatic pipeline is performed. The proposed automatic framework was tested on a clinical database of 24 patients containing both MR and US scans. A performance similar to that of the previous semi-automatic version was found in terms of relevant clinical measurements. Additionally, the automatic strategy to detect the RV insertion point proved effective, showing good agreement with manually identified landmarks. Overall, the proposed automatic method showed high feasibility and a performance similar to the semi-automatic version, reinforcing its potential for routine clinical use.
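The paper's exact alignment procedure is not reproduced here; as a generic illustration of how two coordinate systems (such as US and MR frames annotated with corresponding landmarks like the RV insertion point) can be aligned, the sketch below implements a standard least-squares rigid registration (Kabsch algorithm). Treat it as an assumption-laden example, not the authors' algorithm.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding 3D landmarks (N >= 3),
    e.g. points annotated in the US and MR coordinate frames.
    Returns R (3x3 rotation) and t (3-vector) with dst ~= src @ R.T + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation and translation from 4 landmarks.
rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([5.0, -2.0, 1.0])
R_est, t_est = rigid_align(pts, moved)
print(np.allclose(R_est, R_true), np.allclose(moved, pts @ R_est.T + t_est))  # True True
```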
