Publications

Publications by João Paulo Cunha

2020

Virtual reality in training: an experimental study with firefighters

Authors
Narciso, D; Melo, M; Raposo, JV; Cunha, J; Bessa, M;

Publication
MULTIMEDIA TOOLS AND APPLICATIONS

Abstract
Training with Virtual Reality (VR) can bring several benefits, such as the reduction of costs and risks. We present an experimental study that aims to evaluate the effectiveness of a Virtual Environment (VE) for training firefighters, using an innovative approach based on a Real Environment (RE) exercise. To measure the VE's effectiveness, we used a Presence Questionnaire (PQ) and measures of participants' cybersickness, stress, and fatigue. Results from the PQ showed that participants rated the VE with high spatial presence and moderate realness and immersion. Signs of stress, analyzed from participants' Heart-Rate Variability, were found in the RE but not in the VE. Among the remaining variables, there was only an indicative difference for fatigue in the RE. The results therefore suggest that although our training VE succeeded in giving participants spatial presence and in not causing cybersickness, the realness and immersion it provided were not enough to provoke a response similar to the RE.
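The stress marker above rests on heart-rate variability (HRV). As an illustrative sketch only (the abstract does not specify which HRV metric the authors used), the following computes RMSSD, a standard time-domain HRV measure that tends to drop under stress, on simulated RR-interval series:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between RR intervals (ms),
    a common time-domain HRV measure; lower values can indicate stress."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Simulated RR intervals (ms): a "rest" series with high beat-to-beat
# variability and a "stress" series with reduced variability.
rng = np.random.default_rng(0)
rest = 800 + rng.normal(0, 50, 300)
stress = 700 + rng.normal(0, 15, 300)

print(rmssd(rest) > rmssd(stress))  # reduced HRV under simulated stress
```

The seed is fixed so the comparison is deterministic; real HRV analysis would also involve artifact rejection and possibly frequency-domain measures.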

2020

Subject Identification Based on Gait Using a RGB-D Camera

Authors
Rocha, AP; Fernandes, JM; Choupina, HMP; Vilas Boas, MC; Cunha, JPS;

Publication
Advances in Intelligent Systems and Computing

Abstract
Biometric authentication (i.e., verification of a given subject's identity using biological characteristics) relying on gait characteristics obtained in a non-intrusive way can be very useful in the area of security, for smart surveillance and access control. In this contribution, we investigated the possibility of carrying out subject identification based on a predictive model built using machine learning techniques and features extracted from 3-D body joint data provided by a single low-cost RGB-D camera (Microsoft Kinect v2). We obtained a dataset including 400 gait cycles from 20 healthy subjects, with 25 anthropometric measures and gait parameters per gait cycle. Different machine learning algorithms were explored: k-nearest neighbors, decision tree, random forest, support vector machines, multilayer perceptron, and multilayer perceptron ensemble. The algorithm that led to the model with the best trade-off between the considered evaluation metrics was the random forest: overall accuracy of 99%, class accuracy of 100±0%, and F1 score of 99±2%. These results show the potential of using an RGB-D camera for subject identification based on quantitative gait analysis. © 2020, Springer Nature Switzerland AG.
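As a rough, self-contained sketch of the kind of pipeline this abstract describes (synthetic data standing in for the 25 real per-cycle gait features; this is not the authors' code), a random forest can identify subjects from per-gait-cycle feature vectors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

n_subjects, cycles_per_subject, n_features = 20, 20, 25
# Each subject gets a characteristic feature centroid; individual gait
# cycles are noisy samples around it (a stand-in for real gait data).
centroids = rng.normal(0, 1, (n_subjects, n_features))
X = np.vstack([c + rng.normal(0, 0.2, (cycles_per_subject, n_features))
               for c in centroids])
y = np.repeat(np.arange(n_subjects), cycles_per_subject)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"hold-out accuracy: {acc:.2f}")
```

On real gait data, per-subject cross-validation (holding out whole recording sessions rather than random cycles) would give a less optimistic estimate.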

2020

iHandU: A Novel Quantitative Wrist Rigidity Evaluation Device for Deep Brain Stimulation Surgery

Authors
Murias Lopes, E; Vilas Boas, MD; Dias, D; Rosas, MJ; Vaz, R; Silva Cunha, JP;

Publication
SENSORS

Abstract
Deep brain stimulation (DBS) surgery is the gold-standard therapeutic intervention in Parkinson's disease (PD) with motor complications notwithstanding drug therapy. In the intraoperative evaluation of DBS's efficacy, neurologists impose a passive wrist flexion movement and qualitatively describe the perceived decrease in rigidity under different stimulation parameters and electrode positions. To tackle this subjectivity, we designed a wearable device that quantitatively evaluates wrist rigidity changes during the neurosurgical procedure, supporting physicians in decision-making when setting the stimulation parameters and reducing surgery time. The system comprises a gyroscope sensor embedded in a textile band for the patient's hand that communicates with a smartphone via Bluetooth; it has been evaluated on three datasets, showing an average accuracy of 80%. In this work, we present a system that has gone through four iterations since 2015, improving accuracy, usability, and reliability. We review the work done so far, outlining the iHandU system's evolution as well as the main challenges, lessons learned, and future steps to improve it. We also introduce the latest version (iHandU 4.0), currently used in DBS surgeries at São João Hospital in Portugal.
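The abstract does not disclose how the gyroscope signal is turned into a rigidity score, so the following is purely a hypothetical sketch of one plausible descriptor: a rigid wrist resists passive flexion, producing a less smooth angular-velocity profile, which can be captured by a normalized-jerk measure.

```python
import numpy as np

def rigidity_descriptor(angular_velocity, fs=100.0):
    """Hypothetical rigidity descriptor (not the iHandU algorithm):
    RMS of the angular-velocity derivative ("jerkiness"), normalized
    by the movement's peak angular velocity."""
    w = np.asarray(angular_velocity, dtype=float)
    jerk = np.gradient(w) * fs  # per-sample derivative scaled to rad/s^2
    return float(np.sqrt(np.mean(jerk ** 2)) / (np.max(np.abs(w)) + 1e-12))

t = np.linspace(0, 2, 200)
smooth = np.sin(2 * np.pi * t)  # compliant wrist: smooth passive flexion
rng = np.random.default_rng(1)
rigid = np.sin(2 * np.pi * t) + rng.normal(0, 0.15, t.size)  # resisted movement

print(rigidity_descriptor(smooth) < rigidity_descriptor(rigid))
```

A real intraoperative system would additionally filter the raw gyroscope stream and calibrate the descriptor against neurologists' qualitative ratings, as the paper's 80% accuracy figure suggests was done.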

2020

iLoF: An intelligent Lab on Fiber Approach for Human Cancer Single-Cell Type Identification

Authors
Paiva, JS; Jorge, PAS; Ribeiro, RSR; Balmana, M; Campos, D; Mereiter, S; Jin, CS; Karlsson, NG; Sampaio, P; Reis, CA; Cunha, JPS;

Publication
SCIENTIFIC REPORTS

Abstract
With the advent of personalized medicine, there is a movement to develop "smaller" and "smarter" microdevices that are able to distinguish similar cancer subtypes. Tumor cells display major differences when compared to their natural counterparts, due to alterations in fundamental cellular processes such as glycosylation. Glycans are involved in tumor cell biology and have been considered suitable cancer biomarkers. Thus, more selective cancer screening assays can be developed through the detection of specific altered glycans on the surface of circulating cancer cells. Currently, this is only possible through time-consuming assays. In this work, we propose the "intelligent" Lab on Fiber (iLoF) device: a high-resolution, fast, and portable method for tumor single-cell type identification and isolation. We apply an Artificial Intelligence approach to the back-scattered signal arising from a cell trapped by a micro-lensed optical fiber. As a proof of concept, we show that iLoF is able to discriminate two human cancer cell models sharing the same genetic background but displaying different surface glycosylation profiles, with an accuracy above 90% and in 2.3 seconds. We envision the incorporation of iLoF in an easy-to-operate microchip for cancer identification, which would allow further biological characterization of the captured circulating live cells.
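The core idea, classifying cell types from a back-scattered optical signal, can be sketched on toy data. The features and classifier below (FFT magnitudes plus a linear SVM) are illustrative assumptions, not the iLoF architecture:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
fs, n_samples, n_per_class = 1000, 512, 60

def backscatter(freq, n):
    """Toy back-scattered intensity traces: each 'cell type' is given a
    slightly different dominant oscillation frequency plus noise."""
    t = np.arange(n_samples) / fs
    return np.array([np.sin(2 * np.pi * freq * t + rng.uniform(0, 2 * np.pi))
                     + rng.normal(0, 0.5, n_samples) for _ in range(n)])

# Two synthetic "cell types" differing only in their spectral signature.
X_time = np.vstack([backscatter(40, n_per_class), backscatter(55, n_per_class)])
y = np.repeat([0, 1], n_per_class)

# Spectral features: magnitude of the real FFT of each trace.
X = np.abs(np.fft.rfft(X_time, axis=1))

scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The point of the sketch is the pipeline shape (raw trace, then a spectral representation, then a classifier), which is a common way to make subtle oscillatory differences separable.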

2020

A DEEP LEARNING ARCHITECTURE FOR EPILEPTIC SEIZURE CLASSIFICATION BASED ON OBJECT AND ACTION RECOGNITION

Authors
Karacsony, T; Loesch Biffar, AM; Vollmar, C; Noachtar, S; Cunha, JPS;

Publication
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING

Abstract
Epilepsy affects approximately 1% of the world's population. The semiology of epileptic seizures contains major clinical signs used to classify epilepsy syndromes, currently evaluated by epileptologists through simple visual inspection of video. There is a need for automatic and semi-automatic methods for seizure detection and classification, to better support patient monitoring management and diagnostic decisions. One of the currently promising approaches is marker-less computer vision. In this paper, an end-to-end deep learning approach is proposed for binary classification of frontal vs. temporal lobe epilepsies based solely on seizure videos. The system utilizes infrared (IR) videos of the seizures, as IR is used 24/7 in hospitals' epilepsy monitoring units. The architecture employs transfer learning from large object recognition ("static") and human action recognition ("dynamic") datasets, such as ImageNet and Kinetics-400, to extract and classify the clinically known spatiotemporal features of seizures. The developed classification architecture achieves a 5-fold cross-validation F1-score of 0.844 +/- 0.042. This architecture has the potential to support physicians with diagnostic decisions and might be applied for online applications in epilepsy monitoring units. Furthermore, it may in the near future be used jointly with synchronized scene-depth 3-D information and EEG from the seizures.
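The transfer-learning idea, combining embeddings from a pretrained "static" backbone and a pretrained "dynamic" backbone and training only a classifier on top, can be sketched without the heavy video models. Everything below (feature dimensions, the late-fusion-by-concatenation scheme, the linear classifier) is an illustrative assumption, with random vectors standing in for real backbone embeddings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
n_videos = 100

# Stand-ins for embeddings a pretrained "static" (ImageNet-style) and
# "dynamic" (Kinetics-style) backbone would produce per seizure video.
static_feats = rng.normal(0, 1, (n_videos, 128))
dynamic_feats = rng.normal(0, 1, (n_videos, 128))
y = rng.integers(0, 2, n_videos)  # 0 = frontal lobe, 1 = temporal lobe

# Shift class-1 embeddings so the two synthetic classes are separable.
static_feats[y == 1] += 0.5
dynamic_feats[y == 1] += 0.5

# Late fusion: concatenate both embeddings, then train a linear classifier,
# reporting mean +/- std F1 over stratified 5-fold cross-validation.
X = np.hstack([static_feats, dynamic_feats])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
f1 = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv, scoring="f1")
print(f"f1: {f1.mean():.3f} +/- {f1.std():.3f}")
```

The appeal of this setup is that the pretrained backbones stay frozen, so only a small classifier must be fit to the limited number of labeled seizure videos.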

2017

Automated volumetry of hippocampus is useful to confirm unilateral mesial temporal sclerosis in patients with radiologically positive findings

Authors
Silva, G; Martins, C; Moreira da Silva, N; Vieira, D; Costa, D; Rego, R; Fonseca, J; Silva Cunha, JP;

Publication
Neuroradiology Journal

Abstract
Background and purpose: We evaluated two methods to identify mesial temporal sclerosis (MTS): visual inspection by experienced epilepsy neuroradiologists based on structural magnetic resonance imaging sequences, and automated hippocampal volumetry provided by a processing pipeline based on the FMRIB Software Library. Methods: This retrospective study included patients from the epilepsy monitoring unit database of our institution. All patients underwent brain magnetic resonance imaging in 1.5T and 3T scanners with protocols that included thin coronal T2, T1, and fluid-attenuated inversion recovery, and isometric T1 acquisitions. Two neuroradiologists with experience in epilepsy, blinded to clinical data, evaluated the magnetic resonance images for the diagnosis of MTS. The automated diagnosis of MTS involved calculating a volumetric asymmetry index between the two hippocampi of each patient and a threshold value for the presence of MTS, obtained through a receiver operating characteristic (ROC) curve. Hippocampi were segmented for volumetric quantification using the FIRST tool and fslstats from the FMRIB Software Library. Results: The final cohort included 19 patients with unilateral MTS (14 left side): 14 women, mean age 43.4 ± 10.4 years. Neuroradiologists had a sensitivity of 100% and a specificity of 73.3% for detecting MTS (gold standard; k = 0.755). Automated hippocampal volumetry had a sensitivity of 84.2% and a specificity of 86.7% (k = 0.704). Combined, the methods had a sensitivity of 84.2% and a specificity of 100% (k = 0.825). Conclusions: Automated volumetry of the hippocampus could play an important role in temporal lobe epilepsy evaluation, namely in confirming unilateral MTS diagnosis in patients with radiologically suggestive findings. © SAGE Publications.
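The asymmetry-index-plus-ROC-threshold step can be sketched on toy volumes. The abstract does not give the exact index formula, so the version below (absolute volume difference normalized by the mean of both volumes, in percent) is one common choice, not necessarily the authors':

```python
import numpy as np
from sklearn.metrics import roc_curve

def asymmetry_index(vol_left, vol_right):
    """A common hippocampal asymmetry index (the paper's exact formula is
    an assumption here): |VL - VR| / mean(VL, VR), expressed in percent."""
    vl, vr = float(vol_left), float(vol_right)
    return abs(vl - vr) / ((vl + vr) / 2) * 100

# Toy hippocampal volumes (mm^3): MTS cases show marked unilateral atrophy,
# controls are roughly symmetric.
rng = np.random.default_rng(5)
controls = [(v, v * rng.uniform(0.95, 1.05)) for v in rng.uniform(3000, 4000, 15)]
mts = [(v, v * rng.uniform(0.6, 0.8)) for v in rng.uniform(3000, 4000, 19)]

ai = np.array([asymmetry_index(l, r) for l, r in controls + mts])
labels = np.array([0] * len(controls) + [1] * len(mts))

# Pick the threshold maximizing Youden's J (sensitivity + specificity - 1).
fpr, tpr, thr = roc_curve(labels, ai)
best = thr[np.argmax(tpr - fpr)]
print(f"AI threshold: {best:.1f}%")
```

With real data the two groups overlap, so the ROC curve trades sensitivity against specificity rather than separating them perfectly as in this toy example.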
