2013
Authors
Marques, J; Vasconcelos, A; Teixeira, LF;
Publication
Studies in Health Technology and Informatics
Abstract
This paper describes the design and development of a tablet-based gaming platform targeting the senior population, aiming to improve their overall wellbeing by stimulating their cognitive capabilities and promoting social interaction between players. To achieve these goals, we started by studying the specific characteristics of the senior user, as well as what makes a game appealing to the player. Furthermore, we investigated why the tablet proves to be an advantageous device for our target audience. Based on the results of our research, we developed a solution that incorporates cognitive and social mechanisms into its games, while performing iterative evaluations together with the final user by adopting a user-centered design methodology. In each design phase, a pre-selected group of senior participants experimented with the game platform and provided feedback to improve its features and usability. Through a series of short-term evaluations and a long-term evaluation, the game platform proved to be appealing to its intended users, providing an enjoyable gaming experience.
2018
Authors
de Sousa, P; Esteves, T; Campos, D; Duarte, F; Santos, J; Leao, J; Xavier, J; de Matos, L; Camarneiro, M; Penas, M; Miranda, M; Silva, R; Neves, AJR; Teixeira, L;
Publication
VIPIMAGE 2017
Abstract
Gesture recognition is very important for human-robot interfaces. In this paper, we present a novel depth-based method for gesture recognition to improve the interaction with a service robot: an autonomous shopping cart used mostly by people with reduced mobility. In the proposed solution, the identification of the user is already implemented by the software present on the robot, where a bounding box focusing on the user is extracted. Based on the analysis of the depth histogram, the distance from the user to the robot is calculated and the user is segmented from the background. Then, a region-growing algorithm is applied to remove all other objects in the image. We apply a threshold technique to the original image again, to obtain all the objects in front of the user. By intersecting the threshold-based segmentation result with the region-growing result, we obtain candidate objects for the user's arms. After a labelling algorithm separates each object, Principal Component Analysis is applied to each one to obtain its centre and orientation. Using that information, we intersect the silhouette of the arm with a line, and the upper point of the intersection indicates the hand position. A Kalman filter is then applied to track the hand, and gestures (Start, Stop, Pause) are recognised using state machines. We tested the proposed approach in a real-case scenario with different users and obtained an accuracy of around 89.7%.
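The paper itself does not include an implementation; the following is a minimal Python sketch (NumPy and OpenCV) of the PCA-plus-Kalman hand-tracking step described in the abstract. The function names, the noise covariances, and the choice of the blob's extreme point along the principal axis as the hand estimate are our assumptions, not the authors' code.

```python
import numpy as np
import cv2

def arm_orientation(mask):
    """PCA over the pixel coordinates of a segmented arm blob:
    returns its centre and principal axis (orientation)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    centre = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centre).T))
    return centre, eigvecs[:, np.argmax(eigvals)]  # dominant direction

def make_hand_tracker():
    """Constant-velocity Kalman filter over the 2-D hand position
    (placeholder noise covariances)."""
    kf = cv2.KalmanFilter(4, 2)  # state (x, y, vx, vy), measurement (x, y)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf

def track_hand(kf, mask):
    """Estimate the hand as the blob's extreme point along the principal
    axis (the axis sign is ambiguous; a real system would orient it away
    from the torso), then smooth the estimate with the Kalman filter."""
    centre, axis = arm_orientation(mask)
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    hand = pts[np.argmax((pts - centre) @ axis)]
    kf.predict()
    est = kf.correct(hand.reshape(2, 1))  # corrected state, shape (4, 1)
    return est[:2].ravel()                # smoothed (x, y)
```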
2018
Authors
Ferreira, MF; Camacho, R; Teixeira, LF;
Publication
PROCEEDINGS 2018 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM)
Abstract
Cancer is one of the most serious health problems of our time. One approach to automatically classifying tumor samples is to analyze derived molecular information. Previous work by Teixeira et al. compared different methods of data oversampling and feature reduction, as well as deep (stacked) denoising autoencoders followed by a shallow layer for classification. In this work, we compare the performance of six different types of autoencoder (AE), combined with two different approaches when training the classification model: (a) fixing the weights after pretraining an AE, and (b) allowing fine-tuning of the entire network. We also apply two different strategies for embedding the AE into the classification network: (1) importing only the encoding layers, and (2) importing the complete AE. Our best result was the combination of unsupervised feature learning through a single-layer denoising AE, followed by its complete import into the classification network and subsequent fine-tuning through supervised training, achieving an F1 score of 99.61% +/- 0.54. We conclude that a reconstruction of the input space, combined with a deeper classification network, outperforms previous work without resorting to data augmentation techniques.
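As an illustration of the best-performing configuration, strategy (2) combined with option (b), here is a minimal PyTorch sketch. The layer sizes, noise level, and training-step structure are placeholders of ours, not the paper's settings.

```python
import torch
import torch.nn as nn

n_features, n_hidden, n_classes = 2000, 128, 10  # placeholder dimensions

# Single-layer denoising autoencoder: corrupt the input with Gaussian
# noise during pretraining and learn to reconstruct the clean version.
encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
decoder = nn.Linear(n_hidden, n_features)
ae = nn.Sequential(encoder, decoder)

def pretrain_step(x, optimizer, noise_std=0.1):
    optimizer.zero_grad()
    recon = ae(x + noise_std * torch.randn_like(x))  # denoising objective
    loss = nn.functional.mse_loss(recon, x)
    loss.backward()
    optimizer.step()
    return loss.item()

# Strategy (2) + option (b): import the *complete* AE (encoder and
# decoder) into the classification network, append a shallow head,
# and fine-tune the entire network with supervision.
classifier = nn.Sequential(ae, nn.Linear(n_features, n_classes))

def finetune_step(x, y, optimizer):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(classifier(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage: pretrain with torch.optim.Adam(ae.parameters()), then
# fine-tune with torch.optim.Adam(classifier.parameters()).
```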
2019
Authors
Torto, IR; Fernandes, K; Teixeira, LF;
Publication
Pattern Recognition and Image Analysis - 9th Iberian Conference, IbPRIA 2019, Madrid, Spain, July 1-4, 2019, Proceedings, Part I
Abstract
Convolutional Neural Networks, as well as other deep learning methods, have shown remarkable performance on tasks like classification and detection. However, these models largely remain black boxes. With the widespread use of such networks in real-world scenarios and the growing demand for a right to explanation, especially in highly regulated areas like medicine and criminal justice, generating accurate predictions is no longer enough. Machine learning models have to be explainable, i.e., understandable to humans, which entails being able to present the reasons behind their decisions. While most of the literature focuses on post-model methods, we propose an in-model CNN architecture composed of an explainer and a classifier. The model is trained end-to-end, with the classifier taking as input not only images from the dataset but also the explainer's resulting explanation, thus allowing the classifier to focus on the relevant areas of that explanation. We also developed a synthetic dataset generation framework that allows for automatic annotation and the creation of easy-to-understand images that do not require the knowledge of an expert to be explained. Promising results were obtained, especially when using L1 regularisation, validating the potential of the proposed architecture and further encouraging research to improve its explainability and performance.
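A minimal PyTorch sketch of the explainer-classifier coupling described above. The concrete layer sizes, the multiplicative masking used to feed the explanation to the classifier, and the L1 weight are our assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ExplainerClassifier(nn.Module):
    """In-model explainability: an explainer produces a spatial relevance
    map in [0, 1], which modulates the image seen by the classifier; both
    parts are trained jointly, end to end."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.explainer = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        expl = self.explainer(x)            # (B, 1, H, W) relevance map
        logits = self.classifier(x * expl)  # classifier sees relevant areas
        return logits, expl

model = ExplainerClassifier()
x = torch.randn(4, 3, 64, 64)
logits, expl = model(x)
# L1 regularisation pushes the explanation towards sparse, focused maps.
loss = nn.functional.cross_entropy(logits, torch.randint(0, 2, (4,))) \
       + 0.1 * expl.abs().mean()
loss.backward()
```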
2020
Authors
Ferreira, P; Teixeira, JG; Teixeira, LF;
Publication
EXPLORING SERVICE SCIENCE (IESS 2020)
Abstract
Services are the backbone of modern economies and are increasingly supported by technology. Meanwhile, there is accelerated growth in new technologies capable of learning on their own and providing increasingly relevant results, i.e., Artificial Intelligence (AI). While there have been significant advances in the capabilities of AI, the impact of this technology on service provision is still unknown. Conceptual research either claims that AI offers a way to augment human capabilities or positions it as a threat to human jobs. The objective of this study is to better understand the impact of AI on service, namely by understanding current trends in AI and how they impact, and will impact, service provision. To achieve this, a qualitative study following the Grounded Theory methodology was performed with ten Artificial Intelligence experts selected from industry and academia.
2020
Authors
Lourenco, C; Tjepkema-Cloostermans, MC; Teixeira, LF; van Putten, MJAM;
Publication
XV MEDITERRANEAN CONFERENCE ON MEDICAL AND BIOLOGICAL ENGINEERING AND COMPUTING - MEDICON 2019
Abstract
Interictal Epileptiform Discharge (IED) detection in EEG signals is widely used in the diagnosis of epilepsy. Visual analysis of EEGs by experts remains the gold standard, outperforming current computer algorithms. Deep learning methods can be an automated way to perform this task. We trained a VGG network using 2-s EEG epochs from patients with focal and generalized epilepsy (39 and 40 patients, respectively, 1977 epochs total) and 53 normal controls (110770 epochs). Five-fold cross-validation was performed on the training set. Model performance was assessed on an independent set (734 IEDs from 20 patients with focal and generalized epilepsy and 23040 normal epochs from 14 controls). Network visualization techniques (filter visualization and occlusion) were applied. The VGG yielded an Area Under the ROC Curve (AUC) of 0.96 (95% Confidence Interval (CI): 0.95-0.97). At 99% specificity, the sensitivity was 79%, and only one sample was misclassified per two minutes of analyzed EEG. Filter visualization showed that filters from higher-level layers display patches of activity indicative of IED detection. Occlusion showed that the model correctly identified IED shapes. We show that deep neural networks can reliably identify IEDs, which may lead to a fundamental shift in clinical EEG analysis.
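Of the two visualization techniques mentioned, occlusion is straightforward to sketch. Below is a minimal PyTorch version for a single 2-s epoch; the window and stride lengths, the IED class index, and the assumption that the model maps a (batch, channels, samples) tensor to class logits are ours, not details from the paper.

```python
import torch

def occlusion_map(model, epoch, window=20, stride=10, target=1):
    """Slide a zeroed-out window over the EEG epoch and record the drop
    in the model's IED probability; large drops mark the waveform
    segments the network relies on. `epoch` has shape (channels, samples)."""
    model.eval()
    drops = []
    with torch.no_grad():
        base = torch.softmax(model(epoch.unsqueeze(0)), dim=1)[0, target]
        for start in range(0, epoch.shape[-1] - window + 1, stride):
            occluded = epoch.clone()
            occluded[:, start:start + window] = 0.0  # mask this segment
            prob = torch.softmax(model(occluded.unsqueeze(0)), dim=1)[0, target]
            drops.append((start, (base - prob).item()))
    return drops
```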