
Publications by Vitor Manuel Filipe

2019

Learning Computer Vision using a Humanoid Robot

Authors
Vital, JPM; Fonseca Ferreira, NMF; Valente, A; Filipe, V; Soares, SFSP;

Publication
PROCEEDINGS OF 2019 IEEE GLOBAL ENGINEERING EDUCATION CONFERENCE (EDUCON)

Abstract
This paper presents an innovative and motivating methodology for learning vision systems using a humanoid robot, the NAO robot. Vision systems are an area of growing development and of interest to engineering students. This learning approach was applied with students of the Master in Electrical Engineering. The goal is to introduce students to the main approaches to visual object recognition and human face recognition using computer vision techniques, to be embedded in a social robot so that it can interact with human beings. The NAO robot is an educational platform that is easy to learn to program, with high sensory ability and two cameras that capture images for processing.
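
The kind of introductory exercise described above can be illustrated with a minimal face-detection sketch using OpenCV's stock Haar cascade; the NAO-specific camera access is assumed to be handled elsewhere, and `frame.jpg` is a placeholder for an image grabbed from one of the robot's cameras:

```python
# Minimal face-detection sketch with OpenCV's bundled Haar cascade.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# "frame.jpg" stands in for a frame captured from the robot's camera.
frame = cv2.imread("frame.jpg")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Detect faces and draw bounding boxes around them.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", frame)
print(f"Detected {len(faces)} face(s)")
```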

2019

A review of assistive spatial orientation and navigation technologies for the visually impaired

Authors
Fernandes, H; Costa, P; Filipe, V; Paredes, H; Barroso, J;

Publication
UNIVERSAL ACCESS IN THE INFORMATION SOCIETY

Abstract
The overall objective of this work is to review the assistive technologies proposed by researchers in recent years to address the limitations in user mobility posed by visual impairment. This work presents an umbrella review. Visually impaired people often want more than just information about their location; they need to relate their current location to the features of the surrounding environment. Extensive research has been dedicated to building assistive systems. Assistive systems for human navigation generally aim to allow their users to navigate safely and efficiently in unfamiliar environments by dynamically planning the path based on the user's location, respecting the constraints posed by their special needs. Modern mobile assistive technologies are becoming more discreet and include a wide range of mobile computerized devices, including ubiquitous technologies such as mobile phones. Technology can be used to determine the user's location and their relation to the surroundings (context), generate navigation instructions, and deliver all this information to the blind user.

2019

Student concentration evaluation index in an E-learning context using facial emotion analysis

Authors
Sharma, P; Esengönül, M; Khanal, SR; Khanal, TT; Filipe, V; Reis, MJCS;

Publication
Communications in Computer and Information Science

Abstract
Analysis of student concentration can help to enhance the learning process. Emotions are directly related to, and directly reflect, students' concentration. This task is particularly difficult in an e-learning environment, where the student sits alone in front of a computer. In this paper, a prototype system is proposed to estimate the concentration level in real time from the facial emotions expressed during a lesson. An experiment was performed to evaluate the prototype, which was implemented as a client-side application using the C# code available in the Microsoft Azure Emotion API. We found that the emotions expressed are correlated with the students' concentration, and devised three distinct levels of concentration (high, medium, and low). © Springer Nature Switzerland AG 2019.
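
A sketch of the kind of mapping such a system might use, from per-frame emotion scores to the three concentration levels, is shown below. The emotion weights and thresholds here are assumptions for illustration only, not the values used in the paper:

```python
# Illustrative mapping from emotion scores to a coarse concentration level.
from typing import Dict

# Hypothetical weights: neutral/happy expressions taken as indicative of
# engagement, sadness/anger as distraction. Not the paper's actual values.
WEIGHTS: Dict[str, float] = {
    "neutral": 1.0,
    "happiness": 0.6,
    "surprise": 0.3,
    "sadness": -0.4,
    "anger": -0.6,
}

def concentration_level(emotions: Dict[str, float]) -> str:
    """Combine emotion scores (each in [0, 1]) into high/medium/low."""
    index = sum(WEIGHTS.get(name, 0.0) * score
                for name, score in emotions.items())
    if index >= 0.5:
        return "high"
    if index >= 0.2:
        return "medium"
    return "low"

# Example scores, as an emotion-recognition service might return them.
print(concentration_level({"neutral": 0.7, "happiness": 0.2, "sadness": 0.1}))
```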

2019

Classification of Physical Exercise Intensity Based on Facial Expression Using Deep Neural Network

Authors
Khanal, SR; Sampaio, J; Barroso, J; Filipe, V;

Publication
Universal Access in Human-Computer Interaction. Multimodality and Assistive Environments - 13th International Conference, UAHCI 2019, Held as Part of the 21st HCI International Conference, HCII 2019, Orlando, FL, USA, July 26-31, 2019, Proceedings, Part II

Abstract
If done properly, physical exercise can help maintain fitness and health. The benefits of physical exercise could be increased with real-time monitoring by measuring physical exercise intensity, which refers to how hard it is for a person to perform a specific task. This parameter can be estimated using various sensors, including contactless technology. Physical exercise intensity is usually synchronous with heart rate; therefore, if we measure heart rate, we can infer a particular level of physical exercise. In this paper, we propose a Convolutional Neural Network (CNN) to classify physical exercise intensity based on the analysis of facial images extracted from a video collected during sub-maximal exercises on a stationary bicycle, according to a standard protocol. The time slots of the video used to extract the frames were determined by heart rate. We tested different CNN models using the individual color components and grayscale images as input. The experiments were carried out separately with various numbers of classes, with the ground-truth level for each class defined by the heart rate. The dataset was prepared to classify the physical exercise intensity into two, three, and four classes. For each color model a CNN was trained and tested, and model performance was reported using a confusion matrix for each case. The most significant color channel in terms of accuracy was green. The average model accuracy was 100%, 99%, and 96% for the two-, three-, and four-class classification, respectively. © 2019, Springer Nature Switzerland AG.
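
A minimal Keras sketch of a CNN of this kind, trained on single-channel (e.g. green) face crops to predict an intensity class, might look as follows; the layer sizes, input resolution, and number of classes are illustrative assumptions, not the paper's exact model:

```python
# Small CNN for single-channel image classification (illustrative only).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 3            # two-, three-, and four-class setups were tested
INPUT_SHAPE = (64, 64, 1)  # assumed resolution; one color channel

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```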

2019

A Low-Cost System to Estimate Leaf Area Index Combining Stereo Images and Normalized Difference Vegetation Index

Authors
Mendes, JM; Filipe, VM; dos Santos, FN; dos Santos, RM;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2019, PT I

Abstract
To determine the physiological state of a plant it is necessary to monitor it throughout its development, and one of the main parameters to monitor is the Leaf Area Index (LAI). The objective of this work was the development of a non-destructive methodology for LAI estimation in wine growing. The method is based on stereo images that allow a 3D representation of the bard to be obtained, in order to facilitate the segmentation process, since performing segmentation based on color alone is practically impossible given the high complexity of the application environment. In addition, the Normalized Difference Vegetation Index is used to distinguish the regions of the trunks from those of the leaves. As a low-cost and non-invasive method, it is a promising solution for LAI estimation, allowing monitoring of productivity changes and of the impact of climatic conditions on vine growth. © Springer Nature Switzerland AG 2019.
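
The Normalized Difference Vegetation Index mentioned above is the standard ratio NDVI = (NIR − Red) / (NIR + Red); a short sketch of its computation over co-registered near-infrared and red bands, with an assumed leaf/trunk threshold for illustration:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    # Avoid division by zero where both bands are 0.
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Example: healthy leaves reflect strongly in NIR, so their NDVI is high.
nir_band = np.array([[200, 50], [180, 10]], dtype=np.uint8)
red_band = np.array([[40, 45], [60, 10]], dtype=np.uint8)
leaf_mask = ndvi(nir_band, red_band) > 0.4  # assumed threshold
print(leaf_mask)
```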

2019

Vineyard Segmentation from Satellite Imagery Using Machine Learning

Authors
Santos, L; Santos, FN; Filipe, V; Shinde, P;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2019, PT I

Abstract
Steep slope vineyards are a complex scenario for the development of ground robots due to the harsh terrain conditions and unstable localization systems. Automating vineyard tasks (such as monitoring, pruning, spraying, and harvesting) requires advanced robotic path planning approaches. These approaches usually resort to Simultaneous Localization and Mapping (SLAM) techniques to acquire environment information, which requires the robot to first navigate through the entire vineyard. The analysis of satellite or aerial images is an alternative to SLAM techniques for building the first version of the occupancy grid map needed by robots. The state of the art in aerial vineyard image analysis is limited to flat vineyards with straight vine rows. This work considers a machine learning approach (an SVM classifier with a Local Binary Pattern (LBP) based descriptor) to perform vineyard segmentation from public satellite imagery. In experiments with a dataset of satellite images of vineyards from the Douro region, the proposed method achieved accuracy above 90%. © Springer Nature Switzerland AG 2019.
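
A minimal sketch of the LBP-plus-SVM pipeline named above: each image patch is described by a histogram of uniform local binary patterns and classified as vineyard or non-vineyard. The patch size, LBP parameters, and toy data are assumptions for illustration:

```python
# LBP texture descriptor + SVM classifier over image patches.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1       # LBP neighbors and radius (assumed)
N_BINS = P + 2    # number of uniform-LBP codes

def lbp_histogram(patch: np.ndarray) -> np.ndarray:
    """Normalized histogram of uniform LBP codes for a grayscale patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS))
    return hist / hist.sum()

# Toy training data: random patches standing in for labeled satellite
# tiles (1 = vineyard, 0 = non-vineyard).
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
           for _ in range(20)]
labels = [i % 2 for i in range(20)]

X = np.array([lbp_histogram(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)

# Classify a new patch.
test_patch = rng.integers(0, 256, size=(32, 32)).astype(np.uint8)
pred = clf.predict([lbp_histogram(test_patch)])[0]
print("vineyard" if pred else "non-vineyard")
```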
