
Publications by HumanISE

2020

Autonomous Driving Car Competition

Authors
Alves, JP; Fonseca Ferreira, NMF; Valente, A; Soares, S; Filipe, V;

Publication
ROBOTICS IN EDUCATION: CURRENT RESEARCH AND INNOVATIONS

Abstract
This paper presents the construction of an autonomous robot to participate in the autonomous driving competition of the National Festival of Robotics in Portugal, which relies on an open platform requiring basic knowledge of robotics, such as mechanics, control, computer vision and energy management. The project is an excellent way to teach robotics concepts to engineering students, since the platform gives students intuitive, hands-on exposure to current technologies and to the development and testing of new algorithms in mobile robotics, while also fostering team-building.

2020

UAV Landing Using Computer Vision Techniques for Human Detection

Authors
Safadinho, D; Ramos, J; Ribeiro, R; Filipe, V; Barroso, J; Pereira, A;

Publication
SENSORS

Abstract
The capability of drones to perform autonomous missions has led retail companies to use them for deliveries, saving time and human resources. In these services, the delivery depends on the Global Positioning System (GPS) to define an approximate landing point. However, the landscape can interfere with the satellite signal (e.g., tall buildings), reducing the accuracy of this approach. Changes in the environment can also invalidate the safety of a previously defined landing site (e.g., irregular terrain, swimming pool). Therefore, the main goal of this work is to improve the process of goods delivery using drones, focusing on the detection of the potential receiver. We developed a solution that was improved through an iterative assessment composed of five test scenarios. The prototype complements GPS with Computer Vision (CV) algorithms, based on Convolutional Neural Networks (CNN), running on a Raspberry Pi 3 with a Pi NoIR Camera (No InfraRed, i.e., without an infrared filter). The experiments were performed with the models Single Shot Detector (SSD) MobileNet-V2 and SSDLite-MobileNet-V2. The best results were obtained in the afternoon, with the SSDLite architecture, for distances and heights between 2.5 and 10 m, with recalls from 59% to 76%. The results confirm that a low-computing-power, cost-effective system can perform aerial human detection, estimating the landing position without an additional visual marker.
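The final step the abstract describes, turning a person detection into a landing position without a visual marker, can be sketched geometrically. The sketch below assumes a downward-facing pinhole camera; the function name and the default field-of-view values (taken as the nominal Raspberry Pi Camera Module v2 specs) are illustrative assumptions, not the paper's exact method.

```python
import math

def landing_offset(bbox_center, image_size, altitude_m,
                   hfov_deg=62.2, vfov_deg=48.8):
    """Estimate the ground offset (metres) from the drone to a detected
    person, assuming a downward-facing pinhole camera. The default FOV
    values are the nominal Raspberry Pi Camera Module v2 figures."""
    cx, cy = bbox_center
    w, h = image_size
    # Angular displacement of the detection from the optical axis, per axis.
    ang_x = math.radians(hfov_deg) * (cx - w / 2) / w
    ang_y = math.radians(vfov_deg) * (cy - h / 2) / h
    # Project those angles onto the ground plane at the given altitude.
    return altitude_m * math.tan(ang_x), altitude_m * math.tan(ang_y)
```

A detection centred in the frame maps to a zero offset (the drone is directly above the person); detections toward the image edge map to proportionally larger ground offsets as altitude grows.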

2020

Vineyard trunk detection using deep learning - An experimental device benchmark

Authors
Pinto de Aguiar, ASP; Neves dos Santos, FBN; Feliz dos Santos, LCF; de Jesus Filipe, VMD; Miranda de Sousa, AJM;

Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE

Abstract
Research and development in mobile robotics is continuously growing. The ability of a human-made machine to navigate safely in a given environment is a challenging task. In agricultural environments, robot navigation can reach high levels of complexity due to the harsh conditions such environments present. Thus, the presence of a reliable map where the robot can localize itself is crucial, and feature extraction becomes a vital step of the navigation process. In this work, the feature extraction issue in the vineyard context is solved using Deep Learning to detect high-level features - the vine trunks. An experimental performance benchmark between two devices is performed: NVIDIA's Jetson Nano and Google's USB Accelerator. Several models were retrained and deployed on both devices using a Transfer Learning approach. Specifically, MobileNets, Inception, and a lite version of You Only Look Once are used to detect vine trunks in real time. The models were retrained on an in-house dataset that is publicly available. The training dataset contains approximately 1600 annotated vine trunks in 336 different images. Results show that NVIDIA's Jetson Nano provides compatibility with a wider variety of Deep Learning architectures, while Google's USB Accelerator is limited to a single family of architectures for object detection. On the other hand, the Google device showed an overall average precision higher than the Jetson Nano's, with better runtime performance. The best result obtained in this work was an average precision of 52.98% with a runtime performance of 23.14 ms per image, for MobileNet-V2. Recent experiments showed that the detectors are suitable for use in the Localization and Mapping context.
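The benchmark's headline metric, average precision, is computed from the detector's confidence-ranked output. As a reference for how such a figure is obtained, here is a minimal all-point-interpolated AP in plain Python; the exact matching protocol (IoU threshold, interpolation variant) used in the paper is not stated, so this is a generic sketch.

```python
def average_precision(ranked_hits, num_gt):
    """All-point interpolated AP. ranked_hits is the detector's output
    sorted by descending confidence: True for a correct detection
    (matched to a ground-truth trunk), False for a false positive.
    num_gt is the total number of ground-truth objects."""
    tp = fp = 0
    points = []  # (recall, precision) after each successive detection
    for hit in ranked_hits:
        tp += hit
        fp += not hit
        points.append((tp / num_gt, tp / (tp + fp)))
    ap, prev_recall = 0.0, 0.0
    for recall, _ in points:
        # Precision envelope: best precision achieved at this recall or higher.
        best = max(p for r, p in points if r >= recall)
        ap += (recall - prev_recall) * best
        prev_recall = recall
    return ap
```

With two ground-truth trunks and a ranked output of hit, miss, hit, the envelope gives precision 1.0 up to recall 0.5 and 2/3 beyond it, so AP = 0.5 + 1/3 ≈ 0.83.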

2020

Robotics services at home support

Authors
Crisostomo, L; Ferreira, NMF; Filipe, V;

Publication
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS

Abstract
This article proposes a robotic system that aims to support the elderly in complying with their prescribed medication regimen. The robot uses its locomotion system to approach the elderly person and, through computer vision, detects the medicine packaging and identifies the person who should take it at the correct time. To accomplish this task, an application was developed, supported by a database with information about the elderly, their prescribed medicines and the respective intake schedule. The experimental work was done with the NAO robot, using development tools such as MySQL, Python, and OpenCV. Facial identification of the elderly and detection of medicine packaging are performed through computer vision algorithms that process the images acquired by the robot's camera. Experiments were carried out to evaluate the performance of the object recognition, facial detection, and facial recognition algorithms, using public databases. The tests made it possible to obtain qualitative metrics on the algorithms' performance. A proof-of-concept experiment was conducted in a simple scenario that recreates the environment of a dwelling with seniors who are assisted by the robot in taking their medicines.
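The scheduling side of the system — matching a recognised resident against the medicine/timetable database — can be sketched as a simple lookup. The schema, names, and tolerance window below are illustrative stand-ins for the MySQL database the abstract describes, not its actual structure.

```python
from datetime import time

# Illustrative in-memory stand-in for the medication database:
# each resident maps to (medicine, scheduled intake time) pairs.
SCHEDULE = {
    "maria": [("metformin", time(8, 0)), ("metformin", time(20, 0))],
    "joao":  [("aspirin", time(9, 0))],
}

def medicines_due(person, now, window_min=30):
    """Return the medicines the recognised person should take within
    window_min minutes of `now` (a datetime.time)."""
    due = []
    for med, t in SCHEDULE.get(person, []):
        delta = abs((now.hour * 60 + now.minute) - (t.hour * 60 + t.minute))
        if delta <= window_min:
            due.append(med)
    return due
```

In the described pipeline, `person` would come from the facial-recognition step and a non-empty result would trigger the robot to fetch and verify the corresponding packaging.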

2020

Individual's Neutral Emotional Expression Tracking for Physical Exercise Monitoring

Authors
Khanal, SR; Sampaio, J; Barroso, J; Filipe, V;

Publication
HCI International 2020 - Late Breaking Papers: Multimodality and Intelligence - 22nd HCI International Conference, HCII 2020, Copenhagen, Denmark, July 19-24, 2020, Proceedings

Abstract
Facial expression analysis is a widespread technology applied in various research areas, including sports science. In the last few decades, facial expression analysis has become a key technology for monitoring physical exercise. In this paper, a deep neural network is proposed to recognize seven basic emotions and their corresponding probability values (scores). The score of the neutral emotion was tracked throughout the exercise and related to heart rate and power generation on a stationary bicycle. It was found that, in a certain power range, a participant changes his/her expression drastically. Twelve university students participated in sub-maximal physical exercise on stationary bicycles. Facial video, heart rate, and power generation were recorded throughout the exercise. All the experiments, including the facial expression analysis, were carried out offline. The score of the neutral emotion and its derivative were plotted against maxHR% and maxPower%. The threshold point was determined by calculating the local minima, with the threshold power for all the participants falling within 80% to 90% of its maximum value. From the results, it is concluded that the facial expression differed from one individual to another, but it was more consistent with power generation. The threshold point can be a useful cue for various purposes, such as physiological parameter prediction and automatic load control in exercise equipment like treadmills and stationary bicycles. © 2020, Springer Nature Switzerland AG.
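The local-minima step the abstract mentions — locating the exercise load at which the neutral-emotion score dips — can be sketched as follows. This is an illustrative re-implementation of the idea, not the paper's exact procedure, and assumes the score curve contains at least one interior local minimum.

```python
def threshold_point(scores, powers):
    """Return the power value at the deepest interior local minimum of
    the neutral-emotion score curve. `scores` and `powers` are parallel
    lists sampled over the exercise session."""
    idx = min(
        range(1, len(scores) - 1),
        # Interior points that are not local minima are ranked last.
        key=lambda i: scores[i]
        if scores[i] < scores[i - 1] and scores[i] < scores[i + 1]
        else float("inf"),
    )
    return powers[idx]
```

For a score curve that dips at 80% of maximum power, the function returns that 80% point, matching the 80-90% band reported across participants.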

2020

A Clustering Approach for Prediction of Diabetic Foot Using Thermal Images

Authors
Filipe, V; Teixeira, P; Teixeira, A;

Publication
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS - ICCSA 2020, PT III

Abstract
Diabetes Mellitus (DM) is one of the most predominant diseases in the world, causing a high number of deaths. Diabetic foot is one of the main complications observed in diabetic patients, and it can lead to the development of ulcers. As the risk of ulceration is directly linked to an increase of temperature in the plantar region, several studies use thermography as a method for automatic identification of problems in the diabetic foot. Since the distribution of plantar temperature in diabetic patients does not follow a specific pattern, it is difficult to measure temperature changes and, therefore, there is interest in developing methods that allow the detection of these abnormal changes. The objective of this work is to develop a methodology that uses thermograms of the feet of diabetic and healthy individuals and analyzes the diversity of thermal changes in the plantar region, classifying each foot as belonging to a DM or a healthy individual. Based on the concept of clustering, a binary classifier to predict diabetic foot is presented; both a quantitative indicator and a classification threshold (evaluated and validated by several performance metrics) are presented. To measure the binary classifier's performance, experiments were conducted on a public dataset (with 122 images of DM individuals and 45 of healthy ones), and the following metrics were obtained: Sensitivity = 0.73, F-measure = 0.81 and AUC = 0.84.
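The clustering-plus-threshold idea can be illustrated on raw plantar temperature readings: cluster the temperatures, derive a spread indicator, and compare it to a cut-off. Both the indicator (distance between cluster centres) and the threshold value below are assumptions made for this sketch; the paper defines its own indicator and validates its threshold against performance metrics.

```python
def two_means(values, iters=20):
    """Simple 1-D 2-means: split temperature readings into a cool and a
    hot cluster, returning the two cluster centres."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        a = [v for v in values if abs(v - lo) <= abs(v - hi)]
        b = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return lo, hi

def classify_foot(temps, threshold=2.0):
    """Illustrative binary classifier: a spread between cluster centres
    above `threshold` (degrees C, an assumed value) flags the foot as
    likely belonging to a DM individual."""
    lo, hi = two_means(temps)
    return "DM" if hi - lo > threshold else "healthy"
```

A foot with a hot spot well above the surrounding plantar temperature yields a large inter-cluster spread and is flagged; a uniformly warm foot yields a small spread and is classified as healthy.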
