2019
Authors
Safadinho, D; Ramos, J; Ribeiro, R; Filipe, V; Barroso, J; Pereira, A;
Publication
Ambient Intelligence - Software and Applications -, 10th International Symposium on Ambient Intelligence, ISAmI 2019, Ávila, Spain, 26-28 June 2019.
Abstract
The possibility to engage in autonomous flight through geolocation-based missions turns Unmanned Aerial Vehicles (UAV) into valuable tools that save time and resources in services like deliveries and surveillance. Amazon is already developing a drop-by delivery service, but there are limitations regarding the client's identification, which can be analyzed in three phases: the approach to the potential receiver, the authorization through the client's identification, and the delivery itself. This work presents a solution for the first of these phases. First, the receiver indicates the GPS coordinates where he wants to receive the package. The UAV flies to that place and, on arrival, tries to locate the receiver through Computer Vision (CV) techniques, more precisely Deep Neural Networks (DNN), in order to continue to the next phase, the identification. After the proposal of the system's architecture and the implementation of the prototype, a test scenario was created to analyze the feasibility of the proposed techniques. The results were quite good for a system that looks for one person in a limited area defined by the destination coordinates, confirming the detection of one person with up to 92% accuracy from a 10 m height and 5 m horizontal distance in low-resolution images. © Springer Nature Switzerland AG 2020.
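A minimal sketch of the approach phase this abstract describes: fly to the requested coordinates, then scan a camera frame for a person with a pretrained detector. The paper does not publish code, so the fly_to callback and the MobileNet-SSD Caffe model files are hypothetical stand-ins for the UAV control API and the DNN actually used.

```python
# Sketch only: approach phase of the delivery-by-drone pipeline (assumptions noted).
import cv2

def approach_and_detect(lat, lon, fly_to, camera_index=0,
                        proto="MobileNetSSD_deploy.prototxt",        # assumed model files
                        weights="MobileNetSSD_deploy.caffemodel"):
    fly_to(lat, lon)                      # hypothetical autopilot call to the destination GPS fix
    net = cv2.dnn.readNetFromCaffe(proto, weights)
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    if not ok:
        return None
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()            # shape (1, 1, N, 7)
    # Class 15 is "person" in the 20-class PASCAL VOC ordering used by this model.
    for i in range(detections.shape[2]):
        class_id = int(detections[0, 0, i, 1])
        confidence = float(detections[0, 0, i, 2])
        if class_id == 15 and confidence > 0.5:
            return detections[0, 0, i, 3:7]   # normalized [x1, y1, x2, y2] of the receiver
    return None                               # no person found at the destination
```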
2020
Authors
Alves, JP; Fonseca Ferreira, NMF; Valente, A; Soares, S; Filipe, V;
Publication
ROBOTICS IN EDUCATION: CURRENT RESEARCH AND INNOVATIONS
Abstract
This paper presents the construction of an autonomous robot to participate in the autonomous driving competition of the National Festival of Robotics in Portugal, which relies on an open platform requiring only basic knowledge of robotics, such as mechanics, control, computer vision and energy management. The project is an excellent way to teach robotics concepts to engineering students, since the platform endows them with intuitive learning of current technologies, supports the development and testing of new algorithms in the area of mobile robotics, and also fosters good team-building.
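As an illustration of the kind of entry-level computer vision task such a competition platform exposes to students (not code from the paper), the sketch below finds the track's lane lines in a camera frame with OpenCV's Canny edge detector and probabilistic Hough transform.

```python
# Illustrative lane-line detection for a line-marked competition track (assumption:
# the track lines contrast strongly with the floor, as in the Portuguese event).
import cv2
import numpy as np

def find_lane_lines(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Probabilistic Hough transform returns line segments as [x1, y1, x2, y2].
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [seg[0] for seg in lines]
```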
2020
Authors
Safadinho, D; Ramos, J; Ribeiro, R; Filipe, V; Barroso, J; Pereira, A;
Publication
SENSORS
Abstract
The capability of drones to perform autonomous missions has led retail companies to use them for deliveries, saving time and human resources. In these services, the delivery depends on the Global Positioning System (GPS) to define an approximate landing point. However, the landscape can interfere with the satellite signal (e.g., tall buildings), reducing the accuracy of this approach. Changes in the environment can also invalidate the security of a previously defined landing site (e.g., irregular terrain, swimming pool). Therefore, the main goal of this work is to improve the process of goods delivery using drones, focusing on the detection of the potential receiver. We developed a solution that was improved along an iterative assessment composed of five test scenarios. The built prototype complements the GPS through Computer Vision (CV) algorithms, based on Convolutional Neural Networks (CNN), running on a Raspberry Pi 3 with a Pi NoIR Camera (No InfraRed, i.e., without an infrared filter). The experiments were performed with the models Single Shot Detector (SSD) MobileNet-V2 and SSDLite-MobileNet-V2. The best results were obtained in the afternoon, with the SSDLite architecture, for distances and heights between 2.5 and 10 m, with recalls from 59% to 76%. The results confirm that a low-computing-power, cost-effective system can perform aerial human detection, estimating the landing position without an additional visual marker.
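A rough sketch of the on-board detection step described above: SSDLite-MobileNet-V2 running through the TensorFlow Lite interpreter on a captured frame, keeping only "person" detections (class 0 in the COCO label map these models use). The model file name and the uint8 quantization are assumptions; the paper's exact conversion and thresholds may differ.

```python
# Sketch of person detection with a TFLite SSD model on a Raspberry Pi frame.
import numpy as np
import cv2
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="ssdlite_mobilenet_v2.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

def detect_person(frame_bgr, threshold=0.5):
    h, w = int(inp["shape"][1]), int(inp["shape"][2])          # e.g. 300x300
    rgb = cv2.cvtColor(cv2.resize(frame_bgr, (w, h)), cv2.COLOR_BGR2RGB)
    # Assumes a uint8-quantized model, as typically deployed on the Pi.
    interpreter.set_tensor(inp["index"], np.expand_dims(rgb, 0).astype(np.uint8))
    interpreter.invoke()
    # Standard TFLite detection postprocess outputs: boxes, classes, scores.
    boxes = interpreter.get_tensor(out[0]["index"])[0]   # [ymin, xmin, ymax, xmax]
    classes = interpreter.get_tensor(out[1]["index"])[0]
    scores = interpreter.get_tensor(out[2]["index"])[0]
    return [(b, s) for b, c, s in zip(boxes, classes, scores)
            if int(c) == 0 and s >= threshold]           # class 0 = person (COCO)
```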
2020
Authors
Pinto de Aguiar, ASP; Neves dos Santos, FBN; Feliz dos Santos, LCF; de Jesus Filipe, VMD; Miranda de Sousa, AJM;
Publication
COMPUTERS AND ELECTRONICS IN AGRICULTURE
Abstract
Research and development in mobile robotics are continuously growing. The ability of a human-made machine to navigate safely in a given environment is a challenging task. In agricultural environments, robot navigation can achieve high levels of complexity due to the harsh conditions they present. Thus, the presence of a reliable map where the robot can localize itself is crucial, and feature extraction becomes a vital step of the navigation process. In this work, the feature extraction issue in the vineyard context is solved using Deep Learning to detect high-level features - the vine trunks. An experimental performance benchmark between two devices is performed: NVIDIA's Jetson Nano and Google's USB Accelerator. Several models were retrained and deployed on both devices, using a Transfer Learning approach. Specifically, MobileNets, Inception, and a lite version of You Only Look Once are used to detect vine trunks in real time. The models were retrained on an in-house dataset that is publicly available. The training dataset contains approximately 1600 annotated vine trunks in 336 different images. Results show that NVIDIA's Jetson Nano provides compatibility with a wider variety of Deep Learning architectures, while Google's USB Accelerator is limited to a single family of architectures for object detection. On the other hand, the Google device showed an overall average precision higher than the Jetson Nano's, with better runtime performance. The best result obtained in this work was an average precision of 52.98% with a runtime performance of 23.14 ms per image, for MobileNet-V2. Recent experiments showed that the detectors are suitable for use in the Localization and Mapping context.
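For illustration, a hedged sketch of how one of the retrained trunk detectors could be run on Google's USB Accelerator through the Edge TPU delegate. The compiled model file name is an assumption; any detector exported with the Edge TPU compiler would follow the same pattern.

```python
# Sketch of Edge TPU inference for vine trunk detection (assumed model file).
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="vine_trunk_ssd_mobilenet_v2_edgetpu.tflite",      # assumed file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")])    # routes ops to the USB Accelerator
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
_, height, width, _ = inp["shape"]

image = Image.open("vineyard_row.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(inp["index"], np.expand_dims(np.asarray(image), 0))
interpreter.invoke()

out = interpreter.get_output_details()
boxes = interpreter.get_tensor(out[0]["index"])[0]    # normalized trunk boxes
scores = interpreter.get_tensor(out[2]["index"])[0]
trunks = [b for b, s in zip(boxes, scores) if s >= 0.5]
print(f"{len(trunks)} trunks detected")
```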
2020
Authors
Crisostomo, L; Ferreira, NMF; Filipe, V;
Publication
INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS
Abstract
This article proposes a robotic system that aims to help the elderly comply with the medication regimen to which they are subject. The robot uses its locomotion system to move to the elderly person and, through computer vision, detects the packaging of the medicine and identifies the person who should take it at the correct time. To accomplish this task, an application was developed, supported by a database with information about the elderly, the medicines they have been prescribed, and the respective dosage schedule. The experimental work was done with the NAO robot, using development tools like MySQL, Python, and OpenCV. The facial identification of the elderly and the detection of medicine packaging are performed through computer vision algorithms that process the images acquired by the robot's camera. Experiments were carried out to evaluate the performance of the object recognition, facial detection, and facial recognition algorithms, using public databases. The tests made it possible to obtain qualitative metrics about the algorithms' performance. A proof-of-concept experiment was conducted in a simple scenario that recreates the environment of a dwelling with seniors who are assisted by the robot in taking their medicines.
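An illustrative sketch of the facial recognition step: OpenCV's LBPH recognizer identifies the senior so that their prescriptions can be looked up in the database. The paper reports using Python, OpenCV, and MySQL but does not publish its code, so the model file and distance threshold below are assumptions.

```python
# Sketch of the face identification step (requires opencv-contrib-python for cv2.face).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("elderly_faces.yml")      # assumed pre-trained recognizer file

def identify_person(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label, distance = recognizer.predict(gray[y:y + h, x:x + w])
        if distance < 70:                 # lower LBPH distance = closer match (assumed cutoff)
            return label                  # id used to query the MySQL prescription table
    return None                           # nobody recognized in this frame
```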
2020
Authors
Mukherjee, R; Melo, M; Filipe, V; Chalmers, A; Bessa, M;
Publication
IEEE ACCESS
Abstract
Convolutional Neural Network (CNN)-based object detection models have achieved unprecedented accuracy in challenging detection tasks. However, existing detection models (detection heads) trained on 8-bit/pixel/channel low dynamic range (LDR) images are unable to detect relevant objects under lighting conditions where a portion of the image is either under-exposed or over-exposed. Although this issue can be addressed by introducing High Dynamic Range (HDR) content and training existing detection heads on it, there are several major challenges, such as the lack of real-life annotated HDR dataset(s) and the extensive computational resources required for training and the hyper-parameter search. In this paper, we introduce an alternative, backwards-compatible methodology to detect objects in challenging lighting conditions using existing CNN-based detection heads. This approach facilitates the use of HDR imaging without the immediate need for creating annotated HDR datasets and the associated expensive retraining procedure. The proposed approach uses HDR imaging to capture relevant details in high-contrast scenarios. Subsequently, the scene dynamic range and wider colour gamut are compressed using HDR-to-LDR mapping techniques such that the salient highlight, shadow, and chroma details are preserved. The mapped LDR image can then be used by existing pre-trained models to extract the relevant features required to detect objects in both the under-exposed and over-exposed regions of a scene. In addition, we conduct an evaluation to study the feasibility of using existing HDR-to-LDR mapping techniques with existing detection heads trained on standard detection datasets such as PASCAL VOC and MSCOCO. Results show that the images obtained from the mapping techniques are suitable for object detection, and some of them can significantly outperform traditional LDR images.
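A minimal sketch of the backwards-compatible pipeline the abstract describes: compress an HDR capture to 8-bit LDR with an off-the-shelf tone-mapping operator, then hand the result to any LDR-trained detection head. Reinhard's operator is used purely for illustration; the paper evaluates several HDR-to-LDR mapping techniques.

```python
# Sketch: HDR capture -> tone-mapped LDR -> input for an existing detection head.
import cv2
import numpy as np

hdr = cv2.imread("scene.hdr", cv2.IMREAD_ANYDEPTH | cv2.IMREAD_COLOR)  # float32 HDR radiance map
tonemap = cv2.createTonemapReinhard(gamma=2.2)    # one of several possible operators
ldr = tonemap.process(hdr)                        # float values in [0, 1]
ldr8 = np.clip(ldr * 255, 0, 255).astype(np.uint8)
cv2.imwrite("scene_tonemapped.png", ldr8)         # ready for any LDR-trained detector
```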