Publications

Publications by HumanISE

2021

Classification of car parts using deep neural network

Authors
Khanal, SR; Amorim, EV; Filipe, V;

Publication
Lecture Notes in Electrical Engineering

Abstract
Automobile quality inspection is a critical application area in which better quality can be achieved at low cost with advanced computer vision technology. Whether for quality inspection or for the automatic assembly of automobile parts, automatic recognition of automobile parts plays an important role. In this article, vehicle parts are classified using a deep neural network architecture based on ConvNet. The public CompCars dataset [1] was used to train and test a VGG16 deep learning architecture with a fully connected output layer of 8 neurons. The dataset has 20,439 RGB images of eight interior and exterior car parts taken from the front view. The dataset was first separated for training and testing purposes, and the training dataset was then divided again into training and validation sets. An average accuracy of 93.75% and a highest individual-part recognition accuracy of 97.2% were obtained. The classification of car parts contributes to various applications, including car manufacturing, model verification, and car inspection systems, among others. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021.
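The two-stage split described in the abstract (hold out a test set, then carve a validation set out of the remaining training data) can be sketched in plain Python. The 80/20 fractions and the fixed seed below are illustrative assumptions; the abstract does not state the exact proportions used.

```python
import random

def two_stage_split(items, test_frac=0.2, val_frac=0.2, seed=42):
    """Split items into train/val/test: first hold out a test set,
    then split a validation set off the remaining training data."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, rest = shuffled[:n_test], shuffled[n_test:]
    n_val = int(len(rest) * val_frac)
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test

# With the 20,439 images mentioned in the abstract:
train, val, test = two_stage_split(list(range(20439)))
```

The three subsets are disjoint and together cover the whole dataset, which is the property the abstract's two-stage procedure relies on.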

2021

Engine labels detection for vehicle quality verification in the assembly line: A machine vision approach

Authors
Capela, S; Silva, R; Khanal, SR; Campaniço, AT; Barroso, J; Filipe, V;

Publication
Lecture Notes in Electrical Engineering

Abstract
The automotive industry has extremely high product quality standards, not just because of the safety risks each faulty component can present, but because of the very brand image it must uphold at all times to stay competitive. In this paper, a prototype model is proposed for smart quality inspection using machine vision. Engine labels are detected using the Faster R-CNN and YOLOv3 object detection algorithms. All experiments were carried out using a custom dataset collected at an automotive assembly plant. Eight engine labels from two brands (Citroën and Peugeot) and more than ten models were detected. The results were evaluated using the metrics Intersection over Union (IoU), mean Average Precision (mAP), confusion matrix, precision, and recall, and were validated in three folds. The models were trained on a custom dataset containing images and annotation files collected and prepared manually. Data augmentation techniques were applied to increase image diversity. The result without data augmentation was 92.5%, and with it the value reached up to 100%. Faster R-CNN produced more accurate results than YOLOv3. © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2021.
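The IoU metric named in the evaluation above has a compact closed form for two axis-aligned boxes. This is a generic sketch, not the authors' code; the (x1, y1, x2, y2) corner convention is an assumption.

```python
def iou(box_a, box_b):
    """Intersection over Union for two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice).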

2021

Low-Cost and Reduced-Size 3D-Cameras Metrological Evaluation Applied to Industrial Robotic Welding Operations

Authors
de Souza, JPC; Rocha, LF; Filipe, VM; Boaventura Cunha, J; Moreira, AP;

Publication
2021 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC)

Abstract
Nowadays, robotic welding joint estimation, or weld seam tracking, has improved thanks to new developments in computer vision technologies. Typically, these advances focus on solving the inaccuracies that arise from the manual positioning of metal parts in welding workstations, especially in SMEs. Robotic arms endowed with the appropriate perception capabilities are a viable solution in this context, aiming to enhance production system agility without increasing production set-up time and costs. In this regard, this paper proposes a local perception pipeline to estimate welding joint points using small-sized, low-cost 3D cameras, following an eye-in-hand approach. A metrological comparison between the Intel RealSense D435, D415, and ZED Mini 3D cameras is also discussed, showing that the proposed pipeline, combined with standard commercial 3D cameras, is viable for welding operations in an industrial environment.

2021

Automatic quality inspection in the automotive industry: a hierarchical approach using simulated data

Authors
Rio-Torto, I; Campanico, AT; Pereira, A; Teixeira, LF; Filipe, V;

Publication
2021 IEEE 8th International Conference on Industrial Engineering and Applications (ICIEA)

Abstract

2021

Two-dimensional and three-dimensional techniques for determining the kinematic patterns for hindlimb obstacle avoidance during sheep locomotion

Authors
Diogo, CC; Fonseca, B; de Almeida, FSM; da Costa, LM; Pereira, JE; Filipe, V; Couto, PA; Geuna, S; Armada da Silva, PA; Mauricio, AC; Varejao, ASP;

Publication
CIENCIA RURAL

Abstract
Analysis of locomotion is often used as a measure of impairment and recovery following experimental peripheral nerve injury. Compared to rodents, sheep offer several advantages for studying peripheral nerve regeneration. In the present study, we compared, for the first time, two-dimensional (2D) and three-dimensional (3D) hindlimb kinematics during obstacle avoidance in the ovine model. This study obtained kinematic data to serve as a template for an objective assessment of ankle joint motion in future studies of common peroneal nerve (CP) injury and repair in the ovine model. The strategy used by the sheep to bring the hindlimb over a moderately high obstacle, set to 10% of its hindlimb length, was pronounced knee, ankle, and metatarsophalangeal flexion when approaching and clearing the obstacle. Although the overall time-course kinematic patterns of the hip, knee, ankle, and metatarsophalangeal joints were identical, we found significant differences between the 2D and 3D joint angular motion values. Our results showed that the most apparent changes during the gait cycle occurred at the ankle (2D-measured STANCEmax: 157 +/- 2.4 degrees vs. 3D-measured STANCEmax: 151 +/- 1.2 degrees; P<.05) and metatarsophalangeal joints (2D-measured STANCEmin: 151 +/- 2.2 degrees vs. 3D-measured STANCEmin: 162 +/- 2.2 degrees; P<.01 and 2D-measured TO: 163 +/- 4.9 degrees vs. 3D-measured TO: 177 +/- 1.4 degrees; P<.05), whereas the hip and knee joints were much less affected. The data and techniques described here are useful for an objective assessment of altered gait after CP injury and repair in an ovine model.
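A 2D joint angle of the kind reported above is typically computed from three marker positions: the joint center and one point on each adjacent segment. This is a generic sketch of that computation, not the authors' own code, and the marker coordinates in the example are invented.

```python
import math

def joint_angle_2d(proximal, joint, distal):
    """Included angle (degrees) at `joint` between the segments
    joint->proximal and joint->distal, from 2D marker coordinates."""
    v1 = (proximal[0] - joint[0], proximal[1] - joint[1])
    v2 = (distal[0] - joint[0], distal[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Hypothetical knee, ankle, and metatarsal marker positions (meters):
ankle_angle = joint_angle_2d((0.0, 1.0), (0.0, 0.0), (1.0, 0.0))
```

The 2D-vs-3D discrepancies the study reports arise because this projection discards out-of-plane motion, which a 3D marker reconstruction retains.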

2021

Visible and Thermal Image-Based Trunk Detection with Deep Learning for Forestry Mobile Robotics

Authors
da Silva, DQ; dos Santos, FN; Sousa, AJ; Filipe, V;

Publication
JOURNAL OF IMAGING

Abstract
Mobile robotics in forests is currently a hugely important topic due to the recurring occurrence of forest wildfires. Thus, on-site management of forest inventory and biomass is required. To tackle this issue, this work presents a study on ground-level detection of forest tree trunks in visible and thermal images using deep learning-based object detection methods. For this purpose, a forestry dataset composed of 2895 images was built and made publicly available. Using this dataset, five models were trained and benchmarked to detect tree trunks. The selected models were SSD MobileNetV2, SSD Inception-v2, SSD ResNet50, SSDLite MobileDet, and YOLOv4 Tiny. Promising results were obtained; for instance, YOLOv4 Tiny was the best model, achieving the highest AP (90%) and F1 score (89%). The inference time of these models was also evaluated on CPU and GPU. The results showed that YOLOv4 Tiny was the fastest detector running on GPU (8 ms). This work will enhance the development of vision perception systems for smarter forestry robots.
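The F1 score quoted above is the harmonic mean of precision and recall, both derived from true positive, false positive, and false negative counts. The formula below is the standard definition; the counts in the usage line are invented for illustration.

```python
def precision_recall_f1(tp, fp, fn):
    """Standard detection metrics from true/false positive and
    false negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for a trunk detector on a held-out set:
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=12)
```

Because F1 is a harmonic mean, it penalizes detectors that trade recall for precision (or vice versa) more heavily than a simple average would.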
