
Publications by HumanISE

2024

Fusion of Time-of-Flight Based Sensors with Monocular Cameras for a Robotic Person Follower

Authors
Sarmento, J; dos Santos, FN; Aguiar, AS; Filipe, V; Valente, A;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industries and agriculture. This type of collaboration can contribute to increased productivity by reducing physical strain on humans, which can lead to reduced injuries and improved morale. One crucial aspect of HRC is the ability of the robot to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers are capable of tracking humans but exhibit a significant angular error. To reduce this error, monocular cameras with Deep Learning object detection are used to detect humans. The reduction in angular error is achieved through sensor fusion, combining the outputs of both sensors using a histogram-based filter. This filter projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a remarkable 66.67% reduction in angular error compared to UWB localization alone is achieved. This approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This novel algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
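The histogram-based fusion described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grid size, cell resolution, and noise parameters below are assumptions. Each sensor's measurement is projected as a likelihood onto a robot-centred 2D grid (UWB contributes an accurate range but noisy bearing; the camera contributes a tight bearing only), and the two grids are intersected by multiplication before taking the most likely cell.

```python
import numpy as np

def fuse_uwb_camera(uwb_range, uwb_bearing, cam_bearing,
                    grid_size=100, cell=0.1,
                    uwb_range_sigma=0.1, uwb_bearing_sigma=0.35,
                    cam_bearing_sigma=0.05):
    """Project both measurements onto a robot-centred 2D grid,
    intersect them, and return the (x, y) of the most likely cell.
    All distances in metres, bearings in radians."""
    half = grid_size * cell / 2.0
    xs = np.linspace(-half, half, grid_size)
    ys = np.linspace(-half, half, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    r = np.hypot(gx, gy)            # range of each cell from the robot
    theta = np.arctan2(gy, gx)      # bearing of each cell

    # UWB likelihood: accurate range, large angular uncertainty
    p_uwb = (np.exp(-0.5 * ((r - uwb_range) / uwb_range_sigma) ** 2)
             * np.exp(-0.5 * ((theta - uwb_bearing) / uwb_bearing_sigma) ** 2))
    # Camera likelihood: bearing only, much tighter
    p_cam = np.exp(-0.5 * ((theta - cam_bearing) / cam_bearing_sigma) ** 2)

    fused = p_uwb * p_cam           # intersection of the two sources
    iy, ix = np.unravel_index(np.argmax(fused), fused.shape)
    return xs[ix], ys[iy]
```

Because the camera's bearing model is much tighter than the UWB one, the fused estimate snaps to the camera bearing while keeping the UWB range, which is the mechanism behind the angular-error reduction the abstract reports.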

2024

Maximising Attendance in Higher Education: How AI and Gamification Strategies Can Boost Student Engagement and Participation

Authors
Limonova, V; dos Santos, AMP; Sao Mamede, JHP; Filipe, VMD;

Publication
GOOD PRACTICES AND NEW PERSPECTIVES IN INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 4, WORLDCIST 2024

Abstract
The decline in student attendance and engagement in Higher Education (HE) is a pressing concern for educational institutions worldwide. Traditional lecture-style teaching is no longer effective, and students often become disinterested and miss classes, impeding their academic progress. While Gamification has improved learning outcomes, the integration of Artificial Intelligence (AI) has the potential to revolutionise the educational experience. The combination of AI and Gamification offers numerous research opportunities and paves the way for updated academic approaches to increase student engagement and attendance. Extensive research has been conducted to uncover the correlation between student attendance and engagement in HE. Studies consistently reveal that regular attendance leads to better academic performance. On the other hand, absenteeism can lead to disengagement and poor academic performance, stunting a student's growth and success. This position paper proposes integrating Gamification and AI to improve attendance and engagement. The approach involves incorporating game-like elements into the learning process to make it more interactive and rewarding. AI-powered tools can track student progress and provide personalised feedback, motivating students to stay engaged. This approach fosters a more engaging and fruitful educational journey, leading to better learning outcomes. This position paper will inspire further research in AI-Gamification integration, leading to innovative teaching methods that enhance student engagement and attendance in HE.

2024

Assessing Soil Ripping Depth for Precision Forestry with a Cost-Effective Contactless Sensing System

Authors
da Silva, DQ; Louro, F; dos Santos, FN; Filipe, V; Sousa, AJ; Cunha, M; Carvalho, JL;

Publication
ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE, VOL 2

Abstract
Forest soil ripping is a practice that involves turning over the soil in a forest area to prepare it for planting or sowing operations. Advanced sensing systems can help in this kind of forestry operation by ensuring ideal ripping depth and intensity, as these are important aspects with the potential to minimise the environmental impact of forest soil ripping. In this work, a cost-effective contactless system - capable of detecting and mapping soil ripping depth in real time - was developed and tested in the laboratory and in a realistic forest scenario. The proposed system integrates two single-point LiDARs and a GNSS sensor. To evaluate the system, ground-truth data was collected manually in the field during the operation of a machine with a ripping implement. The proposed solution was tested in real conditions, and the results showed that the ripping depth was estimated with minimal error. The accuracy and depth-mapping ability of the low-cost system justify its use to support improved soil preparation with machines or robots toward a sustainable forest industry.
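A rough illustration of the contactless measurement principle follows. The sensor geometry, function names, and units are assumptions for the sketch, not the paper's implementation: with a downward-facing single-point LiDAR ranging the furrow and a reference distance to the undisturbed ground, the ripping depth is the difference between the two readings, and each sample can be georeferenced with the GNSS fix to build the depth map.

```python
def ripping_depth(furrow_distance_m, ground_distance_m):
    """Depth of the rip: how much farther the furrow bottom is from the
    sensor than the undisturbed ground surface (clamped at zero)."""
    return max(0.0, furrow_distance_m - ground_distance_m)

def map_sample(depth_m, gnss_fix):
    """Georeference one depth sample with the GNSS position so a
    ripping-depth map can be built in real time."""
    lat, lon = gnss_fix
    return {"lat": lat, "lon": lon, "depth_m": depth_m}
```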

2024

Understanding the Impact of Perceived Challenge on Narrative Immersion in Video Games: The Role-Playing Game Genre as a Case Study

Authors
Domingues, JM; Filipe, V; Carita, A; Carvalho, V;

Publication
INFORMATION

Abstract
This paper explores the intricate interplay between perceived challenge and narrative immersion within role-playing game (RPG) video games, motivated by the escalating influence of game difficulty on player choices. A quantitative methodology was employed, utilizing three specific questionnaires for data collection on player habits and experiences, perceived challenge, and narrative immersion. The study consisted of two interconnected stages: an initial research phase to identify and understand player habits, followed by an in-person intervention involving the playing of three distinct RPG video games. During this intervention, selected players engaged with the chosen RPG video games separately, and after each session, responded to two surveys assessing narrative immersion and perceived challenge. The study concludes that a meticulous adjustment of perceived challenge by video game studios moderately influences narrative immersion, reinforcing the enduring prominence of the RPG genre as a distinctive choice in narrative.

2024

YOLO-Based Tree Trunk Types Multispectral Perception: A Two-Genus Study at Stand-Level for Forestry Inventory Management Purposes

Authors
da Silva, DQ; Dos Santos, FN; Filipe, V; Sousa, AJ; Pires, EJS;

Publication
IEEE ACCESS

Abstract
Stand-level perception and identification of forest tree species are needed for monitoring-related operations and are crucial for better biodiversity and inventory management in forested areas. This paper contributes to this knowledge domain by researching multispectral perception of tree trunk types at stand level. YOLOv5 and YOLOv8 - Convolutional Neural Networks specialized in object detection and segmentation - were trained to detect and segment two tree trunk genera (pine and eucalyptus) using datasets collected in a forest region in Portugal. The datasets comprise only two categories, corresponding to the two tree genera; they were manually annotated for object detection and segmentation with RGB and RGB-NIR images, and are publicly available. The Small variant of YOLOv8 was the best model at the detection and segmentation tasks, achieving an F1 measure above 87% and 62%, respectively. The findings of this study suggest that the use of an extended spectrum, including Visible and Near Infrared, produces superior results. The trained models can be integrated into forest tractors and robots to monitor forest genera across different spectra, which can assist forest managers in controlling their forest stands.
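For readers unfamiliar with how detection scores such as the F1 measure above are obtained, the sketch below shows the standard ingredients: intersection-over-union (IoU) to match predicted boxes against ground truth, and F1 from the resulting true/false positive and false negative counts. The box format and IoU threshold conventions are generic assumptions, not this paper's evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall; a prediction usually counts
    as a true positive when its IoU with a ground-truth box exceeds a
    threshold such as 0.5."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```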

2024

Deep Learning-Based Hip Detection in Pelvic Radiographs

Authors
Loureiro, C; Filipe, V; Franco-Gonçalo, P; Pereira, AI; Colaço, B; Alves-Pimenta, S; Ginja, M; Gonçalves, L;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT II, OL2A 2023

Abstract
Radiography is the primary modality for diagnosing canine hip dysplasia (CHD), with visual assessment of radiographic features sometimes used for accurate diagnosis. However, these features typically constitute small regions of interest (ROI) within the overall image, yet they hold vital diagnostic information and are crucial for pathological analysis. Consequently, automated detection of ROIs becomes a critical preprocessing step in classification or segmentation systems. By correctly extracting the ROIs, the efficiency of retrieval and identification of pathological signs can be significantly improved. In this research study, we employed the most recent iteration of the YOLO (version 8) model to detect hip joints in a dataset of 133 pelvic radiographs. The best-performing model achieved a mean average precision (mAP50:95) of 0.81, indicating highly accurate detection of hip regions. Importantly, this model displayed feasibility for training on a relatively small dataset and exhibited promising potential for various medical applications.
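A minimal sketch of the ROI-extraction step such a detector enables follows; the array layout and box format are assumptions, not the study's code. Once a hip joint is detected, its bounding box is cropped from the radiograph so that downstream classification or segmentation operates only on the diagnostically relevant region.

```python
import numpy as np

def crop_roi(image, box):
    """Crop a detected hip region (x1, y1, x2, y2) from an H x W image
    stored as a row-major NumPy array."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    return image[y1:y2, x1:x2]
```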
