Details

  • Name

    Vitor Manuel Filipe
  • Position

    Coordinating Researcher
  • Since

    01 October 2012
Publications

2024

Automated Detection of Refilling Stations in Industry Using Unsupervised Learning

Authors
Ribeiro J.; Pinheiro R.; Soares S.; Valente A.; Amorim V.; Filipe V.;

Publication
Lecture Notes in Mechanical Engineering

Abstract
The manual monitoring of refilling stations in industrial environments can lead to inefficiencies and errors, which can impact the overall performance of the production line. In this paper, we present an unsupervised detection pipeline for identifying refilling stations in industrial environments. The proposed pipeline uses a combination of image processing, pattern recognition, and deep learning techniques to detect refilling stations in visual data. We evaluate our method on a set of industrial images, and the findings demonstrate that the pipeline is reliable at detecting refilling stations. Furthermore, the proposed pipeline can automate the monitoring of refilling stations, eliminating the need for manual monitoring and thus improving industrial operations’ efficiency and responsiveness. This method is a versatile solution that can be applied to different industrial contexts without the need for labeled data or prior knowledge about the location of refilling stations.
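
The abstract does not disclose the concrete implementation, but the core idea of unsupervised detection can be illustrated by clustering deep features of image patches. The sketch below is a minimal approximation of that idea, assuming torch, torchvision, and scikit-learn; the file names and the number of clusters are hypothetical placeholders, not the authors' pipeline.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans

# Pretrained backbone used purely as a feature extractor (no labels needed).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def patch_features(paths):
    """Embed image patches (hypothetical file paths) into feature vectors."""
    feats = []
    with torch.no_grad():
        for p in paths:
            x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
            feats.append(backbone(x).squeeze(0).numpy())
    return feats

# Cluster the patches; the cluster corresponding to refilling stations is
# then identified once by inspection and reused for new, unlabeled images.
patches = ["patch_000.png", "patch_001.png", "patch_002.png"]  # hypothetical
labels = KMeans(n_clusters=2, n_init=10).fit_predict(patch_features(patches))
print(labels)
```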

2024

An Overview of Explainable Artificial Intelligence in the Industry 4.0 Context

Authors
Teixeira P.; Amorim E.V.; Nagel J.; Filipe V.;

Publication
Lecture Notes in Mechanical Engineering

Abstract
Artificial intelligence (AI) has evolved significantly in recent years and, if properly harnessed, may meet or exceed expectations in a wide range of application fields. However, because Machine Learning (ML) models have a black-box structure, end users frequently seek explanations for the predictions made by these learning models. Through tools, approaches, and algorithms, Explainable Artificial Intelligence (XAI) provides descriptions of black-box models to better understand the models’ behaviour and underlying decision-making mechanisms. The development of AI within companies enables them to participate in Industry 4.0. The need to provide users with transparent algorithms has given rise to the research field of XAI. This paper provides a brief overview and introduction to the subject of XAI while highlighting why this topic is attracting more and more attention in many sectors, such as industry.
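
As a concrete illustration of the kind of post-hoc explanation that XAI tools provide, the sketch below attributes a tree ensemble's predictions to individual input features using the shap package. It is a generic example on a public scikit-learn dataset, not a method taken from the paper.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# A "black-box" ensemble trained on a public tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into additive per-feature
# contributions, the behaviour-level transparency that XAI aims for.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values.shape)  # (5 samples, 10 features)
```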

2024

Pest Detection in Olive Groves Using YOLOv7 and YOLOv8 Models

Authors
Alves, A; Pereira, J; Khanal, S; Morais, AJ; Filipe, V;

Publication
OPTIMIZATION, LEARNING ALGORITHMS AND APPLICATIONS, PT II, OL2A 2023

Abstract
Modern agriculture faces major challenges in feeding the planet's fast-growing population in a sustainable way. One of the most important is the increasing destruction that pests cause to important crops. Controlling and managing pests is essential to reduce these losses, but pest detection and monitoring are very resource-consuming tasks. The recent development of computer vision-based technology has made it possible to automate pest detection efficiently. In Mediterranean olive groves, the olive fly (Bactrocera oleae Rossi) is considered the key pest of the crop. This paper presents olive fly detection using the lightweight YOLO-based models of versions 7 and 8, respectively YOLOv7-tiny and YOLOv8n. The proposed object detection models were trained, validated, and tested using two different image datasets collected in various locations of Portugal and Greece. The images consist of yellow sticky trap photos and McPhail trap photos containing olive fly specimens. The performance of the models was evaluated using precision, recall, mAP.50, and mAP.95. The best YOLOv7-tiny performance is 88.3% precision, 85% recall, 90% mAP.50, and 53% mAP.95. The best YOLOv8n performance is 85% precision, 85% recall, 90% mAP.50, and 55% mAP.95. The YOLOv8n model achieved worse results than YOLOv7-tiny on the dataset without negative images (images without olive fly specimens). With a view to installing an experimental prototype in the olive grove, the YOLOv8n model was deployed on a Raspberry Pi 3 microcomputer running Ubuntu Server 23.04.
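
For reference, the sketch below shows how a YOLOv8n detector of this kind is typically trained and evaluated with the ultralytics package. The data configuration file, image path, and hyperparameters are hypothetical placeholders, not the ones used in the paper.

```python
from ultralytics import YOLO

# Lightweight nano variant, as referenced in the abstract.
model = YOLO("yolov8n.pt")

# Train on a dataset described by a standard YOLO data config file
# ("olive_fly.yaml" is a hypothetical stand-in).
model.train(data="olive_fly.yaml", epochs=100, imgsz=640)

# Validation reports precision, recall, mAP50 and mAP50-95,
# the metrics quoted in the abstract.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)  # mAP@0.5 and mAP@0.5:0.95

# Inference on a trap photo (hypothetical path).
results = model("sticky_trap_001.jpg")
results[0].show()
```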

2024

Fusion of Time-of-Flight Based Sensors with Monocular Cameras for a Robotic Person Follower

Authors
Sarmento, J; dos Santos, FN; Aguiar, AS; Filipe, V; Valente, A;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industries and agriculture. This type of collaboration can contribute to productivity increase by reducing physical strain on humans, which can lead to reduced injuries and improved morale. One crucial aspect of HRC is the ability of the robot to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers are capable of tracking humans but exhibit a significant angular error. To reduce this error, monocular cameras with Deep Learning object detection are used to detect humans. The reduction in angular error is achieved through sensor fusion, combining the outputs of both sensors using a histogram-based filter. This filter projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a remarkable 66.67% reduction in angular error compared to UWB localization alone is achieved. This approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This novel algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
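
The sketch below illustrates the flavour of such a histogram-based fusion: each measurement is projected onto a 2D grid around the robot as a likelihood, and the cell where the projections intersect gives the fused person estimate. It assumes only numpy; the grid resolution, sensor noise values, and measurements are invented for illustration rather than taken from the paper.

```python
import numpy as np

GRID = 100                      # 100 x 100 cells
RES = 0.1                       # 10 cm per cell; robot at the grid centre
xs, ys = np.meshgrid(np.arange(GRID) * RES - 5.0,
                     np.arange(GRID) * RES - 5.0, indexing="ij")
rng = np.hypot(xs, ys)          # range of every cell from the robot
brg = np.arctan2(ys, xs)        # bearing of every cell from the robot

def likelihood(measured, grid_values, sigma):
    """Gaussian likelihood of a scalar measurement over the whole grid."""
    return np.exp(-0.5 * ((grid_values - measured) / sigma) ** 2)

# UWB gives a good range but a noisy bearing; the camera gives a sharp bearing.
uwb_range, uwb_bearing = 2.0, 0.40      # metres, radians (hypothetical)
cam_bearing = 0.52                      # radians (hypothetical)

grid = likelihood(uwb_range, rng, sigma=0.15)       # UWB range
grid *= likelihood(uwb_bearing, brg, sigma=0.35)    # UWB bearing (wide)
grid *= likelihood(cam_bearing, brg, sigma=0.05)    # camera bearing (narrow)
grid /= grid.sum()                                  # normalise to a posterior

i, j = np.unravel_index(np.argmax(grid), grid.shape)
print("fused person estimate (m):", xs[i, j], ys[i, j])
```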

2024

Maximising Attendance in Higher Education: How AI and Gamification Strategies Can Boost Student Engagement and Participation

Authors
Limonova, V; dos Santos, AMP; Sao Mamede, JHP; Filipe, VMD;

Publication
GOOD PRACTICES AND NEW PERSPECTIVES IN INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 4, WORLDCIST 2024

Abstract
The decline in student attendance and engagement in Higher Education (HE) is a pressing concern for educational institutions worldwide. Traditional lecture-style teaching is no longer effective, and students often become disinterested and miss classes, impeding their academic progress. While Gamification has improved learning outcomes, the integration of Artificial Intelligence (AI) has the potential to revolutionise the educational experience. The combination of AI and Gamification offers numerous research opportunities and paves the way for updated academic approaches to increase student engagement and attendance. Extensive research has been conducted to uncover the correlation between student attendance and engagement in HE. Studies consistently reveal that regular attendance leads to better academic performance. On the other hand, absenteeism can lead to disengagement and poor academic performance, stunting a student's growth and success. This position paper proposes integrating Gamification and AI to improve attendance and engagement. The approach involves incorporating game-like elements into the learning process to make it more interactive and rewarding. AI-powered tools can track student progress and provide personalised feedback, motivating students to stay engaged. This approach fosters a more engaging and fruitful educational journey, leading to better learning outcomes. This position paper will inspire further research in AI-Gamification integration, leading to innovative teaching methods that enhance student engagement and attendance in HE.

Supervised theses

2023

Computer vision in industrial processes: A case study at Continental Advanced Antenna

Author
José Pedro Matos Ribeiro

Institution
UTAD

2023

Development of .Net microservices for mobility services

Author
Filipe Manuel da Silva Valadares

Institution
UTAD

2023

The impact of perceived challenge on narrative immersion in video games

Author
José Miguel Vieira Domingues

Institution
UTAD

2023

ForestMP: Multimodal perception system for robotics in forestry applications

Author
Daniel Queirós da Silva

Institution
UTAD

2023

Strategies and Models to Stimulate Student Engagement in Higher Education

Author
Viktoriya Limonova

Institution
UTAD