Publications

2024

Shapley-Based Data Valuation Method for the Machine Learning Data Markets (MLDM)

Authors
Baghcheband, H; Soares, C; Reis, LP;

Publication
Foundations of Intelligent Systems - 27th International Symposium, ISMIS 2024, Poitiers, France, June 17-19, 2024, Proceedings

Abstract

2024

Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 3: VISAPP, Rome, Italy, February 27-29, 2024

Authors
Radeva, P; Furnari, A; Bouatouch, K; de Sousa, AA;

Publication
VISIGRAPP (3): VISAPP

Abstract

2024

Kernel Corrector LSTM

Authors
Tuna, R; Baghoussi, Y; Soares, C; Moreira, JM;

Publication
Advances in Intelligent Data Analysis XXII - 22nd International Symposium on Intelligent Data Analysis, IDA 2024, Stockholm, Sweden, April 24-26, 2024, Proceedings, Part II

Abstract
Forecasting methods are affected by data quality issues in two ways: 1. the affected data are harder to predict, and 2. they may degrade the model when it is updated with new data. The latter issue is usually addressed by pre-processing the data to remove those issues. An alternative approach has recently been proposed: Corrector LSTM (cLSTM), a Read & Write Machine Learning (RW-ML) algorithm that changes the data while learning in order to improve its predictions. Despite the promising results reported, cLSTM is computationally expensive, as it uses a meta-learner to monitor the hidden states of the LSTM. We propose a new RW-ML algorithm, Kernel Corrector LSTM (KcLSTM), which replaces the meta-learner of cLSTM with a simpler method: kernel smoothing. We empirically evaluate the forecasting accuracy and the training time of the new algorithm and compare it with cLSTM and LSTM. Results indicate that it decreases the training time while maintaining competitive forecasting accuracy. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
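The abstract names kernel smoothing as the simpler replacement for cLSTM's meta-learner but gives no implementation details. As a hedged illustration only (the function name, Gaussian kernel choice, and application to a plain value sequence are assumptions, not the paper's method), Nadaraya-Watson kernel smoothing over a time series can be sketched as:

```python
import numpy as np

def kernel_smooth(values, bandwidth=1.0):
    """Nadaraya-Watson smoothing with a Gaussian kernel over time indices.

    Illustrative sketch only: how KcLSTM applies kernel smoothing to
    the LSTM's hidden states is described in the paper itself.
    """
    values = np.asarray(values, dtype=float)
    t = np.arange(len(values))
    # Gaussian weight between every pair of time indices (broadcasted)
    diffs = t[:, None] - t[None, :]
    weights = np.exp(-0.5 * (diffs / bandwidth) ** 2)
    # Normalize each row so the smoothed value is a weighted average
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ values

# A noisy alternating sequence is pulled toward its local average:
smoothed = kernel_smooth([1.0, 5.0, 1.0, 5.0, 1.0], bandwidth=2.0)
```

The bandwidth controls the bias-variance trade-off: a larger bandwidth averages over more neighbours, smoothing more aggressively.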

2024

Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 2: VISAPP, Rome, Italy, February 27-29, 2024

Authors
Radeva, P; Furnari, A; Bouatouch, K; de Sousa, AA;

Publication
VISIGRAPP (2): VISAPP

Abstract

2024

Fusion of Time-of-Flight Based Sensors with Monocular Cameras for a Robotic Person Follower

Authors
Sarmento, J; dos Santos, FN; Aguiar, AS; Filipe, V; Valente, A;

Publication
JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS

Abstract
Human-robot collaboration (HRC) is becoming increasingly important in advanced production systems, such as those used in industry and agriculture. This type of collaboration can increase productivity by reducing physical strain on humans, which can lead to fewer injuries and improved morale. One crucial aspect of HRC is the ability of the robot to follow a specific human operator safely. To address this challenge, a novel methodology is proposed that employs monocular vision and ultra-wideband (UWB) transceivers to determine the relative position of a human target with respect to the robot. UWB transceivers are capable of tracking humans but exhibit a significant angular error. To reduce this error, monocular cameras with Deep Learning object detection are used to detect humans. The reduction in angular error is achieved through sensor fusion, combining the outputs of both sensors using a histogram-based filter. This filter projects and intersects the measurements from both sources onto a 2D grid. By combining UWB and monocular vision, a remarkable 66.67% reduction in angular error compared to UWB localization alone is achieved. This approach demonstrates an average processing time of 0.0183 s and an average localization error of 0.14 m when tracking a person walking at an average speed of 0.21 m/s. This novel algorithm holds promise for enabling efficient and safe human-robot collaboration, providing a valuable contribution to the field of robotics.
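The abstract describes projecting and intersecting the two sensors' measurements on a grid. As a hedged one-dimensional sketch (the function name, Gaussian error model, and 1D bearing-only grid are assumptions; the paper's filter operates on a 2D grid), fusing a UWB bearing with a camera bearing might look like:

```python
import numpy as np

def fuse_on_grid(uwb_angle, cam_angle, uwb_sigma, cam_sigma, n_cells=361):
    """Fuse two bearing estimates on a 1D angular grid.

    Each sensor contributes a likelihood over grid cells (assumed
    Gaussian); intersecting them means multiplying cell-wise, and the
    fused bearing is the angle of the highest-scoring cell.
    """
    grid = np.linspace(-np.pi, np.pi, n_cells)
    lik_uwb = np.exp(-0.5 * ((grid - uwb_angle) / uwb_sigma) ** 2)
    lik_cam = np.exp(-0.5 * ((grid - cam_angle) / cam_sigma) ** 2)
    fused = lik_uwb * lik_cam  # cell-wise intersection of the two sources
    return grid[np.argmax(fused)]

# The camera's lower angular uncertainty pulls the fused bearing
# toward its measurement:
bearing = fuse_on_grid(uwb_angle=0.3, cam_angle=0.1,
                       uwb_sigma=0.2, cam_sigma=0.05)
```

Because the camera's standard deviation is smaller, its likelihood is sharper, so the fused peak lands close to the camera bearing, which is the intended effect of compensating UWB's large angular error with vision.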

2024

Proceedings of the 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2024, Volume 1: GRAPP, HUCAPP and IVAPP, Rome, Italy, February 27-29, 2024

Authors
Rogers, TB; Méneveaux, D; Ziat, M; Ammi, M; Jänicke, S; Purchase, HC; Bouatouch, K; de Sousa, AA;

Publication
VISIGRAPP (1): GRAPP, HUCAPP, IVAPP

Abstract
