
Publications by CTM

2025

Exploring Motion Information in Homography Calculation for Football Matches With Moving Cameras

Authors
Gomes, C; Mastralexi, C; Carvalho, P;

Publication
IEEE ACCESS

Abstract
In football, where minor differences can significantly affect outcomes and performance, automatic video analysis has become a critical tool for analyzing and optimizing team strategies. However, many existing solutions require expensive and complex hardware comprising multiple cameras, sensors, or GPS devices, limiting accessibility for many clubs, particularly those with limited resources. Using images and video from a moving camera can help a wider audience benefit from video analysis, but it introduces new challenges related to motion. To address this, we explore an alternative approach to homography estimation in moving-camera scenarios. Homography plays a crucial role in video analysis, but it presents challenges when keypoints are sparse, especially in dynamic environments. Existing techniques predominantly rely on visible keypoints and apply homography transformations on a frame-by-frame basis, often lacking temporal consistency and struggling in areas with sparse keypoints. This paper explores the use of estimated motion information for homography computation. Our experimental results reveal that integrating motion data directly into homography estimation reduces errors in keypoint-sparse frames, surpassing state-of-the-art methods and filling a current gap in moving-camera scenarios.
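For illustration, the sketch below (not the authors' implementation) shows one common way motion information can support homography computation when pitch keypoints are sparse: the frame-to-pitch homography is estimated directly when enough keypoints are visible, and otherwise the inter-frame camera motion estimated with optical flow is chained with the previous frame's homography. Function and variable names are assumptions.

```python
# Minimal sketch (not the authors' implementation) of motion-assisted
# homography estimation for a moving camera. When too few pitch keypoints
# are visible, inter-frame motion from sparse optical flow is chained with
# the previous frame-to-pitch homography. Names are illustrative.
import cv2
import numpy as np

def update_homography(prev_frame, curr_frame, prev_H,
                      keypoints_px=None, keypoints_world=None):
    """Return a 3x3 homography mapping curr_frame pixels to pitch coordinates."""
    # Enough visible keypoints: estimate the homography directly.
    if keypoints_px is not None and len(keypoints_px) >= 4:
        H, _ = cv2.findHomography(keypoints_px, keypoints_world, cv2.RANSAC, 3.0)
        if H is not None:
            return H

    # Otherwise, estimate inter-frame motion from tracked corners and chain
    # it with the previous homography: curr -> prev -> pitch.
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]
    H_motion, _ = cv2.findHomography(good_curr, good_prev, cv2.RANSAC, 3.0)  # curr -> prev
    return prev_H @ H_motion
```

In practice, player regions would typically be masked out before estimating the flow, so that the recovered motion reflects camera movement rather than player movement.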

2025

Causal representation learning through higher-level information extraction

Authors
Silva, F; Oliveira, HP; Pereira, T;

Publication
ACM COMPUTING SURVEYS

Abstract
The large gap between the generalization level of state-of-the-art machine learning and human learning systems calls for the development of artificial intelligence (AI) models that are truly inspired by human cognition. In tasks related to image analysis, searching for pixel-level regularities yields a level of information extraction still far from what humans capture from image-based observations. This leads to poor generalization when even small shifts occur at the level of the observations. We explore a perspective on this problem directed at learning the generative process with causality-related foundations, using models capable of combining symbolic manipulation, probabilistic reasoning, and pattern recognition abilities. We briefly review and explore connections among research from machine learning, cognitive science, and related fields of human behavior to support our perspective on the path toward more robust and human-like artificial learning systems.

2025

AI-based models to predict decompensation on traumatic brain injury patients

Authors
Ribeiro, R; Neves, I; Oliveira, HP; Pereira, T;

Publication
COMPUTERS IN BIOLOGY AND MEDICINE

Abstract
Traumatic Brain Injury (TBI) is a form of brain injury caused by external forces, resulting in temporary or permanent impairment of brain function. Despite advancements in healthcare, TBI mortality rates can reach 30%–40% in severe cases. This study aims to assist clinical decision-making and enhance patient care for TBI-related complications by employing Artificial Intelligence (AI) methods and data-driven approaches to predict decompensation. The study uses learning models based on sequential data from Electronic Health Records (EHR). Decompensation prediction was framed as 24-hour mortality prediction at each hour of the patient's stay in the Intensive Care Unit (ICU). A cohort of 2261 TBI patients was selected from the MIMIC-III dataset based on age and ICD-9 disease codes. Logistic Regression (LR), Long Short-Term Memory (LSTM), and Transformer architectures were used. Two sets of features were explored, combined with missing-data strategies that impute the normal value and data-imbalance techniques based on class weights and oversampling. The best results were obtained using LSTMs: with the original features and no imbalance-handling techniques, and with the added features and the class-weight technique, yielding AUROC scores of 0.918 and 0.929, respectively. For this study, using EHR time-series data with LSTMs proved viable for predicting patient decompensation, providing a helpful indicator of the need for clinical interventions.
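As a rough illustration of the task setup, the PyTorch sketch below (not the paper's exact architecture) trains an LSTM that outputs, at every hour of the ICU stay, the probability of death within the next 24 hours, with class weighting to handle label imbalance. The feature count, shapes, and weight value are assumptions.

```python
# Minimal PyTorch sketch (not the paper's exact architecture) of hourly
# decompensation prediction: an LSTM reads the EHR time series and outputs,
# at every hour of the ICU stay, a logit for death within the next 24 hours.
# Class weighting addresses label imbalance. Shapes, the feature count, and
# the weight value are illustrative assumptions.
import torch
import torch.nn as nn

class DecompensationLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                    # x: (batch, hours, n_features)
        out, _ = self.lstm(x)                # hidden state at every hour
        return self.head(out).squeeze(-1)    # per-hour logits: (batch, hours)

model = DecompensationLSTM(n_features=17)
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(20.0))  # weight the rare positives
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 48, 17)                   # 8 stays, 48 hourly steps, 17 features
y = torch.zeros(8, 48)                       # hourly labels: death within next 24 h
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```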

2025

Comparing 2D and 3D Feature Extraction Methods for Lung Adenocarcinoma Prediction Using CT Scans: A Cross-Cohort Study

Authors
Gouveia, M; Mendes, T; Rodrigues, EM; Oliveira, HP; Pereira, T;

Publication
APPLIED SCIENCES-BASEL

Abstract
Lung cancer stands as the most prevalent and deadliest type of cancer, with adenocarcinoma being the most common subtype. Computed Tomography (CT) is widely used for detecting tumours and their phenotype characteristics, supporting the early and accurate diagnosis that impacts patient outcomes. Machine learning algorithms have already shown the potential to recognize patterns in CT scans and classify the cancer subtype. In this work, two distinct pipelines were employed to perform binary classification between adenocarcinoma and non-adenocarcinoma. First, radiomic features were classified by Random Forest and eXtreme Gradient Boosting classifiers. Next, a deep learning approach, based on a Residual Neural Network and a Transformer-based architecture, was utilised. Both 2D and 3D CT data were explored, with the Lung-PET-CT-Dx dataset employed for training and the NSCLC-Radiomics and NSCLC-Radiogenomics datasets used for external evaluation. Overall, the 3D models outperformed the 2D ones, with the best result achieved by the Hybrid Vision Transformer, with an AUC of 0.869 and a balanced accuracy of 0.816 on the internal test set. However, a lack of generalization capability was observed across all models, with performance decreasing on the external test sets, a limitation that should be studied and addressed in future work.
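The radiomics branch of such a pipeline could, for example, look like the sketch below, which extracts 3D radiomic features with PyRadiomics and classifies them with a Random Forest; the file paths, labels, and settings are hypothetical placeholders rather than the published configuration.

```python
# Hedged sketch of the radiomics branch: 3D features extracted with
# PyRadiomics from CT volumes and tumour masks, classified with a Random
# Forest. Paths, labels, and settings are hypothetical placeholders, not the
# published configuration.
import numpy as np
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier

extractor = featureextractor.RadiomicsFeatureExtractor()  # default 3D settings

def extract_features(ct_path, mask_path):
    result = extractor.execute(ct_path, mask_path)
    # Keep numeric feature values, dropping the diagnostic metadata entries.
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

# Hypothetical (CT volume, tumour mask) file pairs and binary labels.
train_pairs = [("ct_001.nii.gz", "mask_001.nii.gz"),
               ("ct_002.nii.gz", "mask_002.nii.gz")]
y_train = np.array([1, 0])  # 1 = adenocarcinoma, 0 = non-adenocarcinoma

X_train = np.stack([extract_features(ct, m) for ct, m in train_pairs])
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)

# Probability of adenocarcinoma for a new case.
X_new = extract_features("ct_003.nii.gz", "mask_003.nii.gz").reshape(1, -1)
print(clf.predict_proba(X_new)[:, 1])
```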

2025

Efficient-Proto-Caps: A Parameter-Efficient and Interpretable Capsule Network for Lung Nodule Characterization

Authors
Rodrigues, EM; Gouveia, M; Oliveira, HP; Pereira, T;

Publication
IEEE ACCESS

Abstract
Deep learning techniques have demonstrated significant potential in computer-assisted diagnosis based on medical imaging. However, their integration into clinical workflows remains limited, largely due to concerns about interpretability. To address this challenge, we propose Efficient-Proto-Caps, a lightweight and inherently interpretable model that combines capsule networks with prototype learning for lung nodule characterization. Additionally, an innovative Davies-Bouldin Index with multiple centroids per cluster is employed as a loss function to promote clustering of lung nodule visual attribute representations. When evaluated on the LIDC-IDRI dataset, the most widely recognized benchmark for lung cancer prediction, our model achieved an overall accuracy of 89.7% in predicting lung nodule malignancy and associated visual attributes. This performance is statistically comparable to that of the baseline model, while utilizing a backbone with only approximately 2% of the parameters of the baseline model's backbone. State-of-the-art models achieved better performance in lung nodule malignancy prediction; however, our approach relies on multiclass malignancy predictions and provides a decision rationale aligned with globally accepted clinical guidelines. These results underscore the potential of our approach, as the integration of lightweight and less complex designs into accurate and inherently interpretable models represents a significant advancement toward more transparent and clinically viable computer-assisted diagnostic systems. Furthermore, these findings highlight the model's potential for broader applicability, extending beyond medicine to other domains where final classifications are grounded in concept-based or example-based attributes.
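The most concrete technical detail here is the Davies-Bouldin-style clustering loss. The sketch below (not the published implementation, and omitting the multiple-centroids-per-cluster extension) shows a differentiable single-centroid variant that pulls same-class attribute embeddings toward their centroid while pushing class centroids apart; names and shapes are assumptions.

```python
# Hedged sketch (not the published implementation) of a differentiable
# Davies-Bouldin-style clustering loss over attribute embeddings: it pulls
# same-class embeddings toward their centroid and pushes class centroids
# apart. The multiple-centroids-per-cluster extension is omitted for brevity.
import torch

def davies_bouldin_loss(embeddings, labels):
    """embeddings: (N, D) attribute representations; labels: (N,) class ids."""
    centroids, scatters = [], []
    for c in labels.unique():
        members = embeddings[labels == c]
        centroid = members.mean(dim=0)
        centroids.append(centroid)
        scatters.append((members - centroid).norm(dim=1).mean())
    centroids = torch.stack(centroids)                      # (K, D)
    scatters = torch.stack(scatters)                        # (K,)

    separation = torch.cdist(centroids, centroids)          # (K, K) centroid distances
    ratios = (scatters[:, None] + scatters[None, :]) / (separation + 1e-8)
    mask = torch.eye(len(centroids), dtype=torch.bool)
    ratios = ratios.masked_fill(mask, float("-inf"))        # ignore self-comparisons
    return ratios.max(dim=1).values.mean()                  # lower = better clustering

# Usage: add this term to the malignancy/attribute prediction objectives.
emb = torch.randn(32, 16, requires_grad=True)
lbl = torch.randint(0, 5, (32,))
davies_bouldin_loss(emb, lbl).backward()
```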

2025

Generative adversarial networks with fully connected layers to denoise PPG signals

Authors
Castro, IAA; Oliveira, HP; Correia, R; Hayes-Gill, B; Morgan, SP; Korposh, S; Gomez, D; Pereira, T;

Publication
PHYSIOLOGICAL MEASUREMENT

Abstract
Objective. Arterial pulsating signals detected at the skin periphery with photoplethysmography (PPG) are easily distorted by motion artifacts. This work explores alternatives to aiding PPG reconstruction with movement sensors (accelerometer and/or gyroscope), which to date have demonstrated the best pulsating-signal reconstruction. Approach. A generative adversarial network with fully connected layers is proposed for the reconstruction of distorted PPG signals. Artificial corruption was applied to clean signals selected from the BIDMC Heart Rate dataset, processed from the larger MIMIC II waveform database, to create the training, validation, and testing sets. Main results. The heart rate (HR) of this dataset was extracted to evaluate the performance of the model, obtaining a mean absolute error of 1.31 bpm between the HR of the target and reconstructed PPG signals, for HR between 70 and 115 bpm. Significance. The model architecture is effective at reconstructing noisy PPG signals regardless of the length and amplitude of the corruption introduced. The performance over a range of HR (70–115 bpm) indicates a promising approach for real-time PPG signal reconstruction without the aid of acceleration or angular velocity inputs.
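A minimal sketch of such a fully connected GAN is given below; the window length, layer sizes, and reconstruction loss are assumptions rather than the published architecture. The generator maps a corrupted PPG window to a reconstructed one, and the discriminator distinguishes clean windows from reconstructions.

```python
# Minimal sketch of a GAN with fully connected layers for PPG reconstruction
# (window length, layer sizes, and losses are assumptions, not the published
# architecture). The generator maps a corrupted PPG window to a clean one;
# the discriminator separates real clean windows from reconstructions.
import torch
import torch.nn as nn

WINDOW = 256  # samples per PPG segment (assumed)

generator = nn.Sequential(
    nn.Linear(WINDOW, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, WINDOW), nn.Tanh(),      # reconstructed window in [-1, 1]
)
discriminator = nn.Sequential(
    nn.Linear(WINDOW, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                      # real/fake logit
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

corrupted = torch.randn(16, WINDOW)         # stand-in batch of corrupted windows
clean = torch.randn(16, WINDOW)             # matching clean targets

# Discriminator step: clean windows are "real", reconstructions are "fake".
fake = generator(corrupted)
d_loss = bce(discriminator(clean), torch.ones(16, 1)) + \
         bce(discriminator(fake.detach()), torch.zeros(16, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: fool the discriminator and stay close to the clean target.
g_loss = bce(discriminator(fake), torch.ones(16, 1)) + nn.functional.l1_loss(fake, clean)
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```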
