Publications

2025

A Risk Manager for Intrusion Tolerant Systems: Enhancing HAL 9000 With New Scoring and Data Sources

Authors
Freitas, T; Novo, C; Dutra, I; Soares, J; Correia, ME; Shariati, B; Martins, R;

Publication
SOFTWARE-PRACTICE & EXPERIENCE

Abstract
Background: Intrusion Tolerant Systems (ITS) aim to maintain system security despite adversarial presence by limiting the impact of successful attacks. Current ITS risk managers rely heavily on public databases such as NVD and Exploit-DB, which suffer from long delays in vulnerability evaluation, reducing system responsiveness.
Objective: This work extends the HAL 9000 Risk Manager to integrate additional real-time threat intelligence sources and to employ machine learning techniques that automatically predict and reassess vulnerability risk scores, addressing limitations of existing solutions.
Methods: A custom-built scraper collects diverse cybersecurity data from multiple Open Source Intelligence (OSINT) platforms, such as NVD, CVE, AlienVault OTX, and OSV. HAL 9000 uses machine learning models for CVE score prediction, clusters vulnerabilities with scalable algorithms, and reassesses risk by incorporating exploit likelihood and patch availability to dynamically evaluate system configurations.
Results: Integrating the newly scraped data significantly enhances risk management, enabling faster detection and mitigation of emerging vulnerabilities with improved resilience and security. Experiments show that HAL 9000 produces lower-risk, more resilient configurations than prior methods while maintaining scalability and automation.
Conclusions: The proposed enhancements position HAL 9000 as a next-generation autonomous Risk Manager capable of incorporating diverse intelligence sources and machine learning to improve the ITS security posture in dynamic threat environments. Future work includes expanding data sources, addressing misinformation risks, and real-world deployments.
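The data-collection step in the Methods section can be pictured with a short sketch. The snippet below polls the public NVD 2.0 REST API for recently modified CVEs and flags entries that do not yet carry a CVSS score, i.e., the cases the paper targets with ML-based score prediction. This is a minimal illustration under assumed response fields, not HAL 9000's implementation; the 24-hour window and the flagging logic are our own choices.

```python
# Minimal sketch: poll the public NVD 2.0 REST API for recently modified
# CVEs and flag records that still lack a CVSS v3.1 base score
# (candidates for ML-based score prediction, per the abstract).
# NOTE: this is not HAL 9000's scraper; the window and flagging are assumptions.
import datetime as dt

import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def fetch_recent_cves(hours: int = 24) -> list[dict]:
    """Return CVE records modified in the last `hours` hours."""
    end = dt.datetime.now(dt.timezone.utc)
    start = end - dt.timedelta(hours=hours)
    params = {
        "lastModStartDate": start.isoformat(),
        "lastModEndDate": end.isoformat(),
    }
    resp = requests.get(NVD_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("vulnerabilities", [])


def base_score(record: dict) -> float | None:
    """Extract a CVSS v3.1 base score if NVD has already assigned one."""
    metrics = record["cve"].get("metrics", {})
    for entry in metrics.get("cvssMetricV31", []):
        return entry["cvssData"]["baseScore"]
    return None  # not yet scored: a candidate for ML prediction


if __name__ == "__main__":
    for rec in fetch_recent_cves():
        cve_id = rec["cve"]["id"]
        score = base_score(rec)
        print(f"{cve_id}: {f'CVSS {score}' if score is not None else 'unscored'}")
```

A production scraper along the paper's lines would also need result paging, rate-limit handling, and the additional OSINT feeds (AlienVault OTX, OSV) mentioned in the Methods.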

2025

Hyper-Personalised Marketing with Generative AI and Predictive Models: A Systematic Review

Authors
Pires, PB; Santos, JD; Torres, AI;

Publication
Advances in Computational Intelligence and Robotics - Adapting Global Communication and Marketing Strategies to Generative AI

Abstract
This chapter examines how generative AI (GenAI) and predictive modelling strategies affect hyper-personalised marketing. Through a comprehensive literature review and case studies, it examines hyper-personalisation's theoretical frameworks, technical infrastructures, and ethical and governance issues. Large language models, generative adversarial networks, and diffusion models, combined with advanced predictive analytics, allow firms to scale real-time, highly individualised customer experiences. Effective implementation requires sophisticated data architectures, algorithmic transparency, and strong privacy protections. According to the research, integration complexity and ethical accountability are major barriers to consumer engagement and conversion. Based on these findings, the chapter proposes an integrated framework that combines technological innovation with ethics and customer focus. This research advances marketing theory and provides practical advice for companies using AI-driven hyper-personalisation while maintaining consumer trust and regulatory compliance.

2025

Predicting demand for new products in fashion retailing using censored data

Authors
Sousa, MS; Loureiro, ALD; Miguéis, VL;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
In today's highly competitive fashion retail market, accurate demand forecasting systems are crucial, particularly for new products. Many experts have used machine learning techniques to forecast product sales. However, sales that do not happen due to lack of product availability are often ignored, resulting in censored demand estimates and lower-than-expected service levels. Motivated by the relevance of this issue, we developed a two-stage approach to forecast the demand for new products in the fashion retail industry. In the first stage, we compared four methods of transforming historical sales into historical demand for products already commercialized. Three methods used sales-weighted averages to estimate demand on days with stock-outs, while the fourth employed an Expectation-Maximization (EM) algorithm to account for potential substitute products affected by stock-outs of preferred products. We then evaluated the performance of these methods and selected the most accurate one for calculating the primary demand for these historical products. In the second stage, we predicted the demand for the products of the following collection using Random Forest, Deep Neural Networks, and Support Vector Regression algorithms. In addition, we applied a model that weights the previously calculated demands of the past-collection products most similar to each new product. We validated the proposed methodology using a European fashion retailer case study. The results revealed that the method using the Expectation-Maximization algorithm had the highest potential, followed by the Random Forest algorithm. We believe that this approach will lead to more assertive, better-aligned decisions in production management.
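As a concrete illustration of the two-stage approach, the sketch below first imputes demand on stock-out days with a simple sales-weighted average over comparable in-stock days (one plausible reading of the first-stage methods, not the paper's exact transformation or its EM variant), and then fits a Random Forest on product features from past collections. Column names and the weighting scheme are illustrative assumptions.

```python
# Minimal sketch of the two-stage idea, under assumed data shapes:
# (1) impute demand on stock-out days from comparable in-stock days,
# (2) fit a Random Forest on descriptive product features.
# Column names and the weighting scheme are hypothetical, not the paper's.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor


def impute_censored_demand(df: pd.DataFrame) -> pd.Series:
    """df columns: product, weekday, sales, in_stock (bool).

    On stock-out days, replace observed sales with the product's mean
    sales on in-stock days of the same weekday (a simple sales-weighted
    proxy for primary demand)."""
    in_stock_avg = (
        df[df["in_stock"]]
        .groupby(["product", "weekday"])["sales"]
        .mean()
    )
    keys = list(zip(df["product"], df["weekday"]))
    imputed = pd.Series(
        [in_stock_avg.get(k, 0.0) for k in keys], index=df.index
    )
    return df["sales"].where(df["in_stock"], imputed)


def fit_demand_model(features: pd.DataFrame, demand: pd.Series):
    """Stage 2: learn demand from features of past-collection products
    (e.g., colour, price point, category), then apply to new products."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(features, demand)
    return model
```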

2025

Pruning End-Effectors State of the Art Review

Authors
Oliveira, F; Tinoco, V; Valente, A; Pinho, T; Cunha, JB; Santos, FN;

Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2024, PT I

Abstract
Pruning is an agricultural trimming procedure that is crucial in some plant species to promote healthy growth and increase yield. This task is generally done through manual labour, which is costly, physically demanding, and potentially dangerous for the worker. Robotic pruning is an automated alternative to manual labour for this task. It focuses on selective pruning and requires an end-effector capable of detecting and cutting the correct point on the branch to achieve efficient pruning. This paper reviews and analyses the different end-effectors used in robotic pruning, clarifying the advantages and limitations of the different techniques and, subsequently, the work required to enable autonomous pruning.

2025

Agile Processes in Software Engineering and Extreme Programming - Workshops - XP 2024 Workshops, Bozen-Bolzano, Italy, June 4-7, 2024, Revised Selected Papers

Authors
Marchesi, L; Goldman, A; Lunesu, MI; Przybylek, A; Aguiar, A; Morgan, L; Wang, X; Pinna, A;

Publication
XP Workshops

Abstract

2025

CompRep: A Dataset For Computational Reproducibility

Authors
Costa, L; Barbosa, S; Cunha, J;

Publication
PROCEEDINGS OF THE 3RD ACM CONFERENCE ON REPRODUCIBILITY AND REPLICABILITY, ACM REP 2025

Abstract
Reproducibility in computational science increasingly depends on the ability to faithfully re-execute experiments involving code, data, and software environments. However, assessing the effectiveness of reproducibility tools is difficult due to the lack of standardized benchmarks. To address this, we collected 38 computational experiments from diverse scientific domains and attempted to reproduce each using 8 different reproducibility tools. From this initial pool, we identified 18 experiments that could be successfully reproduced using at least one tool. These experiments form our curated benchmark dataset, which we release along with reproducibility packages to support ongoing evaluation efforts. This article introduces the curated dataset, including details about the software dependencies, execution steps, and configurations necessary for accurate reproduction. The dataset is structured to reflect diverse computational requirements and methodologies, ranging from simple scripts to complex, multi-language workflows, ensuring it represents the wide range of challenges researchers face when reproducing computational studies. It provides a universal benchmark by establishing a standardized dataset for objectively evaluating and comparing the effectiveness of reproducibility tools. Each experiment in the dataset is carefully documented to ensure ease of use, with instructions that follow a common standard: every experiment is described in the same way, making it easier for researchers to run each one with their own reproducibility tool. The utility of the dataset is demonstrated through extensive evaluations using multiple reproducibility tools.
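To make the "same kind of instructions" idea concrete, the sketch below shows how a standardized per-experiment manifest could be read and executed. The manifest schema, file name, and runner are hypothetical illustrations of the pattern, not CompRep's actual packaging format.

```python
# Illustrative sketch of standardized per-experiment instructions:
# a runner reads a manifest and replays its setup and run steps.
# The manifest schema and file name are hypothetical, not CompRep's format.
import json
import subprocess
from pathlib import Path


def run_experiment(exp_dir: str) -> bool:
    """Read a manifest and execute its setup and run steps in order."""
    manifest = json.loads(Path(exp_dir, "manifest.json").read_text())
    for step in manifest.get("setup", []) + manifest.get("run", []):
        result = subprocess.run(step, shell=True, cwd=exp_dir)
        if result.returncode != 0:
            return False  # reproduction failed at this step
    # Check that the recorded expected artifacts were actually produced.
    expected = manifest.get("expected_outputs", [])
    return all(Path(exp_dir, out).exists() for out in expected)


# Example manifest.json (hypothetical):
# {
#   "language": "python",
#   "setup": ["pip install -r requirements.txt"],
#   "run": ["python analysis.py"],
#   "expected_outputs": ["results/figure1.png"]
# }
```

Keeping every experiment behind the same small manifest interface is what lets one harness drive many heterogeneous workflows, which is the property the benchmark is designed to exercise.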
