Publications

2025

Predicting demand for new products in fashion retailing using censored data

Authors
Sousa, MS; Loureiro, ALD; Miguéis, VL;

Publication
EXPERT SYSTEMS WITH APPLICATIONS

Abstract
In today's highly competitive fashion retail market, it is crucial to have accurate demand forecasting systems, particularly for new products. Many experts have used machine learning techniques to forecast product sales. However, sales that do not happen due to lack of product availability are often ignored, resulting in censored demand and service levels that are lower than expected. Motivated by the relevance of this issue, we developed a two-stage approach to forecast the demand for new products in the fashion retail industry. In the first stage, we compared four methods of transforming historical sales into historical demand for products already commercialized. Three methods used sales-weighted averages to estimate demand on the days with stock-outs, while the fourth method employed an Expectation-Maximization (EM) algorithm to account for potential substitute products affected by stock-outs of preferred products. We then evaluated the performance of these methods and selected the most accurate one for calculating the primary demand for these historical products. In the second stage, we predicted the demand for the products of the following collection using Random Forest, Deep Neural Networks, and Support Vector Regression algorithms. In addition, we applied a model that weights the previously calculated demands of the past-collection products most similar to the new products. We validated the proposed methodology using a case study of a European fashion retailer. The results revealed that the method using the Expectation-Maximization algorithm had the highest potential, followed by the Random Forest algorithm. We believe that this approach will lead to more accurate and better-aligned decisions in production management.
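
The first-stage idea of reconstructing demand on stock-out days can be illustrated with a minimal Python sketch. The column names 'sales' and 'availability' are hypothetical, and this is a simplified stand-in for the sales-weighted-average methods compared in the paper, not the authors' exact formulation:

```python
import pandas as pd

def uncensor_demand(daily: pd.DataFrame) -> pd.Series:
    """Estimate daily demand from censored sales (illustrative sketch only).

    `daily` is assumed to hold one row per product/day with hypothetical
    columns 'sales' (units sold) and 'availability' (fraction of the day
    the product was in stock).
    """
    observed = daily["availability"] >= 1.0          # days without stock-outs
    # Average sales on fully available days, used as a reference demand rate.
    base_rate = daily.loc[observed, "sales"].mean()
    demand = daily["sales"].astype(float).copy()
    censored = ~observed
    # On censored days, keep observed sales and add the share of demand
    # assumed to be lost while the product was unavailable.
    demand[censored] = daily.loc[censored, "sales"] + \
        base_rate * (1.0 - daily.loc[censored, "availability"])
    return demand
```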

2025

Pruning End-Effectors State of the Art Review

Authors
Oliveira, F; Tinoco, V; Valente, A; Pinho, T; Cunha, JB; Santos, N;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
Pruning is an agricultural trimming procedure that is crucial in some plant species to promote healthy growth and increased yield. Generally, this task is done through manual labour, which is costly, physically demanding, and potentially dangerous for the worker. Robotic pruning is an automated alternative to manual labour for this task. This approach focuses on selective pruning and requires an end-effector capable of detecting and cutting the correct point on the branch to achieve efficient pruning. This paper reviews and analyses the different end-effectors used in robotic pruning, clarifying the advantages and limitations of the techniques employed and, consequently, the work still required to enable autonomous pruning. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.

2025

Generative Adversarial Networks for Synthetic Meteorological Data Generation

Authors
Viana, D; Teixeira, R; Soares, T; Baptista, J; Pinto, T;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
This study explores models for the synthetic generation of time series data. To improve the generated data, different synthetic data generation models are explored and compared. The model addressed in this work is the Generative Adversarial Network (GAN), known for generating data similar to the original data through the training of a generator. The GANs are applied to datasets from Quinta de Santa Bárbara and the Pinhão region, whose main variables are average temperature, wind direction, average wind speed, maximum instantaneous wind speed, and solar radiation. The model makes it possible to generate missing data for a given period, and the results are compared with those of a multiple linear regression method in order to evaluate the effectiveness of the generated data. In this way, the study and analysis of GANs shows whether the model is effective and accurate in the synthetic generation of meteorological data. These conclusions can be used to guide the search for different models and to improve the ability to generate synthetic time series data that is representative of the real, original data. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
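
As an illustration of the kind of model the abstract describes, below is a minimal GAN sketch in PyTorch for fixed-length windows of the five meteorological variables. The window length, latent size, and layer sizes are assumptions for illustration, not the configuration used in the study:

```python
import torch
import torch.nn as nn

# Assumed dimensions: 24-step windows of 5 variables (average temperature,
# wind direction, average wind speed, maximum instantaneous wind speed,
# solar radiation).
SEQ_LEN, N_VARS, LATENT = 24, 5, 32

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT, 128), nn.ReLU(),
            nn.Linear(128, SEQ_LEN * N_VARS), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, SEQ_LEN, N_VARS)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(SEQ_LEN * N_VARS, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )
    def forward(self, x):
        return self.net(x)

def train_step(gen, disc, real, opt_g, opt_d):
    """One adversarial update on a batch of real weather windows."""
    loss = nn.BCEWithLogitsLoss()
    z = torch.randn(real.size(0), LATENT)
    fake = gen(z)
    # Discriminator: push real windows toward 1 and generated ones toward 0.
    opt_d.zero_grad()
    d_loss = loss(disc(real), torch.ones(real.size(0), 1)) + \
             loss(disc(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    opt_d.step()
    # Generator: try to make the discriminator label generated windows as real.
    opt_g.zero_grad()
    g_loss = loss(disc(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```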

2025

Forest Fire Risk Prediction Using Machine Learning

Authors
Nogueira, JD; Pires, EJ; Reis, A; de Moura Oliveira, PB; Pereira, A; Barroso, J;

Publication
Lecture Notes in Networks and Systems

Abstract
Considering the serious danger that forest fires pose to nature and humanity, this work aims to develop an artificial intelligence model capable of accurately predicting forest fire risk in a given region based on four factors: temperature, wind speed, rain, and humidity. Thus, three models were created using three different approaches, Artificial Neural Networks (ANN), Random Forest (RF), and K-Nearest Neighbor (KNN), making use of an Algerian forest fire dataset. The ANN and RF both achieved high accuracy results of 97%, while the KNN achieved a slightly lower average of 91%. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
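
A minimal scikit-learn sketch of how two of the compared classifiers could be trained on the four predictors is shown below; the file name and column names are placeholders rather than the actual schema of the Algerian forest fire dataset:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder CSV with the four predictors and a binary fire label;
# the real dataset contains additional attributes.
df = pd.read_csv("algerian_fires.csv")
X = df[["temperature", "wind_speed", "rain", "humidity"]]
y = df["fire"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, model in [
    ("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("KNN", KNeighborsClassifier(n_neighbors=5)),
]:
    model.fit(X_tr, y_tr)
    # Report held-out accuracy for each approach.
    print(name, accuracy_score(y_te, model.predict(X_te)))
```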

2025

Model compression techniques in biometrics applications: A survey

Authors
Caldeira, E; Neto, PC; Huber, M; Damer, N; Sequeira, AF;

Publication
INFORMATION FUSION

Abstract
The development of deep learning algorithms has greatly expanded humanity's capacity for task automation. However, the huge improvement in the performance of these models is highly correlated with their increasing level of complexity, limiting their usefulness in human-oriented applications, which are usually deployed in resource-constrained devices. This led to the development of compression techniques that drastically reduce the computational and memory costs of deep learning models without significant performance degradation. These compressed models are especially essential when implementing multi-model fusion solutions where multiple models are required to operate simultaneously. This paper aims to systematize the current literature on this topic by presenting a comprehensive survey of model compression techniques in biometrics applications, namely quantization, knowledge distillation and pruning. We conduct a critical analysis of the comparative value of these techniques, focusing on their advantages and disadvantages and presenting suggestions for future work directions that can potentially improve the current methods. Additionally, we discuss and analyze the link between model bias and model compression, highlighting the need to direct compression research toward model fairness in future works.
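
As an example of one of the surveyed techniques, a standard soft-target knowledge distillation loss can be sketched as follows; this illustrates the general method (Hinton-style distillation), not a formulation proposed in the paper:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Soft-target knowledge distillation objective (illustrative sketch)."""
    # Soft targets: match the teacher's softened class distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```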

2025

A survey on cell nuclei instance segmentation and classification: Leveraging context and attention

Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Cardoso, JS;

Publication
MEDICAL IMAGE ANALYSIS

Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from the gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers and at the same time facilitate the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. However, due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is no trivial task, and that the scientific understanding of these mechanisms must first be improved before they can be fully exploited in ANNs.
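
As an illustration of the attention inductive biases discussed, a Squeeze-and-Excitation style channel attention block, one common mechanism in this family, could look as follows; this is a generic sketch, not the specific extension applied to Mask-RCNN or HoVer-Net in the paper:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-Excitation style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # "squeeze": global spatial context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # "excitation": per-channel weights learned from the pooled context.
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                     # re-weight the feature channels

# Example: re-weighting a backbone feature map before a segmentation head.
features = torch.randn(2, 64, 32, 32)
attended = ChannelAttention(64)(features)
```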
