2025
Authors
Sousa, MS; Loureiro, ALD; Miguéis, VL;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
In today's highly competitive fashion retail market, accurate demand forecasting systems are crucial, particularly for new products. Many researchers have used machine learning techniques to forecast product sales. However, sales that do not occur due to lack of product availability are often ignored, resulting in censored demand and service levels lower than expected. Motivated by the relevance of this issue, we developed a two-stage approach to forecast the demand for new products in the fashion retail industry. In the first stage, we compared four methods of transforming historical sales into historical demand for products already commercialized. Three methods used sales-weighted averages to estimate demand on days with stock-outs, while the fourth employed an Expectation-Maximization (EM) algorithm to account for potential substitute products affected by stock-outs of preferred products. We then evaluated the performance of these methods and selected the most accurate one to calculate the primary demand for these historical products. In the second stage, we predicted the demand for the products of the following collection using Random Forest, Deep Neural Network, and Support Vector Regression algorithms. In addition, we applied a model that weights the previously estimated demands of the past-collection products most similar to each new product. We validated the proposed methodology on a case study of a European fashion retailer. The results revealed that the method using the Expectation-Maximization algorithm had the highest potential, followed by the Random Forest algorithm. We believe this approach will lead to sounder, better-aligned decisions in production management.
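As a rough illustration of the first-stage idea, the Python sketch below imputes demand on stock-out days from a product's uncensored sales. The weekday-mean rule is an assumption for illustration only, since the abstract does not detail the paper's exact weighting schemes, and the function name impute_censored_demand is hypothetical.

```python
import numpy as np

def impute_censored_demand(sales, in_stock):
    """Estimate daily demand from censored sales (illustrative sketch).

    On stock-out days, observed sales understate true demand. This toy
    variant of a sales-weighted-average scheme replaces each censored
    observation with the mean of uncensored sales on the same weekday.
    """
    sales = np.asarray(sales, dtype=float)
    in_stock = np.asarray(in_stock, dtype=bool)
    weekday = np.arange(len(sales)) % 7  # assumes the series starts on weekday 0

    demand = sales.copy()
    for d in np.where(~in_stock)[0]:
        peers = in_stock & (weekday == weekday[d])  # uncensored days, same weekday
        if peers.any():
            demand[d] = sales[peers].mean()
    return demand
```

The paper's EM-based fourth method goes further by modelling substitution between products, which a per-product rule like this cannot capture.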
2025
Authors
Caldeira, E; Neto, PC; Huber, M; Damer, N; Sequeira, AF;
Publication
INFORMATION FUSION
Abstract
The development of deep learning algorithms has greatly expanded our capacity to automate tasks. However, the large performance gains of these models are closely tied to their increasing complexity, which limits their usefulness in human-oriented applications, usually deployed on resource-constrained devices. This has led to the development of compression techniques that drastically reduce the computational and memory costs of deep learning models without significant performance degradation. Such compressed models are especially essential in multi-model fusion solutions, where multiple models must operate simultaneously. This paper systematizes the current literature on this topic through a comprehensive survey of model compression techniques in biometrics applications, namely quantization, knowledge distillation, and pruning. We conduct a critical analysis of the comparative value of these techniques, focusing on their advantages and disadvantages, and present suggestions for future work that could improve current methods. Additionally, we discuss and analyze the link between model bias and model compression, highlighting the need to direct compression research toward model fairness in future work.
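To make one of the surveyed techniques concrete, here is a minimal sketch of a standard knowledge-distillation objective in PyTorch. The temperature T and mixing weight alpha are illustrative defaults, not values taken from the survey.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Classic knowledge-distillation objective (Hinton-style sketch).

    Blends cross-entropy on ground-truth labels with a KL term that pulls
    the student's temperature-softened predictions toward the teacher's.
    """
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # T*T rescaling keeps gradient magnitudes comparable across temperatures
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# toy usage with random logits for a 10-class problem
s, t = torch.randn(8, 10), torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```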
2025
Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Cardoso, JS;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei in gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, so automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers while facilitating the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. Yet, due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we first review works in computer vision and medical imaging. We then conduct a thorough survey of context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, with a comprehensive discussion of the challenges these mechanisms are used to tackle. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating this domain knowledge into algorithm design is not trivial, and that the scientific understanding of these mechanisms should first be deepened before they can be fully exploited in ANNs.
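As a simple example of the kind of attention mechanism that can be inserted into a segmentation backbone, the sketch below implements a squeeze-and-excitation style channel-attention module in PyTorch. This is a generic illustration, not the specific extensions applied to Mask-RCNN or HoVer-Net in the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative only)."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        # squeeze to (b, c), learn per-channel weights, then reweight features
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

# toy usage on a feature map from a hypothetical backbone stage
x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```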
2025
Authors
Homayouni, SM; Fontes, DBMM;
Publication
INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH
Abstract
This paper addresses a job shop scheduling problem with peak power constraints, in which jobs can be processed once or multiple times on either all or a subset of the machines. The latter characteristic provides additional flexibility, nowadays present in many manufacturing systems. The problem is complicated by the need to determine the operation sequence and starting times, as well as the speed at which machines process each operation. Due to the growing adoption of renewable energy production and its intermittent nature, manufacturing companies need to adopt power-flexible production schedules. The proposed power control strategies, that is, adjusting processing speed and timing to reduce peak power requirements, may impact production time (makespan) and energy consumption. Therefore, we propose a bi-objective approach that minimizes both objectives. A linear programming model is developed to provide a formal statement of the problem, which is solved to optimality for small-sized instances. We also propose a multi-objective biased random-key genetic algorithm framework that evolves several populations in parallel. Computational experiments provide decision and policy makers with insights into the implications of imposing or negotiating power consumption limits. Finally, the trade-off solutions obtained show that, as the power limit is lowered, the makespan increases at an increasing rate; a similar trend is observed in energy consumption, but only for very small makespan values. Furthermore, peak power demand reductions of about 25% have a limited impact on the minimum makespan (a 4-6% increase), while allowing a small reduction in energy consumption.
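To illustrate the random-key idea behind such genetic algorithms, the toy Python decoder below turns a chromosome of floats into an operation sequence by sorting the keys. The machine-speed block and the bias mechanism of a full biased random-key GA are omitted, and all names here are hypothetical rather than taken from the paper.

```python
import random

def decode(keys, ops):
    """Decode a random-key chromosome into an operation sequence (sketch).

    Each gene is a float in [0, 1); sorting the genes yields a priority
    order over operations. A second block of keys could select machine
    speeds; this toy decoder handles sequencing only.
    """
    order = sorted(range(len(ops)), key=lambda i: keys[i])
    return [ops[i] for i in order]

# toy usage: 5 operations, one random chromosome
ops = ["O1", "O2", "O3", "O4", "O5"]
keys = [random.random() for _ in ops]
print(decode(keys, ops))
```

Because any vector of floats decodes to a feasible sequence, crossover and mutation never produce invalid schedules, which is the main appeal of random-key encodings.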
2025
Authors
Martins, AR; Ferreira, MC; Fernandes, CS;
Publication
International Journal of Medical Informatics
Abstract