
Publications by Filipe Vamonde Oliveira

2024

Fabric Defect Detection and Localization

Authors
Oliveira, F.; Carneiro, D.; Ferreira, H.; Guimaraes, M.

Publication
Advances in Artificial Intelligence in Manufacturing, ESAIM 2023

Abstract
Quality inspection is crucial in the textile industry, as it ensures that final products meet the required standards. It helps detect and address defects such as fabric flaws and stitching irregularities, enhancing customer satisfaction and optimizing production efficiency by identifying areas for improvement, reducing waste, and minimizing rework. In the competitive textile market, it is vital for maintaining customer loyalty, brand reputation, and sustained success. Nonetheless, and despite the importance of quality inspection, it is becoming increasingly difficult to hire and train people for such tedious and repetitive tasks. In this context, there is growing interest in automated quality control techniques that can be used in the industrial domain. In this paper we describe a computer vision model for localizing and classifying different types of defects in textiles. The model achieved an mAP@0.5 of 0.96 on the validation dataset. While this model was trained on a publicly available dataset, we will soon use the same architecture with images collected from Jacquard looms in the context of a funded research project. This paper thus represents an initial validation of the model for the purposes of fabric defect detection.
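For readers unfamiliar with the reported metric: mAP@0.5 is the mean Average Precision computed with an intersection-over-union (IoU) threshold of 0.5. The sketch below, using illustrative box coordinates that are not taken from the paper, shows the matching criterion behind that number: a predicted box counts as a true positive only if its IoU with a same-class ground-truth box is at least 0.5. Average Precision is then the area under the precision-recall curve built from these matches, averaged over classes.

```python
# Minimal sketch of the IoU matching rule behind mAP@0.5.
# Box coordinates below are hypothetical, for illustration only.

def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Hypothetical predicted defect box vs. an annotated ground-truth box.
predicted = (10, 10, 60, 60)
ground_truth = (15, 12, 65, 58)
score = iou(predicted, ground_truth)
print(f"IoU = {score:.2f}")  # ~0.76, so >= 0.5: true positive at mAP@0.5
```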

2025

Using Explanations to Estimate the Quality of Computer Vision Models

Authors
Oliveira, F.; Carneiro, D.; Pereira, J.

Publication
Springer Proceedings in Business and Economics

Abstract
Explainable AI (xAI) emerged as one of the ways of addressing the interpretability issues of the so-called black-box models. Most of the xAI artifacts proposed so far were designed, as expected, for human users. In this work, we posit that such artifacts can also be used by computer systems. Specifically, we propose a set of metrics derived from LIME explanations, that can eventually be used to ascertain the quality of each output of an underlying image classification model. We validate these metrics against quantitative human feedback, and identify 4 potentially interesting metrics for this purpose. This research is particularly useful in concept drift scenarios, in which models are deployed into production and there is no new labelled data to continuously evaluate them, becoming impossible to know the current performance of the model. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
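To make the idea concrete, here is a hedged sketch of deriving a scalar quality signal from a LIME image explanation. The classifier, the image, and the specific metric (the share of absolute explanation weight carried by the strongest superpixels) are illustrative stand-ins and are not the four metrics identified in the paper.

```python
# Illustrative sketch: turn a LIME image explanation into a scalar signal.
# The metric below is a hypothetical example, not one of the paper's metrics.

import numpy as np
from lime import lime_image

def dummy_classifier(batch):
    """Stand-in for the underlying image classifier: returns fake
    two-class probabilities for a batch of HxWx3 images."""
    scores = batch.mean(axis=(1, 2, 3))
    p = 1.0 / (1.0 + np.exp(-(scores - 0.5)))
    return np.stack([1.0 - p, p], axis=1)

def weight_concentration(explanation, label, k=5):
    """Fraction of total absolute LIME weight held by the k strongest
    superpixels; a diffuse explanation may signal a less reliable output."""
    weights = np.abs([w for _, w in explanation.local_exp[label]])
    weights = np.sort(weights)[::-1]
    return weights[:k].sum() / weights.sum()

image = np.random.rand(64, 64, 3)  # placeholder for a real input image
explainer = lime_image.LimeImageExplainer()
exp = explainer.explain_instance(image, dummy_classifier,
                                 top_labels=1, num_samples=200)
print(weight_concentration(exp, exp.top_labels[0]))
```

A signal like this requires no labels at inference time, which is what makes explanation-derived metrics attractive under concept drift.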
