Publications

Publications by HumanISE

2024

Decision-making models in the optimization of electric vehicle charging station locations: a review

Authors
Pinto, J; Filipe, V; Baptista, J; Oliveira, A; Pinto, T;

Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024

Abstract
The number of electric vehicles is increasing progressively for various reasons, including economic and environmental factors. There has also been technological development regarding both the operation and charging of these vehicles. Therefore, it is very important to reinforce the charging infrastructure, which can be optimised through the application of computational tools. Several approaches should be considered when trying to find the best location for electric vehicle charging stations. The literature describes different methods that can be applied to address this specific issue, including optimisation methods and decision-making techniques such as multi-criteria analysis. One possible limitation of these methods is that they may not consider all perspectives of the various entities involved, potentially resulting in solutions that do not fully represent the optimal outcome; nevertheless, they provide invaluable information that can be applied in the development of integrative and potentially more comprehensive models. This article presents a review and discussion of the most commonly used decision models for this issue, considering optimisation models and multi-criteria decision-making strategies for the adequate planning of EV charging station installation, taking into account the different perspectives of the involved entities.
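
As a purely illustrative sketch (not taken from the article), the snippet below shows the kind of weighted-sum aggregation that multi-criteria decision-making models for station siting build on; the candidate sites, criteria, weights, and scores are hypothetical.

```python
# Weighted-sum multi-criteria scoring of candidate charging station sites.
# All site names, criteria, weights, and scores are hypothetical examples.

candidate_sites = {
    "site_A": {"traffic_flow": 0.8, "grid_capacity": 0.6, "land_cost": 0.4},
    "site_B": {"traffic_flow": 0.5, "grid_capacity": 0.9, "land_cost": 0.7},
    "site_C": {"traffic_flow": 0.7, "grid_capacity": 0.5, "land_cost": 0.9},
}

# Assumed stakeholder weights; all criteria are normalised to [0, 1] and
# expressed so that higher is better (e.g. land_cost = 0.9 means cheap land).
weights = {"traffic_flow": 0.5, "grid_capacity": 0.3, "land_cost": 0.2}

def weighted_sum(scores: dict, w: dict) -> float:
    """Aggregate normalised criterion scores into a single site score."""
    return sum(w[c] * scores[c] for c in w)

ranking = sorted(candidate_sites,
                 key=lambda s: weighted_sum(candidate_sites[s], weights),
                 reverse=True)
print(ranking)  # best-ranked candidate site first
```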

2024

Enhancing Grapevine Node Detection to Support Pruning Automation: Leveraging State-of-the-Art YOLO Detection Models for 2D Image Analysis

Authors
Oliveira, F; da Silva, DQ; Filipe, V; Pinho, TM; Cunha, M; Cunha, JB; dos Santos, FN;

Publication
SENSORS

Abstract
Automating pruning tasks entails overcoming several challenges, encompassing not only robotic manipulation but also environment perception and detection. To achieve efficient pruning, robotic systems must accurately identify the correct cutting points. A possible way to define these points is to choose the cutting location based on the number of nodes present on the targeted cane. For this purpose, in grapevine pruning, it is necessary to correctly identify the nodes present on the primary canes of the grapevines. In this paper, a novel method of node detection in grapevines is proposed using four distinct state-of-the-art versions of the YOLO detection model: YOLOv7, YOLOv8, YOLOv9 and YOLOv10. These models were trained on a public dataset with images containing artificial backgrounds and afterwards validated on different cultivars of grapevines from two distinct Portuguese viticulture regions with cluttered backgrounds. This allowed us to evaluate the robustness of the algorithms in detecting nodes in diverse environments, compare the performance of the YOLO models used, and create a publicly available dataset of grapevines obtained in Portuguese vineyards for node detection. Overall, all the models used were capable of correctly detecting nodes in images of grapevines from the three distinct datasets. Considering the trade-off between accuracy and inference speed, the YOLOv7 model proved to be the most robust in detecting nodes in 2D images of grapevines, achieving F1-Score values between 70% and 86.5% with inference times of around 89 ms for an input size of 1280 x 1280 px. Considering these results, this work contributes an efficient approach for real-time node detection for further implementation in an autonomous robotic pruning system.
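
For illustration only, the snippet below shows how a fine-tuned node detector could be queried with the ultralytics Python package; the weights file name, confidence threshold, and use of the ultralytics tooling are assumptions rather than the paper's exact setup (which compares YOLOv7 through YOLOv10 at 1280 x 1280 px inputs).

```python
# Minimal inference sketch with the ultralytics package.
# "grapevine_nodes.pt" is a hypothetical fine-tuned weights file.
from ultralytics import YOLO

model = YOLO("grapevine_nodes.pt")
results = model.predict("cane.jpg", imgsz=1280, conf=0.25)

for result in results:
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # node bounding box (pixels)
        print(f"node at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")
```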

2024

Playing Tic-Tac-Toe with Dobot Magician: An Experiment to Engage Students for Engineering Studies

Authors
Oliveira, D; Filipe, V; Oliveira, PM;

Publication
Lecture Notes in Educational Technology

Abstract
Encouraging pre-university students to pursue engineering courses at the university level is essential to meet the industry’s escalating demand for engineers. Each year, universities host hundreds of secondary students who tour their facilities to get a feel for the academic environment. This paper discusses an educational experiment designed as part of a semester-long undergraduate project in Informatics Engineering. The project involves tailoring a Dobot Magician robot, equipped with a standard webcam, to engage in a game of tic-tac-toe against a human user. The camera stream is continuously processed by a computer vision algorithm to detect the placement of the pieces on the game board. The paper outlines the project’s development stages and the elements involved, and presents preliminary test results. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
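
As a rough sketch of the perception step only (not the algorithm described in the paper), one simple way to read the board state from a top-down webcam frame with OpenCV is to threshold each of the nine cells; the grid coordinates and intensity thresholds below are assumptions.

```python
# Read a 3x3 tic-tac-toe board state from a top-down camera frame.
# The dark-pixel thresholds that separate empty cells, 'O', and 'X' are
# illustrative and would need calibration for a real setup.
import cv2
import numpy as np

def read_board(frame: np.ndarray, grid: tuple) -> list:
    """Return a 3x3 matrix of 'X', 'O' or ' ' from the board region (x, y, w, h)."""
    x, y, w, h = grid
    gray = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    board = []
    for row in range(3):
        cells = []
        for col in range(3):
            cell = gray[row * h // 3:(row + 1) * h // 3,
                        col * w // 3:(col + 1) * w // 3]
            dark = np.mean(cell < 100)  # fraction of dark (ink) pixels
            if dark < 0.05:
                cells.append(" ")       # mostly background: empty cell
            elif dark < 0.15:
                cells.append("O")       # thin ring covers few pixels
            else:
                cells.append("X")       # crossing strokes cover more pixels
        board.append(cells)
    return board
```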

2024

Automated Assessment of Pelvic Longitudinal Rotation Using Computer Vision in Canine Hip Dysplasia Screening

Authors
Franco-Gonçalo, P; Leite, P; Alves-Pimenta, S; Colaço, B; Gonçalves, L; Filipe, V; Mcevoy, F; Ferreira, M; Ginja, M;

Publication
VETERINARY SCIENCES

Abstract
Canine hip dysplasia (CHD) screening relies on accurate positioning in the ventrodorsal hip extended (VDHE) view, as even mild pelvic rotation can affect CHD scoring and impact breeding decisions. This study aimed to assess the association between pelvic rotation and asymmetry in obturator foramina areas (AOFAs) and to develop a computer vision model for automated AOFA measurement. In the first part, 203 radiographs were analyzed to examine the relationship between pelvic rotation, assessed through asymmetry in iliac wing and obturator foramina widths (AOFWs), and AOFAs. A significant association was found between pelvic rotation and AOFA, with AOFW showing a stronger correlation (R² = 0.92, p < 0.01). AOFW rotation values were categorized into minimal (n = 71), moderate (n = 41), marked (n = 37), and extreme (n = 54) groups, corresponding to mean AOFA ± standard deviation values of 33.28 ± 27.25, 54.73 ± 27.98, 85.85 ± 41.31, and 160.68 ± 64.20 mm², respectively. ANOVA and post hoc testing confirmed significant differences in AOFA across these groups (p < 0.01). In part two, the dataset was expanded to 312 images to develop the automated AOFA model, with 80% allocated for training, 10% for validation, and 10% for testing. On the 32 test images, the model achieved high segmentation accuracy (Dice score = 0.96; Intersection over Union = 0.93), closely aligning with examiner measurements. Paired t-tests indicated no significant differences between the examiner’s and the model’s outputs (p > 0.05), though Bland-Altman analysis identified occasional discrepancies. The model demonstrated excellent reliability (ICC = 0.99) with a standard error of 17.18 mm². A threshold of 50.46 mm² enabled effective differentiation between acceptable and excessive pelvic rotation. With additional training data, further improvements in precision are expected, enhancing the model’s clinical utility.
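
The segmentation metrics reported above (Dice score and Intersection over Union) and the 50.46 mm² decision threshold can be expressed compactly as in the sketch below; the binary-mask representation and the millimetre calibration are assumptions, not details taken from the paper.

```python
# Dice / IoU between a predicted and a reference obturator-foramen mask, and
# a simple acceptability check on the asymmetry in obturator foramina areas.
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Dice score and Intersection over Union for two non-empty binary masks."""
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return float(dice), float(iou)

def rotation_acceptable(aofa_mm2: float, threshold: float = 50.46) -> bool:
    """Flag radiographs whose AOFA exceeds the reported 50.46 mm² cut-off."""
    return aofa_mm2 <= threshold
```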

2024

Deep learning-based automated assessment of canine hip dysplasia

Authors
Loureiro, C; Gonçalves, L; Leite, P; Franco Gonçalo, P; Pereira, AI; Colaço, B; Alves Pimenta, S; McEvoy, F; Ginja, M; Filipe, V;

Publication
Multimedia Tools and Applications

Abstract
Radiographic canine hip dysplasia (CHD) diagnosis is crucial for breeding selection and disease management, delaying progression and alleviating the associated pain. Radiography is the primary imaging modality for CHD diagnosis, and visual assessment of radiographic features is sometimes used for accurate diagnosis. Specifically, alterations in femoral neck shape are crucial radiographic signs, with the existing literature suggesting that dysplastic hips have a greater femoral neck thickness (FNT). In this study, we aimed to develop a three-stage deep learning-based system that can automatically identify and quantify a femoral neck thickness index (FNTi) as a key metric to improve CHD diagnosis. Our system trained a keypoint detection model and a segmentation model to determine landmark and boundary coordinates of the femur and acetabulum, respectively. We then executed a series of mathematical operations to calculate the FNTi. The keypoint detection model achieved a mean absolute error (MAE) of 0.013 during training, while the femur segmentation results achieved a Dice score (DS) of 0.978. Our three-stage deep learning-based system achieved an intraclass correlation coefficient of 0.86 (95% confidence interval) and showed no significant differences in a paired t-test compared to a specialist (p > 0.05). To the best of our knowledge, this is the first study to thoroughly measure the FNTi by applying computer vision and deep learning-based approaches, which can provide reliable support in CHD diagnosis. © The Author(s) 2024.
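
As an illustration of the reported agreement analysis (not the authors' code), a paired t-test between automated and specialist FNTi measurements could be run with SciPy as below; the sample values are hypothetical placeholders.

```python
# Paired comparison of hypothetical FNTi values from the automated system
# and a specialist; p > 0.05 would indicate no significant difference,
# as reported in the abstract.
import numpy as np
from scipy import stats

fnti_model = np.array([0.62, 0.55, 0.71, 0.48, 0.66])       # placeholder outputs
fnti_specialist = np.array([0.60, 0.57, 0.69, 0.50, 0.65])  # placeholder measurements

t_stat, p_value = stats.ttest_rel(fnti_model, fnti_specialist)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```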

2024

Enhancing Medical Imaging Through Data Augmentation: A Review

Authors
Teixeira, B; Pinto, G; Filipe, V; Teixeira, A;

Publication
COMPUTATIONAL SCIENCE AND ITS APPLICATIONS-ICCSA 2024 WORKSHOPS, PT II

Abstract
This article conducts a comprehensive review of the existing literature on data augmentation and data generation techniques within the context of medical image processing. Addressing the challenges associated with building sizable medical image datasets, including the rarity of certain medical conditions, patient privacy concerns, the need for expert labeling, and the associated expenses, this review focuses on methodologies aimed at enhancing the volume and diversity of available data. Special emphasis is placed on techniques such as data augmentation and data generation, with a particular interest in their application to medical image datasets. The objective is to provide a synthesis of current research, methodologies, and advancements in this domain, offering insights into the state-of-the-art practices and identifying potential avenues for future developments in medical image data augmentation.
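
For a concrete flavour of the classical augmentation techniques covered by such reviews, the sketch below composes a few geometric and intensity transforms with torchvision; the library choice and parameter values are illustrative assumptions, not recommendations from the article.

```python
# A basic augmentation pipeline for 2D medical images stored as PIL images.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),                 # left-right flip
    transforms.RandomRotation(degrees=10),                  # small rotations
    transforms.ColorJitter(brightness=0.2, contrast=0.2),   # intensity variation
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),    # mild crop-and-resize
    transforms.ToTensor(),
])
# augmented = augment(pil_image)  # apply to one training image
```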
