2023
Authors
Mosiichuk, V; Sampaio, A; Viana, P; Oliveira, T; Rosado, L;
Publication
APPLIED SCIENCES-BASEL
Abstract
Liquid-based cytology (LBC) plays a crucial role in the effective early detection of cervical cancer, contributing to substantially decreasing mortality rates. However, the visual examination of microscopic slides is a challenging, time-consuming, and ambiguous task. Shortages of specialized staff and equipment are increasing the interest in developing artificial intelligence (AI)-powered portable solutions to support screening programs. This paper presents a novel approach based on a RetinaNet model with a ResNet50 backbone to detect the nuclei of cervical lesions on mobile-acquired microscopic images of cytology samples, stratifying the lesions according to The Bethesda System (TBS) guidelines. This work was supported by a new dataset of images from LBC samples digitized with a portable smartphone-based microscope, encompassing nucleus annotations of 31,698 normal squamous cells and 1,395 lesions. Several experiments were conducted to optimize the model's detection performance, namely hyperparameter tuning, transfer learning, detected class adjustments, and per-class score threshold optimization. The proposed nucleus-based methodology improved the best baseline reported in the literature for detecting cervical lesions on microscopic images exclusively acquired with mobile devices coupled to the μSmartScope prototype, with per-class average precision, recall, and F1 score improvements of up to 17.6%, 22.9%, and 16.0%, respectively. Performance improvements were obtained by transferring knowledge from networks pre-trained on a smaller dataset closer to the target application domain, as well as by including normal squamous nuclei as a class detected by the model. Per-class tuning of the score threshold also produced a model more suitable to support screening procedures, achieving F1 score improvements in most TBS classes.
While further improvements are still required to use the proposed approach in a clinical context, this work reinforces the potential of using AI-powered mobile-based solutions to support cervical cancer screening. Such solutions can significantly impact screening programs worldwide, particularly in areas with limited access and restricted healthcare resources.
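The per-class score threshold optimization mentioned in the abstract can be illustrated with a minimal sketch: instead of one global confidence cut-off, each detected class gets its own threshold. The class names and threshold values below are hypothetical, not the paper's tuned values.

```python
# Hypothetical per-class thresholds for TBS-style classes (illustrative only).
THRESHOLDS = {"NILM": 0.50, "ASC-US": 0.35, "LSIL": 0.40, "HSIL": 0.30}

def filter_detections(detections, thresholds=THRESHOLDS):
    """Keep only detections whose score meets the class-specific threshold.

    detections: list of (class_name, score, bbox) tuples, as a generic
    detector head might output after non-maximum suppression.
    """
    return [
        (cls, score, box)
        for cls, score, box in detections
        if score >= thresholds.get(cls, 0.5)
    ]

dets = [("HSIL", 0.33, (10, 10, 40, 40)), ("NILM", 0.45, (5, 5, 20, 20))]
kept = filter_detections(dets)  # HSIL kept (0.33 >= 0.30), NILM dropped (0.45 < 0.50)
```

Lowering the threshold for clinically critical classes (e.g. HSIL here) trades precision for recall on exactly those classes, which is why per-class tuning can raise the F1 score where a single global threshold cannot.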
2023
Authors
Sulun, S; Oliveira, P; Viana, P;
Publication
PROGRESS IN ARTIFICIAL INTELLIGENCE, EPIA 2023, PT II
Abstract
We present a new large-scale emotion-labeled symbolic music dataset consisting of 12k MIDI songs. To create this dataset, we first trained emotion classification models on the GoEmotions dataset, achieving state-of-the-art results with a model half the size of the baseline. We then applied these models to lyrics from two large-scale MIDI datasets. Our dataset covers a wide range of fine-grained emotions, providing a valuable resource to explore the connection between music and emotions and, especially, to develop models that can generate music based on specific emotions. Our inference code, trained models, and datasets are available online.
2023
Authors
Reis, N; da Silva, JM; Correia, MV;
Publication
REMOTE SENSING
Abstract
The increased demand for and use of autonomous driving and advanced driver assistance systems has highlighted the issue of abnormalities occurring within the perception layers, some of which may result in accidents. Recent publications have noted the lack of standardized independent testing formats and insufficient methods with which to analyze, verify, and qualify LiDAR (Light Detection and Ranging)-acquired data and their subsequent labeling. While camera-based approaches benefit from a significant amount of long-term research, images captured through the visible spectrum can be unreliable in situations with impaired visibility, such as dim lighting, fog, and heavy rain. A redoubled focus on LiDAR usage would combat these shortcomings; however, research on the detection of anomalies and the validation of gathered data remains scarce compared to its camera-based counterparts. This paper aims to expand the knowledge of how to evaluate LiDAR data by introducing a novel statistical method able to detect these patterns and complement other performance evaluators. Although preliminary, the proposed methodology shows promising results in the evaluation of an algorithm's confidence score, the impact that weather and road conditions may have on data, and fringe cases in which the data may be insufficient or otherwise unusable.
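The abstract does not disclose the statistical method itself, but the idea of screening an algorithm's confidence scores statistically can be sketched as follows. This is a hypothetical illustration, assuming a simple z-score test over per-frame detector confidences; it is not the paper's actual method, and the threshold value is arbitrary.

```python
import statistics

def flag_anomalies(scores, z_thresh=1.5):
    """Flag frames whose detector confidence deviates strongly from the
    batch statistics, as a simple statistical screen for fringe cases.

    scores: per-frame confidence values in [0, 1].
    Returns indices of frames whose |z-score| exceeds z_thresh.
    """
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population std. dev. of the batch
    if sd == 0:
        return []  # all scores identical: nothing stands out
    return [i for i, s in enumerate(scores) if abs(s - mu) / sd > z_thresh]

# A sudden confidence drop (e.g. fog, heavy rain) shows up as an outlier.
frames = [0.91, 0.89, 0.90, 0.35, 0.92]
flagged = flag_anomalies(frames)  # only index 3 is flagged
```

A real pipeline would combine such a screen with context (weather, road conditions, point-cloud density) before declaring the data unusable, as the abstract suggests.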
2023
Authors
Karri, C; da Silva, JM; Correia, MV;
Publication
IEEE ACCESS
Abstract
Perception algorithms are essential for autonomous or semi-autonomous vehicles to perceive the semantics of their surroundings, including object detection, panoptic segmentation, and tracking. Decision-making in safety-critical situations, such as autonomous emergency braking and collision avoidance, relies on the outputs of these algorithms. This makes it essential to correctly assess such perception systems before their deployment and to monitor their performance when in use. It is difficult to test and validate these systems, particularly at runtime, due to the high-level and complex representations of their outputs. This paper presents an overview of different existing metrics used for the evaluation of LiDAR-based perception systems, emphasizing particularly object detection and tracking algorithms due to their importance in the final perception outcome. Along with generally used metrics, we also discuss the impact of Planning KL-Divergence (PKL), Timed Quality Temporal Logic (TQTL), and Spatio-temporal Quality Logic (STQL) metrics on object detection algorithms. In the case of panoptic segmentation, Panoptic Quality (PQ) and Parsing Covering (PC) metrics are analysed using pretrained models. Finally, the paper addresses the application of diverse metrics to evaluate different pretrained models with the respective perception algorithms on publicly available datasets. Besides identifying the various metrics being proposed, their performance and influence on models are also assessed after conducting new tests or reproducing the experimental results of the references under consideration.
2023
Authors
Éric Pereira Silva de Oliveira; F Maligno; José Machado da Silva; Susana João Oliveira; Maria Helena Figueiral;
Publication
Abstract
2023
Authors
Ramos, P; Oliveira, JM; Kourentzes, N; Fildes, R;
Publication
APPLIED SYSTEM INNOVATION
Abstract
Retailers depend on accurate forecasts of product sales at the Store × SKU level to efficiently manage their inventory. Consequently, there has been increasing interest in identifying more advanced statistical techniques that lead to accuracy improvements. However, the inclusion of multiple drivers affecting demand into commonly used ARIMA and ETS models is not straightforward, particularly when many explanatory variables are available. Moreover, regularization regression models that shrink the model's parameters allow a large amount of relevant information to be included but do not intrinsically handle the dynamics of the demand. These problems have not been addressed by previous studies. Nevertheless, multiple simultaneous interacting effects are common in retailing. To be successful, any approach needs to be automatic, robust, and efficiently scalable. In this study, we design novel approaches to forecast retailer product sales taking into account the main drivers which affect SKU demand at store level. To address the variable selection challenge, the use of dimensionality reduction via principal component analysis (PCA) and shrinkage estimators was investigated. The empirical results, using a case study of supermarket sales in Portugal, show that both PCA and shrinkage are useful and result in gains in forecast accuracy of the order of 10% over benchmarks, while offering insights on the impact of promotions. Focusing on the promotional periods, PCA-based models perform strongly, while shrinkage estimators over-shrink. For the non-promotional periods, shrinkage estimators significantly outperform the alternatives.
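The two techniques the abstract compares can be sketched together: PCA compresses many candidate demand drivers into a few components, and a shrinkage (here, ridge) estimator regularizes the regression coefficients. This is a generic illustration under simulated data, not the paper's models; the component count and penalty are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))  # 20 candidate demand drivers (simulated)
y = X[:, :3] @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

# PCA via SVD of the centered design matrix: keep the top-k components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
Z = Xc @ Vt[:k].T  # scores of each observation on the top-k components

# Ridge (shrinkage) estimator on the reduced space:
# beta = (Z'Z + lam * I)^-1 Z'y
lam = 1.0
beta = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ y)
y_hat = Z @ beta  # in-sample fit from the compressed drivers
```

Shrinkage could equally be applied to the full 20-column design; the abstract's finding is that the two routes behave differently in promotional versus non-promotional periods, which this sketch does not attempt to reproduce.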