2025
Authors
Nunes, JD; Montezuma, D; Oliveira, D; Pereira, T; Cardoso, JS;
Publication
MEDICAL IMAGE ANALYSIS
Abstract
Nuclear-derived morphological features and biomarkers provide relevant insights regarding the tumour microenvironment, while also allowing diagnosis and prognosis in specific cancer types. However, manually annotating nuclei from the gigapixel Haematoxylin and Eosin (H&E)-stained Whole Slide Images (WSIs) is a laborious and costly task, meaning automated algorithms for cell nuclei instance segmentation and classification could alleviate the workload of pathologists and clinical researchers and at the same time facilitate the automatic extraction of clinically interpretable features for artificial intelligence (AI) tools. Yet, due to the high intra- and inter-class variability of nuclei morphological and chromatic features, as well as the susceptibility of H&E stains to artefacts, state-of-the-art algorithms cannot correctly detect and classify instances with the necessary performance. In this work, we hypothesize that context and attention inductive biases in artificial neural networks (ANNs) could increase the performance and generalization of algorithms for cell nuclei instance segmentation and classification. To understand the advantages, use-cases, and limitations of context- and attention-based mechanisms in instance segmentation and classification, we start by reviewing works in computer vision and medical imaging. We then conduct a thorough survey on context and attention methods for cell nuclei instance segmentation and classification from H&E-stained microscopy imaging, while providing a comprehensive discussion of the challenges being tackled with context and attention. In addition, we illustrate some limitations of current approaches and present ideas for future research. As a case study, we extend both a general (Mask-RCNN) and a customized (HoVer-Net) instance segmentation and classification method with context- and attention-based mechanisms and perform a comparative analysis on a multicentre dataset for colon nuclei identification and counting. Although pathologists rely on context at multiple levels while paying attention to specific Regions of Interest (RoIs) when analysing and annotating WSIs, our findings suggest that translating that domain knowledge into algorithm design is no trivial task, and that, to fully exploit these mechanisms in ANNs, the scientific understanding of these methods should first be addressed.
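As a hedged illustration of the kind of attention mechanism the case study refers to, the following sketch shows a squeeze-and-excitation style channel-attention block that could be inserted after a backbone stage of an instance segmentation network such as Mask-RCNN; the module name, reduction ratio, and placement are assumptions for illustration, not the exact design evaluated in the paper.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style gating: global average pooling summarises each
    # channel, a small MLP produces per-channel weights, and the feature map is
    # re-weighted before it reaches the detection/segmentation head.
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w

# Toy usage: gate a (batch, channels, H, W) feature map from a backbone stage.
features = torch.randn(2, 256, 64, 64)
attended = ChannelAttention(256)(features)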
2025
Authors
Caldeira, E; Neto, PC; Huber, M; Damer, N; Sequeira, AF;
Publication
INFORMATION FUSION
Abstract
The development of deep learning algorithms has greatly expanded humanity's capacity to automate tasks. However, the huge improvement in the performance of these models is highly correlated with their increasing level of complexity, limiting their usefulness in human-oriented applications, which are usually deployed in resource-constrained devices. This led to the development of compression techniques that drastically reduce the computational and memory costs of deep learning models without significant performance degradation. These compressed models are particularly important when implementing multi-model fusion solutions where multiple models are required to operate simultaneously. This paper aims to systematize the current literature on this topic by presenting a comprehensive survey of model compression techniques in biometrics applications, namely quantization, knowledge distillation and pruning. We conduct a critical analysis of the comparative value of these techniques, focusing on their advantages and disadvantages and presenting suggestions for future work directions that can potentially improve the current methods. Additionally, we discuss and analyze the link between model bias and model compression, highlighting the need to direct compression research toward model fairness in future works.
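As a hedged illustration of one of the three compression families surveyed, the sketch below shows a vanilla knowledge-distillation loss in PyTorch; the temperature, loss weighting, and toy dimensions are assumptions for illustration, not a method proposed in the paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: the student matches the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random logits for a 10-class classifier.
student_out = torch.randn(8, 10)
teacher_out = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_out, teacher_out, labels)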
2024
Autores
Silva S.M.; Almeida N.T.;
Publicação
2024 IEEE 22nd Mediterranean Electrotechnical Conference, MELECON 2024
Abstract
The rapid proliferation of Internet of Things (IoT) systems, encompassing a wide range of devices and sensors with limited battery life, has highlighted the critical need for energy-efficient solutions to extend the operational lifespan of these battery-powered devices. One effective strategy for reducing energy consumption is minimizing the number and size of retransmitted packets in case of communication errors. Among the potential solutions, Incremental Redundancy Hybrid Automatic Repeat reQuest (IR-HARQ) communication schemes have emerged as particularly compelling options by adopting the best aspects of error control, namely, automatic repetition and variable redundancy. This work addresses the challenge by developing a simulator capable of executing and analysing several (H)ARQ schemes using different channel models, such as the Additive White Gaussian Noise (AWGN) and Gilbert-Elliott (GE) models. The primary objective is to compare their performance across multiple metrics, enabling a thorough evaluation of their capabilities. The results indicate that IR-HARQ outperforms alternative methods, especially in the presence of burst errors. Furthermore, its potential for further adaptation and enhancement opens up new avenues for optimizing energy consumption and extending the lifespan of battery-powered IoT devices.
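As a hedged illustration of the channel and retransmission models such a simulator compares, the sketch below implements a two-state Gilbert-Elliott burst-error channel and a simple stop-and-wait ARQ loop in Python; the transition probabilities, bit-error rates, and packet size are assumptions for illustration, not the parameters used in the paper.

import random

def gilbert_elliott_bits(n_bits, p_gb=0.01, p_bg=0.1, ber_good=1e-5, ber_bad=1e-2):
    # Two-state Markov channel: 'good' and 'bad' states with different bit-error rates.
    errors, bad = [], False
    for _ in range(n_bits):
        if not bad:
            bad = random.random() < p_gb      # Good -> Bad transition
        else:
            bad = random.random() >= p_bg     # stays Bad unless Bad -> Good fires
        errors.append(1 if random.random() < (ber_bad if bad else ber_good) else 0)
    return errors

def arq_transmissions(packet_bits=1024, max_retx=5):
    # Count transmissions until a packet is received error-free (plain ARQ).
    # An IR-HARQ scheme would instead send additional redundancy per attempt
    # and combine it with earlier receptions before re-decoding.
    for attempt in range(1, max_retx + 1):
        if sum(gilbert_elliott_bits(packet_bits)) == 0:
            return attempt
    return max_retx

avg = sum(arq_transmissions() for _ in range(200)) / 200
print(f"average transmissions per packet: {avg:.2f}")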
2024
Authors
Avelar, H; Ferreira, JC;
Publication
2024 IEEE 22nd Mediterranean Electrotechnical Conference, MELECON 2024
Abstract
This paper proposes a method to avoid using a CORDIC or external memory to process the steering vectors when calculating the pseudospectrum of correlation-based beamforming algorithms. We show that if we decompose the steering vector equation, the size of the matrix to be saved in memory becomes independent of the antenna array size. Moreover, the amount of data needed is small enough to be saved in the internal block RAMs of the FPGA SoC. This method also greatly reduces the number of memory accesses by offloading some processing to hardware, while keeping the frequency at 300 MHz with a precision of 0.25°. Finally, we show that this approach is scalable, since the complexity grows logarithmically for bigger arrays, and the symmetry in the matrices obtained allows for even more compact data.
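As a hedged illustration of how a steering-vector decomposition can shrink what must be stored, the NumPy sketch below computes a correlation-based (Bartlett) pseudospectrum for a uniform linear array, where the steering vector for each scanned angle is rebuilt from powers of a single per-angle phasor, so the stored data is independent of the number of antennas; the array geometry, angle grid, and toy covariance are assumptions for illustration, not the paper's FPGA implementation.

import numpy as np

def pseudospectrum(R, n_ant, angles_deg, d_over_lambda=0.5):
    # One phasor per scanned angle; the full steering vector is its powers 0..n_ant-1.
    base = np.exp(-2j * np.pi * d_over_lambda * np.sin(np.deg2rad(angles_deg)))
    p = np.empty(len(angles_deg))
    for i, b in enumerate(base):
        a = b ** np.arange(n_ant)             # steering vector rebuilt on the fly
        p[i] = np.real(a.conj() @ R @ a)      # a^H R a, the correlation-based spectrum
    return p

# Toy usage: covariance of one source at ~20 degrees plus noise, 8-element ULA.
n_ant, true_doa = 8, 20.0
a_true = np.exp(-2j * np.pi * 0.5 * np.sin(np.deg2rad(true_doa)) * np.arange(n_ant))
R = np.outer(a_true, a_true.conj()) + 0.1 * np.eye(n_ant)
angles = np.arange(-90.0, 90.0, 0.25)         # 0.25 degree scanning grid
spectrum = pseudospectrum(R, n_ant, angles)
print(f"estimated DoA: {angles[np.argmax(spectrum)]:.2f} deg")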
2024
Authors
Teixeira, FB; Ricardo, M; Coelho, A; Oliveira, HP; Viana, P; Paulino, N; Fontes, H; Marques, P; Campos, R; Pessoa, LM;
Publication
CoRR
Abstract
2024
Authors
Queirós, G; Correia, P; Coelho, A; Ricardo, M;
Publication
2024 19TH WIRELESS ON-DEMAND NETWORK SYSTEMS AND SERVICES CONFERENCE, WONS
Abstract
Over the years, mobile networks were deployed using monolithic hardware based on proprietary solutions. Recently, the concept of open Radio Access Networks (RANs), including the standards and specifications from the O-RAN Alliance, has emerged. It aims at enabling open, interoperable networks based on independent virtualized components connected through open interfaces. This paves the way to collect metrics and to control the RAN components by means of software applications such as the O-RAN-specified xApps. We propose a private standalone network leveraged by a mobile RAN employing the O-RAN architecture. The mobile RAN consists of a radio node (gNB) carried by a Mobile Robotic Platform autonomously positioned to provide on-demand wireless connectivity. The proposed solution employs a novel Mobility Management xApp to collect and process metrics from the RAN, while using an original algorithm to define the placement of the mobile RAN. This improves the connectivity offered to the User Equipments (UEs).
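As a hedged illustration only, the sketch below shows a generic weighted-centroid placement heuristic that an xApp could apply to RAN metrics in order to position a mobile gNB; the data model, weighting, and thresholds are assumptions for illustration, and this is not the original placement algorithm proposed in the paper.

from dataclasses import dataclass

@dataclass
class UEReport:
    x: float          # UE position (metres)
    y: float
    rsrp_dbm: float   # reported signal power; weaker UEs pull the gNB towards them

def place_gnb(reports):
    # Weight each UE by how poor its link is, then move the gNB to the weighted centroid.
    weights = [max(1.0, -r.rsrp_dbm - 80.0) for r in reports]
    total = sum(weights)
    x = sum(w * r.x for w, r in zip(weights, reports)) / total
    y = sum(w * r.y for w, r in zip(weights, reports)) / total
    return x, y

# Toy usage: three UEs, the one at (100, 40) reports the weakest signal.
ues = [UEReport(0, 0, -85), UEReport(20, 10, -90), UEReport(100, 40, -110)]
print(place_gnb(ues))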