2024
Authors
Cojocaru, I; Coelho, A; Ricardo, M;
Publication
2024 20TH INTERNATIONAL CONFERENCE ON WIRELESS AND MOBILE COMPUTING, NETWORKING AND COMMUNICATIONS, WIMOB
Abstract
The Integrated Access and Backhaul (IAB) 5G network architecture, introduced in 3GPP Release 16, leverages a shared 5G spectrum for both access and backhaul networks. Due to the novelty of IAB, there is a lack of suitable implementations and performance evaluations. This paper addresses this gap by proposing EMU-IAB, a mobility emulator for private standalone 5G IAB networks. The proposed emulation environment comprises a 5G Core Network and an IAB-enabled Radio Access Network (RAN) that leverages the Open-RAN (O-RAN) architecture. The RAN includes a fixed IAB Donor, a mobile IAB Node, and multiple User Equipments (UEs). The mobility of the IAB Node is managed through EMU-IAB, which allows the path loss of the emulated wireless channels to be defined. EMU-IAB was validated under a realistic IAB Node mobility scenario, addressing different traffic demands from the UEs.
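As an illustration only (the abstract does not specify the propagation model), a mobility emulator like the one described could derive the per-link path loss from the emulated IAB Node trajectory with a simple log-distance model; the parameter values and function names below are assumptions.

import math

def log_distance_path_loss(d_m, f_ghz, n=3.0, d0_m=1.0):
    """Path loss in dB: free-space loss at reference distance d0 plus a log-distance term."""
    fspl_d0 = 20 * math.log10(d0_m) + 20 * math.log10(f_ghz * 1e9) - 147.55
    return fspl_d0 + 10 * n * math.log10(max(d_m, d0_m) / d0_m)

# IAB Donor fixed at the origin; IAB Node moves along a straight path at 1.5 m/s.
donor = (0.0, 0.0)
for t in range(0, 60, 10):                      # seconds
    node = (5.0 + 1.5 * t, 10.0)
    d = math.dist(donor, node)
    pl = log_distance_path_loss(d, f_ghz=3.5)   # assumed n78 mid-band carrier
    print(f"t={t:2d}s  d={d:6.1f} m  path_loss={pl:5.1f} dB")

The emulator would then apply the computed path loss to the corresponding emulated channel at each time step.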
2024
Authors
Teixeira, FB; Simoes, C; Fidalgo, P; Pedrosa, W; Coelho, A; Ricardo, M; Pessoa, LM;
Publication
2024 IEEE GLOBECOM WORKSHOPS, GC WKSHPS
Abstract
Telecommunications and computer vision have evolved independently. With the emergence of high-frequency wireless links operating mostly in line-of-sight, visual data can help predict channel dynamics by detecting obstacles and help overcome them through beamforming or handover techniques. This paper proposes a novel architecture for delivering real-time radio and video sensing information to O-RAN xApps through a multi-agent approach, and introduces a new video function capable of generating blockage information for xApps, enabling Integrated Sensing and Communications. Experimental results show that the delay of sensing information remains under 1 ms and that an xApp can successfully use radio and video sensing information to control the 5G/6G RAN in real time.
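A minimal sketch of the idea, with an assumed message layout that is not the paper's actual interface: the video function publishes blockage reports, and an xApp combines them with radio measurements to trigger a handover. All names and thresholds below are hypothetical.

import json, time
from dataclasses import dataclass, asdict

@dataclass
class BlockageReport:
    cell_id: str
    ue_id: str
    blocked: bool          # obstacle detected on the line-of-sight path
    confidence: float
    timestamp: float

def xapp_decide(report: BlockageReport, rsrp_dbm: float, rsrp_threshold=-100.0) -> str:
    """Trigger a handover when vision predicts blockage or RSRP is already poor."""
    if report.blocked and report.confidence > 0.8:
        return "handover"
    if rsrp_dbm < rsrp_threshold:
        return "handover"
    return "keep"

report = BlockageReport("cell-1", "ue-42", True, 0.93, time.time())
print(json.dumps(asdict(report)), "->", xapp_decide(report, rsrp_dbm=-92.0))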
2024
Authors
Pereira, B; Cunha, B; Viana, P; Lopes, M; Melo, ASC; Sousa, ASP;
Publication
SENSORS
Abstract
Shoulder rehabilitation is a process that requires physical therapy sessions to recover the mobility of the affected limbs. However, these sessions are often limited by the availability and cost of specialized technicians, as well as the need for patients to travel to the session locations. This paper presents a novel smartphone-based approach using a pose estimation algorithm to evaluate the quality of the movements and provide feedback, allowing patients to perform autonomous recovery sessions. This paper reviews the state of the art in wearable devices and camera-based systems for human body detection and rehabilitation support, and describes the system developed, which uses MediaPipe to extract the coordinates of 33 key points on the patient's body and compares them with reference videos made by professional physiotherapists using cosine similarity and dynamic time warping. This paper also presents a clinical study that uses QTM, an optoelectronic system for motion capture, to validate the methods used by the smartphone application. The results show that there are statistically significant differences between the three methods for different exercises, highlighting the importance of selecting an appropriate method for specific exercises. This paper discusses the implications and limitations of the findings and suggests directions for future research.
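For illustration, the two comparison metrics named in the abstract can be sketched as follows, assuming each frame is a flattened vector of 33 (x, y, z) MediaPipe landmarks; the actual scoring pipeline in the paper may differ.

import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dtw_distance(seq_a, seq_b):
    """Classic O(N*M) dynamic time warping over per-frame Euclidean distances."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

patient = np.random.rand(120, 33 * 3)    # 120 frames of patient keypoints (toy data)
reference = np.random.rand(100, 33 * 3)  # reference sequence by a physiotherapist
print(cosine_similarity(patient[0], reference[0]), dtw_distance(patient, reference))

Cosine similarity compares corresponding frames directly, while DTW tolerates differences in execution speed by warping the time axis, which is why the two metrics can rank the same exercise differently.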
2024
Authors
Sulun, S; Viana, P; Davies, MEP;
Publication
EXPERT SYSTEMS WITH APPLICATIONS
Abstract
We introduce a novel method for movie genre classification, capitalizing on a diverse set of readily accessible pretrained models. These models extract high-level features related to visual scenery, objects, characters, text, speech, music, and audio effects. To intelligently fuse these pretrained features, we train small classifier models with low time and memory requirements. Employing the transformer model, our approach utilizes all video and audio frames of movie trailers without performing any temporal pooling, efficiently exploiting the correspondence between all elements, as opposed to the fixed and low number of frames typically used by traditional methods. Our approach fuses features originating from different tasks and modalities, with different dimensionalities, different temporal lengths, and complex dependencies as opposed to current approaches. Our method outperforms state-of-the-art movie genre classification models in terms of precision, recall, and mean average precision (mAP). To foster future research, we make the pretrained features for the entire MovieNet dataset, along with our genre classification code and the trained models, publicly available.
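A rough sketch of the fusion idea, with assumed shapes, dimensions, and class count rather than the released code: heterogeneous pretrained features are projected to a shared width and all frames are fed to a transformer encoder, with no temporal pooling before the classifier.

import torch
import torch.nn as nn

class LateFusionTransformer(nn.Module):
    def __init__(self, feat_dims, d_model=256, n_genres=21):  # n_genres is illustrative
        super().__init__()
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in feat_dims)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))
        self.head = nn.Linear(d_model, n_genres)

    def forward(self, feats):            # feats: list of (B, T_i, D_i) tensors
        tokens = [p(f) for p, f in zip(self.proj, feats)]
        x = torch.cat([self.cls.expand(feats[0].size(0), -1, -1)] + tokens, dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])        # multi-label genre logits from the CLS token

model = LateFusionTransformer(feat_dims=[768, 512, 128])
video_feats = torch.randn(2, 200, 768)   # e.g. per-frame visual features
audio_feats = torch.randn(2, 300, 512)   # e.g. audio embeddings
text_feats = torch.randn(2, 20, 128)     # e.g. detected on-screen text
print(model([video_feats, audio_feats, text_feats]).shape)  # (2, 21)

Because every feature stream keeps its full temporal length, sequences of different lengths and dimensionalities can be combined in a single token sequence, which is the property the abstract emphasizes.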
2024
Authors
Dias, J; Oliper, D; Soares, MR; Viana, P;
Publication
2024 IEEE 22ND MEDITERRANEAN ELECTROTECHNICAL CONFERENCE, MELECON 2024
Abstract
This paper addresses the critical challenge of optimising beacon placement to support indoor location services and proposes a methodology to optimise Base Station (BS) coverage while maintaining or even improving system precision. The algorithm builds on top of the building schematics and takes into account several aspects that affect the radio link range (material attenuation, Line of Sight (LOS) conditions, transmitted power, and receiver sensitivity). The outcome is provided as a coverage heat map, which is then compared with a standard layout based on existing expert guidelines to evaluate the efficacy of the proposed layout.
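A simplified sketch of such a coverage computation (the paper's exact propagation model is not given): received power over a grid from free-space loss plus per-wall attenuation, thresholded at the receiver sensitivity to produce a coverage map. All parameter values and the toy floor plan are assumptions.

import numpy as np

def received_power_dbm(tx_dbm, d_m, f_mhz, n_walls, wall_loss_db=6.0):
    # Free-space path loss with d in metres and f in MHz, plus wall attenuation.
    fspl = 20 * np.log10(np.maximum(d_m, 1.0)) + 20 * np.log10(f_mhz) - 27.55
    return tx_dbm - fspl - n_walls * wall_loss_db

xs, ys = np.meshgrid(np.arange(0, 30, 0.5), np.arange(0, 20, 0.5))
bs = (5.0, 10.0)                           # candidate base station position
d = np.hypot(xs - bs[0], ys - bs[1])
walls = (xs > 15).astype(int)              # one wall crossing for x > 15 m (toy layout)
rx = received_power_dbm(tx_dbm=0.0, d_m=d, f_mhz=2400.0, n_walls=walls)
coverage = rx > -90.0                      # assumed -90 dBm receiver sensitivity
print(f"covered area: {coverage.mean() * 100:.1f}% of the floor plan")

The rx grid is effectively the coverage heat map; a placement optimiser would evaluate it for candidate BS positions and retain the layout maximising coverage.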
2024
Authors
Sulun, S; Viana, P; Davies, MEP;
Publication
IEEE International Symposium on Multimedia, ISM 2024, Tokyo, Japan, December 11-13, 2024
Abstract
We introduce VEMOCLAP: Video EMOtion Classifier using Pretrained features, the first readily available and open-source web application that analyzes the emotional content of any user-provided video. We improve upon our previous work, which exploits open-source pretrained models operating on video frames and audio, and then efficiently fuses the resulting pretrained features using multi-head cross-attention. Our approach increases the state-of-the-art classification accuracy on the Ekman-6 video emotion dataset by 4.3% and offers an online application for users to run our model on their own videos or YouTube videos. We invite the readers to try our application at serkansulun.com/app.
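A minimal sketch of cross-attention fusion between two pretrained feature streams (video frames attending to audio), assuming fixed feature widths; the actual VEMOCLAP model and its hyperparameters are not reproduced here.

import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_classes=6):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        self.head = nn.Linear(d_model, n_classes)   # Ekman-6 emotion classes

    def forward(self, video_tokens, audio_tokens):
        # Video tokens query the audio tokens; residual + norm, then temporal mean pool.
        fused, _ = self.attn(video_tokens, audio_tokens, audio_tokens)
        fused = self.norm(video_tokens + fused)
        return self.head(fused.mean(dim=1))

model = CrossAttentionFusion()
video = torch.randn(1, 64, 256)   # 64 frame embeddings (toy data)
audio = torch.randn(1, 128, 256)  # 128 audio embeddings (toy data)
print(model(video, audio).shape)  # (1, 6)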