Publications

2024

Smart Adjustable Furniture – An EPS@ISEP 2023 Project

Authors
Pronczuk, A; Mertz Revol, C; Hinzpeter, J; Smeets, J; Chmielik, M; Duarte, J; Malheiro, B; Ribeiro, C; Justo, J; Silva, F; Ferreira, P; Guedes, P;

Publication
Lecture Notes in Educational Technology

Abstract
Small living spaces require ingenious solutions that are functional, ergonomic and, above all, reconfigurable. This project for smart, ergonomic and adjustable furniture was embraced by a team of students from different countries, universities and study areas enrolled in the European Project Semester (EPS) at Instituto Superior de Engenharia do Porto (ISEP). EPS is a design project where international students work in teams to create a solution to a real problem from scratch, analysing the state of the art, the market and the associated ethical and sustainability issues. As a project-based learning process, EPS aims to prepare engineering students to work together in multidisciplinary teams, develop personal skills and address the challenges of the contemporary world. The current project aims to design, simulate and test an ethically and sustainability-driven, safe and transformable furniture solution. Amplea is the adjustable furniture solution developed by five EPS students in spring 2023. It transforms into a kitchen counter, dining table or standing desk. By transforming easily, Amplea’s design provides more comfort and saves space in small living spaces. This paper summarises the research, the design of the solution and the development and testing of the proof-of-concept prototype. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.

2024

Application of Meta Learning in Quality Assessment of Wearable Electrocardiogram Recordings

Authors
Huerta, A; Martínez-Rodrigo, A; Guimarâes, M; Carneiro, D; Rieta, JJ; Alcaraz, R;

Publication
ADVANCES IN DIGITAL HEALTH AND MEDICAL BIOENGINEERING, VOL 2, EHB-2023

Abstract
The high mortality rates caused by cardiovascular disorders (CVDs) place them, according to the WHO, at the top of non-communicable diseases, killing about 18 million people annually. It is crucial to detect arrhythmias or cardiovascular events early. For that purpose, novel portable acquisition devices have enabled long-term electrocardiographic (ECG) recording, the most common way to discover arrhythmias of a random nature such as atrial fibrillation (AF). Nonetheless, the acquisition environment can distort or even destroy the ECG recordings, hindering the proper diagnosis of CVDs. Thus, it is necessary to assess ECG signal quality automatically. The proposed approach combines feature and meta-feature extraction from 5-s ECG segments with the ability of machine learning classifiers to discern between high- and low-quality ECG segments. Three different approaches were tested, reaching accuracy values close to 83% with the original feature set and improving up to 90% when all the available meta-features were utilized. Moreover, within the high-quality group, the segments belonging to the AF class improved by around 7%, to a rate of over 85%, when the meta-feature set was used. The extraction of meta-features improves the accuracy even when only a subset of meta-features is selected from the whole set.
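The pipeline described above (split a long recording into 5-s segments, extract per-segment features, feed them to a classifier) can be sketched as follows. The feature choices here (peak-to-peak amplitude and zero-crossing count) are illustrative stand-ins, not the features or meta-features used in the paper.

```python
# Illustrative sketch of the segmentation/feature-extraction front end
# of an ECG quality-assessment pipeline. Feature names are hypothetical.

def segment_ecg(signal, fs, seg_seconds=5):
    """Split a 1-D ECG signal (sequence of samples at rate fs) into
    non-overlapping 5-s segments, dropping any incomplete tail."""
    seg_len = fs * seg_seconds
    return [signal[i:i + seg_len]
            for i in range(0, len(signal) - seg_len + 1, seg_len)]

def segment_features(segment):
    """Toy per-segment feature vector: peak-to-peak amplitude and
    zero-crossing count. A real system would feed richer features
    (and meta-features derived from them) to an ML classifier."""
    p2p = max(segment) - min(segment)
    crossings = sum(1 for a, b in zip(segment, segment[1:]) if a * b < 0)
    return {"p2p": p2p, "zero_crossings": crossings}
```

Each feature dictionary would then be vectorized and passed to a trained high/low-quality classifier; the meta-learning step in the paper derives additional features from these base features.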

2024

Hybrid time-spatial video saliency detection method to enhance human action recognition systems

Authors
Gharahbagh, AA; Hajihashemi, V; Ferreira, MC; Machado, JJM; Tavares, JMRS;

Publication
MULTIMEDIA TOOLS AND APPLICATIONS

Abstract
Since digital media has become increasingly popular, video processing has expanded in recent years. Video processing systems require high levels of processing, which is one of the challenges in this field. Various approaches, such as hardware upgrades, algorithmic optimizations, and removing unnecessary information, have been suggested to solve this problem. This study proposes a video saliency-map-based method that identifies the critical parts of the video and improves the system's overall performance. Using an image registration algorithm, the proposed method first removes the camera's motion. Subsequently, each video frame's color, edge, and gradient information is used to obtain a spatial saliency map. Combining spatial saliency with motion information derived from optical flow and color-based segmentation can produce a saliency map containing both motion and spatial data. A nonlinear function, optimized using a multi-objective genetic algorithm, is proposed to combine the temporal and spatial saliency maps. The proposed saliency map method was added as a preprocessing step in several Human Action Recognition (HAR) systems based on deep learning, and its performance was evaluated. Furthermore, the proposed method was compared with similar methods based on saliency maps, and the superiority of the proposed method was confirmed. The results show that the proposed method can improve HAR efficiency by up to 6.5% relative to HAR methods with no preprocessing step and 3.9% compared to the HAR method containing a temporal saliency map.
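The core combination step (blending per-pixel spatial and temporal saliency through a nonlinear function whose parameters are tuned by a genetic algorithm) can be sketched as below. The blend form and the parameter values `alpha` and `gamma` are illustrative assumptions, not the function from the paper; in the paper, such parameters would be the quantities optimized by the multi-objective genetic algorithm.

```python
# Hypothetical nonlinear blend of spatial and temporal saliency maps
# (2-D lists of values in [0, 1]). alpha weights the two maps and
# gamma applies a power-law nonlinearity; both are illustrative.

def combine_saliency(spatial, temporal, alpha=0.6, gamma=1.5):
    """Return a per-pixel combined saliency map, clipped to [0, 1]."""
    return [[min(1.0, (alpha * s + (1 - alpha) * t) ** gamma)
             for s, t in zip(s_row, t_row)]
            for s_row, t_row in zip(spatial, temporal)]
```

A genetic algorithm would search over `(alpha, gamma)` to maximize downstream HAR accuracy while minimizing the retained (processed) area of each frame.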

2024

Deep Learning Approaches for Socially Contextualized Acoustic Event Detection in Social Media Posts

Authors
Hajihashemi, V; Gharahbagh, AA; Ferreira, MC; Machado, JJM; Tavares, JMRS;

Publication
GOOD PRACTICES AND NEW PERSPECTIVES IN INFORMATION SYSTEMS AND TECHNOLOGIES, VOL 6, WORLDCIST 2024

Abstract
In recent years, social media platforms have become an essential source of information. Therefore, with their increasing popularity, there is a growing need for effective methods for detecting and analyzing their content in real time. Deep learning is a machine learning technique that teaches computers to recognize complex patterns, and deep learning techniques are promising for analyzing acoustic signals from social media posts. In this article, a novel deep learning approach is proposed for socially contextualized event detection based on acoustic signals. The approach combines the power of deep learning with meaningful features such as Mel-frequency cepstral coefficients. To evaluate the effectiveness of the proposed method, it was applied to a real dataset collected from social protests in Iran. The results show that the proposed system can find a protester's clip with an accuracy of approximately 82.57%. Thus, the proposed approach has the potential to significantly improve the accuracy of systems for filtering social media posts.
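The front end of an acoustic pipeline like this one starts by cutting the audio into short overlapping frames before cepstral features (such as MFCCs) are computed. The sketch below shows that framing step plus a per-frame log-energy, a simplified stand-in for the MFCC features the paper feeds to its network; frame sizes and the feature itself are illustrative assumptions.

```python
import math

# Illustrative audio front end: overlapping framing followed by a
# simple per-frame feature. Real MFCC extraction would additionally
# apply windowing, an FFT, a mel filterbank, and a DCT to each frame.

def frame_signal(samples, frame_len, hop):
    """Split an audio sample sequence into overlapping frames of
    frame_len samples, advancing by hop samples each time."""
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

def log_energy(frame):
    """Log-energy of one frame (small epsilon avoids log(0)); a toy
    stand-in for a cepstral feature vector."""
    return math.log(sum(x * x for x in frame) + 1e-12)
```

The resulting per-frame feature sequence is what a deep classifier would consume to label each clip.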

2024

Cattle Monitoring Blimp – An EPS@ISEP 2023 Project

Authors
Blommestijn, K; Dallongeville, K; Paulsen, M; Mamos, M; Gupta, S; Duarte, J; Malheiro, B; Ribeiro, C; Justo, J; Silva, F; Ferreira, P; Guedes, P;

Publication
Lecture Notes in Educational Technology

Abstract
This paper describes the project-based learning experience of a multidisciplinary and multicultural team of students enrolled in the spring of 2023 in the European Project Semester at the Instituto Superior de Engenharia do Porto (EPS@ISEP). Animo is an original blimp-based concept that aims to help farmers better manage their livestock. Its development was motivated by the difficulty of effectively monitoring cattle herds over vast areas, especially in remote locations where locating animals is challenging. This environmentally friendly solution offers real-time livestock monitoring without thermal engines. Real-time monitoring is achieved through the blimp’s extensive animal data collection. Farmers may quickly discover and handle herd welfare issues by accessing information via a user-friendly App. With an emphasis on accessibility and environmental sustainability, Animo seeks to increase agricultural productivity and profitability. The user controls the blimp’s motion through the app to obtain a comprehensive farm view. Targeting Australia’s large cattle stations, it aims to enhance productivity while minimising the environmental impact. © The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.

2024

Automation of optical tweezers: an enabler for single cell analysis and diagnostic

Authors
Jorge, P; Teixeira, J; Rocha, V; Ribeiro, J; Silva, N;

Publication
BIOPHOTONICS IN POINT-OF-CARE III

Abstract
Sensing at the single-cell level can provide insights into a cell's dynamics and heterogeneity, yielding information otherwise unattainable with traditional biological methods, where average population behavior is observed. In this context, optical tweezers provide the ability to select, separate, manipulate and identify single cells or other types of microparticles, potentially enabling single-cell diagnostics. Forward- or backscatter analysis of the light interacting with the trapped cells can provide valuable insights into the cells' optical, geometrical and mechanical properties. In particular, the combination of tweezers systems with advanced machine learning algorithms can enable single-cell identification capabilities. However, typical processing pipelines require a training stage, which often struggles to generalize to new sets of data. In this context, fully automated tweezers systems can provide mechanisms to obtain much larger datasets with minimal effort from the users, while eliminating procedural variability. In this work, a pipeline for full automation of optical tweezers systems is discussed. A performance comparison between manually operated and fully automated tweezers systems is presented, clearly showing the advantages of the latter. A case study demonstrating the ability of the system to discriminate molecular binding events on microparticles is presented.
