2023
Authors
Torresan, C; Bernardes, G; Caetano, E; Restivo, T;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
Stress-ribbon footbridges are often prone to excessive vibrations induced by environmental phenomena (e.g., wind) and human actions (e.g., walking). This paper studies a stress-ribbon footbridge at the Faculty of Engineering of the University of Porto (FEUP) in Portugal, where different degrees of vertical vibrations are perceptible in response to human actions. We adopt sonification techniques to create a sonic manifestation of the footbridge’s dynamic response to human interaction. Two distinct sonification techniques – audification and parameter mapping – are adopted to provide intuitive access to the footbridge dynamics from low-level acceleration data and higher-level spectral analysis. To evaluate how well the proposed sonification techniques expose relevant information about human actions on the footbridge, an online perceptual test was conducted to assess the understanding of the following three dimensions: 1) the number of people interacting with the footbridge, 2) their walking speed, and 3) the steadiness of their pace. The online perceptual test was conducted with and without a short training phase. Results from n = 23 participants show that parameter mapping sonification is more effective than audification in promoting an intuitive understanding of the footbridge dynamics. Furthermore, when exposed to a short training phase, the participants’ ability to identify the correct dimensions improved. © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
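As an illustration of the parameter-mapping approach described above, the sketch below maps a vertical-acceleration envelope to the pitch and loudness of a sine oscillator; the window length, frequency range, and mapping are illustrative assumptions, not the mapping used in the paper.

```python
import numpy as np

def parameter_mapping_sonification(accel, fs_data=100, fs_audio=44100,
                                   f_min=200.0, f_max=800.0):
    """Sonify an acceleration signal: a moving-RMS envelope drives the
    frequency and amplitude of a sine oscillator. All ranges here are
    illustrative, not the paper's values."""
    # Envelope of the acceleration signal (moving RMS over ~0.5 s windows).
    win = max(1, fs_data // 2)
    rms = np.sqrt(np.convolve(accel ** 2, np.ones(win) / win, mode="same"))
    env_data = rms / (rms.max() + 1e-12)

    # Resample the envelope to audio rate and map it to frequency/amplitude.
    t_data = np.arange(len(accel)) / fs_data
    t_audio = np.arange(int(t_data[-1] * fs_audio)) / fs_audio
    env = np.interp(t_audio, t_data, env_data)
    freq = f_min + (f_max - f_min) * env            # stronger vibration -> higher pitch
    phase = 2 * np.pi * np.cumsum(freq) / fs_audio  # phase-continuous oscillator
    return env * np.sin(phase)                      # loudness follows the envelope
```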
2023
Authors
Forero, J; Bernardes, G; Mendes, M;
Publication
Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST
Abstract
Language is closely related to how we perceive ourselves and signify our reality. In this scope, we created Desiring Machines, an interactive media art project that enables the experience of affective virtual environments, adopting speech emotion recognition as the leading input source. Participants can share their emotions by speaking, singing, reciting poetry, or making any vocal sounds to generate virtual environments on the fly. Our contribution combines two machine learning models: a long short-term memory network and a convolutional neural network that predict four main emotional categories from high-level semantic and low-level paralinguistic acoustic features. Predicted emotions are mapped to audiovisual representations by an end-to-end process encoding emotion in virtual environments. We use a generative model of chord progressions to transfer speech emotion into music based on the tonal interval space. We also implement a generative adversarial network to synthesize an image from the transcribed speech-to-text. The generated visuals are used as the style image in the style-transfer process onto an equirectangular projection of a spherical panorama selected for each emotional category. The result is an immersive virtual space encapsulating emotions in spheres arranged in a 3D environment. Users can create new affective representations or interact with previously encoded instances (this ArtsIT publication is an extended version of the earlier abstract presented at ACM MM22 [1]). © 2023, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.
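A minimal sketch of how the two models' outputs could be fused and mapped to generative audiovisual parameters is given below; the category names, fusion weight, and parameter values are assumptions for illustration and do not reflect the project's actual configuration.

```python
# Hypothetical mapping from emotion categories to generative parameters.
EMOTION_PARAMS = {
    "joy":     {"chord_quality": "major",      "tempo_bpm": 128, "palette": "warm"},
    "sadness": {"chord_quality": "minor",      "tempo_bpm": 70,  "palette": "cold"},
    "anger":   {"chord_quality": "diminished", "tempo_bpm": 140, "palette": "saturated"},
    "calm":    {"chord_quality": "major7",     "tempo_bpm": 80,  "palette": "pastel"},
}

def fuse_and_map(p_semantic, p_acoustic, w_semantic=0.5):
    """Late-fuse class probabilities from the semantic (LSTM) and acoustic
    (CNN) branches by a weighted average, then look up the audiovisual
    parameters for the winning emotion. The fusion rule is an assumption."""
    scores = {c: w_semantic * p_semantic[c] + (1 - w_semantic) * p_acoustic[c]
              for c in EMOTION_PARAMS}
    emotion = max(scores, key=scores.get)
    return emotion, EMOTION_PARAMS[emotion]

# Example: both branches lean towards "calm".
emotion, params = fuse_and_map(
    {"joy": 0.2, "sadness": 0.1, "anger": 0.1, "calm": 0.6},
    {"joy": 0.3, "sadness": 0.1, "anger": 0.1, "calm": 0.5},
)
```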
2023
Authors
Clemente, MP; Mendes, J; Bernardes, G; Van Twillert, H; Ferreira, AP; Amarante, JM;
Publication
JOURNAL OF INTERNATIONAL MEDICAL RESEARCH
Abstract
This paper presents a clinical case study investigating the pattern of a saxophonist's embouchure as a possible origin of orofacial pain. The rehabilitation addressed the dental occlusion and a fracture in a metal-ceramic bridge. To evaluate the undesirable loads on the upper teeth, two piezoresistive sensors were placed between the central incisors and the mouthpiece during the embouchure. A new fixed metal-ceramic prosthesis was placed from teeth 13 to 25, and two implants were placed in the premolar zone corresponding to teeth 14 and 15. After the oral rehabilitation, the embouchure force measurements showed that the new fixed metal-ceramic prosthesis promoted higher stability: the musician applied a more symmetric loading of the central incisors (teeth 11 and 21). The functional demands of the saxophone player, and the consequent application of excessive pressure, can significantly influence and modify the metal-ceramic position in the anterior zone (teeth 21/22). The contribution of engineering (i.e., monitoring the applied forces on the musician's dental structures) was therefore crucial for the correct assessment and design of the treatment plan.
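The loading symmetry mentioned above can be summarized with a simple index; the function below is a hypothetical illustration of such a metric, not one defined in the paper.

```python
def incisor_load_symmetry(force_11, force_21):
    """Symmetry of the loads measured by the two piezoresistive sensors on
    the central incisors (teeth 11 and 21): 1.0 means perfectly symmetric
    loading, 0.0 means all load on one tooth. A hypothetical summary metric
    for illustration only."""
    total = force_11 + force_21
    if total == 0:
        return 1.0
    return 1.0 - abs(force_11 - force_21) / total
```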
2022
Authors
Bernardes, G; Carvalho, N; Pereira, S;
Publication
JOURNAL OF NEW MUSIC RESEARCH
Abstract
FluidHarmony is an algorithmic method for defining a hierarchical harmonic lexicon in equal temperaments. It utilizes an enharmonic weighted Fourier transform space to represent pitch class set (pcset) relations. The method ranks pcsets based on user-defined constraints: the importance of interval classes (ICs) and a reference pcset. An evaluation of 5,184 Western musical pieces from the 16th to 20th centuries shows that FluidHarmony captures 8% of the corpus's harmony in its top pcsets. This highlights the role of ICs and a reference pcset in regulating harmony in Western tonal music, while enabling systematic approaches to defining hierarchies and establishing metrics beyond 12-TET.
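A minimal sketch of the ranking idea follows: each pcset is represented by the magnitudes of its DFT coefficients, weighted by user-defined interval-class importances, and scored against a reference pcset. The weights, scoring rule, and cardinality below are illustrative assumptions, not the paper's configuration.

```python
import itertools
import numpy as np

def dft_magnitudes(pcset, n_edo=12):
    """Magnitudes of DFT coefficients 1..n_edo//2 of a pcset's characteristic
    function; each coefficient reflects one periodicity/interval class."""
    chroma = np.zeros(n_edo)
    chroma[list(pcset)] = 1.0
    return np.abs(np.fft.fft(chroma)[1:n_edo // 2 + 1])

def rank_pcsets(ic_weights, reference, n_edo=12, cardinality=3):
    """Rank all pcsets of a given cardinality by a weighted sum of their DFT
    magnitudes minus their distance to the reference pcset in the weighted
    space. Weights and the combination rule are illustrative assumptions."""
    w = np.asarray(ic_weights, dtype=float)
    ref = w * dft_magnitudes(reference, n_edo)
    scored = []
    for pcs in itertools.combinations(range(n_edo), cardinality):
        vec = w * dft_magnitudes(pcs, n_edo)
        scored.append((vec.sum() - np.linalg.norm(vec - ref), pcs))
    return [pcs for _, pcs in sorted(scored, reverse=True)]

# Example: emphasize the fifth-related coefficient, with a C major triad as reference.
ranking = rank_pcsets(ic_weights=[0, 0, 0, 0, 1.0, 0.2], reference={0, 4, 7})
```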
2010
Authors
Bernardes, G; Guedes, C; Pennycook, B;
Publication
Proceedings of the 7th Sound and Music Computing Conference, SMC 2010
Abstract
In this paper we present an application that uses an evolutionary algorithm for the real-time generation of polyphonic drum loops in a particular style. The population of rhythms is derived from the analysis of MIDI drum loops, which profiles each style for the subsequent automatic generation of rhythmic patterns that evolve over time through genetic algorithm operators and user input data. © 2010 Gilberto Bernardes et al.
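The sketch below illustrates the general shape of such a generative loop: binary drum patterns are seeded from analysed MIDI loops and evolved through selection, crossover, and mutation towards the style's onset profile. Operator choices, rates, and the fitness function are illustrative assumptions, not the system described in the paper.

```python
import random

def evolve_pattern(seed_patterns, generations=50, pop_size=16, mutation_rate=0.05):
    """Evolve a binary drum pattern (list of 0/1 steps) towards the per-step
    onset profile of the analysed seed loops. Illustrative GA sketch."""
    steps = len(seed_patterns[0])
    # Style profile: onset probability per step, estimated from the seed loops.
    profile = [sum(p[i] for p in seed_patterns) / len(seed_patterns)
               for i in range(steps)]

    def fitness(pattern):                 # closer to the style profile is better
        return -sum((pattern[i] - profile[i]) ** 2 for i in range(steps))

    def crossover(a, b):                  # single-point crossover
        cut = random.randrange(1, steps)
        return a[:cut] + b[cut:]

    def mutate(pattern):                  # flip each step with a small probability
        return [1 - s if random.random() < mutation_rate else s for s in pattern]

    population = [random.choice(seed_patterns)[:] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]        # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)
```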
2020
Authors
Ramires, A; Bernardes, G; Davies, MEP; Serra, X;
Publication
CoRR
Abstract
In this paper, we present TIV.lib, an open-source library for the content-based tonal description of musical audio signals. Its main novelty lies in the perceptually inspired Tonal Interval Vector space based on the Discrete Fourier Transform, from which multiple instantaneous and global representations, descriptors, and metrics are computed, e.g., harmonic change, dissonance, diatonicity, and musical key. The library is cross-platform, implemented in Python and the graphical programming language Pure Data, and can be used in both online and offline scenarios. Of note is its potential for enhanced Music Information Retrieval, where tonal descriptors sit at the core of numerous methods and applications.
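As a sketch of the underlying representation (not of TIV.lib's actual API), the Tonal Interval Vector of a 12-bin chroma frame can be computed as the energy-normalised DFT coefficients 1 to 6, each scaled by a weight; the unit weights below are placeholders rather than the library's perceptually derived values. A harmonic-change descriptor then follows as the distance between consecutive TIVs.

```python
import numpy as np

def tonal_interval_vector(chroma, weights=(1, 1, 1, 1, 1, 1)):
    """Tonal Interval Vector of a 12-bin chroma vector: energy-normalised DFT
    coefficients k = 1..6, each scaled by a weight. Unit weights are a
    placeholder, not TIV.lib's defaults."""
    chroma = np.asarray(chroma, dtype=float)
    spectrum = np.fft.fft(chroma / (chroma.sum() + 1e-12))
    return np.asarray(weights) * spectrum[1:7]       # six complex coefficients

def harmonic_change(chromagram, **kwargs):
    """Frame-to-frame harmonic change: Euclidean distance between consecutive
    TIVs; peaks suggest chord changes."""
    tivs = [tonal_interval_vector(frame, **kwargs) for frame in chromagram]
    return [float(np.linalg.norm(b - a)) for a, b in zip(tivs, tivs[1:])]
```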