Publications by Gilberto Bernardes Almeida

2014

Considering roughness to describe and generate vertical musical structure in content-based algorithmic-assisted audio composition

Authors
Bernardes, G; Davies, MEP; Guedes, C; Pennycook, B;

Publication
Proceedings - 40th International Computer Music Conference, ICMC 2014 and 11th Sound and Music Computing Conference, SMC 2014 - Music Technology Meets Philosophy: From Digital Echos to Virtual Ethos

Abstract
This paper examines the correlation between musical dissonance and auditory roughness (the most significant factor of psychoacoustic dissonance), and the contribution of the latter to algorithmic composition. We designed an empirical study to assess how auditory roughness correlates with human judgments of dissonance in natural musical stimuli on the sound object time scale. The results showed a statistically significant correlation between roughness and listeners' judgments of dissonance for quasi-harmonic sounds. The paper concludes by presenting two musical applications of auditory roughness in algorithmic composition, in particular to supervise the vertical recombination of sound objects in the software earGram.
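
The abstract does not specify which roughness estimator was used; a common model for this kind of measure is Sethares' parameterisation of the Plomp-Levelt dissonance curve, sketched below in Python. Roughness is summed over all pairs of partials and is largest when two partials sit close together within a critical band. All parameter values and the example stimuli are illustrative, not taken from the paper.

```python
import numpy as np

def pair_roughness(f1, a1, f2, a2):
    """Roughness of one pair of partials, using Sethares' (1993)
    parameterisation of the Plomp-Levelt dissonance curve.
    Illustrative model: the paper does not name its estimator."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.0207 * f_lo + 18.96)  # scales the curve with register
    d = f_hi - f_lo
    return a1 * a2 * (np.exp(-3.5 * s * d) - np.exp(-5.75 * s * d))

def total_roughness(freqs, amps):
    """Total roughness of a sound object: sum over all partial pairs."""
    r = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            r += pair_roughness(freqs[i], amps[i], freqs[j], amps[j])
    return r

# Toy example: a quasi-harmonic tone layered with itself (unison)
# versus layered with a semitone-shifted copy.
f0 = 220.0
partials = [f0 * k for k in range(1, 7)]
amps = [1.0 / k for k in range(1, 7)]
shifted = [f * 2 ** (1 / 12) for f in partials]
print(total_roughness(partials + shifted, amps + amps) >
      total_roughness(partials + partials, amps + amps))  # True: semitone is rougher
```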

2015

earGram Actors: An Interactive Audiovisual System Based on Social Behavior

Authors
Beyls, P; Bernardes, G; Caetano, M;

Publication
Journal of Science and Technology of the Arts

Abstract
In multi-agent systems, local interactions among system components following relatively simple rules often result in complex overall systemic behavior. Complex behavioral and morphological patterns have been used to generate and organize audiovisual systems for artistic purposes. In this work, we propose to use the Actor model of social interactions to drive a concatenative synthesis engine called earGram in real time. The Actor model was originally developed to explore the emergence of complex visual patterns. In turn, earGram was originally developed to facilitate the creative exploration of concatenative sound synthesis. The integrated audiovisual system allows a human performer to interact with the system dynamics while receiving visual and auditory feedback. The interaction happens indirectly by disturbing the rules governing the social relationships amongst the actors, which results in a wide range of dynamic spatiotemporal patterns. A user-performer thus improvises within the behavioral scope of the system while evaluating the apparent connections between parameter values and the actual complexity of the system output.
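
As an illustration of the emergence the abstract describes, the Python sketch below runs a minimal multi-agent update loop and maps agent state onto hypothetical synthesis parameters. The rules, parameter names, and mapping are invented for illustration; they are not Beyls' Actor model or earGram's actual interface.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 20                             # number of actors
pos = rng.uniform(0, 1, (N, 2))    # actor positions in a unit square
attraction = 0.05                  # rule parameters a performer could perturb
repulsion = 0.02

def step(pos):
    """One update: each actor drifts toward the group centroid but is
    pushed away from close neighbours -- simple local rules whose
    interplay yields non-trivial global motion."""
    new = pos + attraction * (pos.mean(axis=0) - pos)
    for i in range(N):
        diff = pos[i] - pos                 # vectors from every actor to i
        dist = np.linalg.norm(diff, axis=1)
        near = (dist > 0) & (dist < 0.1)    # only nearby actors repel
        if near.any():
            new[i] += repulsion * (diff[near] / dist[near, None] ** 2).sum(axis=0)
    return np.clip(new, 0.0, 1.0)

for _ in range(100):
    pos = step(pos)
    # Hypothetical sonification: x selects a position in the sound corpus,
    # y sets playback amplitude (earGram's real mapping differs).
    corpus_positions, amplitudes = pos[:, 0], pos[:, 1]
```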

2018

Thermographic Evaluation of the Saxophonists' Embouchure

Authors
Cerqueira, J; Clemente, MP; Bernardes, G; Van Twillert, H; Portela, A; Mendes, JG; Vasconcelos, M;

Publication
VIPIMAGE 2017

Abstract
The orofacial complex is the primary link between the instrument and the instrumentalist when performing the musician's embouchure. The contact point is established between the saxophonist's lower lip, the upper maxillary dentition, and the mouthpiece. The functional demands on the saxophone player, and the consequent application of forces with excessive pressure, can significantly influence the orofacial structures. A thermographic evaluation was performed on an anatomical zone vital to the embouchure: the lip of the saxophonist. Substantial temperature changes were observed between measurements taken before and after playing the saxophone. The specificity of the embouchure regarding the position of the lower lip inside the oral cavity, and the anatomy and position of the central lower incisors, may be some of the factors behind the temperature differences found in the thermographic evaluation.
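
The comparison implied here, the mean temperature of a region of interest before versus after performance, can be sketched in a few lines. The thermogram arrays, region coordinates, and temperature values below are entirely hypothetical; the paper does not publish its processing pipeline.

```python
import numpy as np

def roi_mean_temp(thermogram, roi):
    """Mean temperature inside a rectangular region of interest.
    thermogram: 2-D array of temperatures (deg C); roi: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    return float(thermogram[r0:r1, c0:c1].mean())

# Toy data standing in for pre/post-performance thermograms of the lower lip.
rng = np.random.default_rng(0)
pre = 34.2 + rng.normal(0, 0.1, (240, 320))
post = pre + 1.3                    # simulated warming after playing
lip_roi = (120, 160, 140, 200)      # hypothetical lip region

delta = roi_mean_temp(post, lip_roi) - roi_mean_temp(pre, lip_roi)
print(f"Lip temperature change: {delta:+.2f} deg C")  # about +1.30
```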

2017

A Hierarchical Harmonic Mixing Method

Authors
Bernardes, G; Davies, MEP; Guedes, C;

Publication
Music Technology with Swing - 13th International Symposium, CMMR 2017, Matosinhos, Portugal, September 25-28, 2017, Revised Selected Papers

Abstract
We present a hierarchical harmonic mixing method for assisting users in the process of music mashup creation. Our main contributions are metrics for computing the harmonic compatibility between musical audio tracks at small- and large-scale structural levels, which combine and reassess existing perceptual relatedness (i.e., chroma vector similarity and key affinity) and dissonance-based approaches. Underpinning our harmonic compatibility metrics are harmonic indicators from the perceptually motivated Tonal Interval Space, which we adapt to describe musical audio. An interactive visualization shows hierarchical harmonic compatibility viewpoints across all tracks in a large musical audio collection. An evaluation of our harmonic mixing method shows that our adaptation of the Tonal Interval Space robustly describes the harmonic attributes of musical instrument sounds irrespective of timbral differences, and demonstrates that the harmonic compatibility metrics comply with the principles embodied in Western tonal harmony to a greater extent than previous approaches.
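
The Tonal Interval Space used here is built from DFT coefficients of a normalised chroma vector; the published space additionally applies perceptual weights to each coefficient, which the unweighted Python sketch below omits. The compatibility function is an illustrative proxy for the paper's full metric.

```python
import numpy as np

def tonal_interval_vector(chroma):
    """Project a 12-bin chroma vector into an (unweighted) Tonal Interval
    Space: DFT coefficients k = 1..6 of the L1-normalised chroma."""
    c = np.asarray(chroma, dtype=float)
    c = c / c.sum()
    return np.fft.fft(c)[1:7]          # six complex tonal coefficients

def harmonic_compatibility(chroma_a, chroma_b):
    """Proximity of two TIVs as a small-scale compatibility proxy
    (the paper combines several indicators; this is a sketch)."""
    a, b = tonal_interval_vector(chroma_a), tonal_interval_vector(chroma_b)
    return -np.linalg.norm(a - b)      # larger = more compatible

# C major should mix better with G major than with F-sharp major.
C  = [1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # C, E, G
G  = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1]   # G, B, D
Fs = [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0]   # F#, A#, C#
print(harmonic_compatibility(C, G) > harmonic_compatibility(C, Fs))  # True
```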

2019

A new classification of wind instruments: Orofacial considerations

Authors
Clemente, M; Mendes, J; Moreira, A; Bernardes, G; Van Twillert, H; Ferreira, A; Amarante, JM;

Publication
Journal of Oral Biology and Craniofacial Research

Abstract
Background/objective: Playing a wind instrument implies rhythmic jaw movements in which the embouchure applies forces of different directions and intensities to the orofacial structures. These features are relevant when comparing the embouchures of a clarinettist and a saxophone player, even though both belong to the single-reed instrument group, making it necessary to update the current classification. Methods: Lateral cephalograms were taken of single-reed, double-reed and brass instrumentalists with the purpose of analyzing the relationship between the mouthpiece and the orofacial structures. Results: The comparison of the different wind instruments showed substantial differences. The authors therefore propose a new classification of wind instruments: Class 1, single-reed mouthpiece (division 1: clarinet; division 2: saxophone); Class 2, double-reed instruments (division 1: oboe; division 2: bassoon); Class 3, cup-shaped mouthpiece (division 1: trumpet and French horn; division 2: trombone and tuba); Class 4, aperture mouthpieces (division 1: flute; division 2: transverse flute and piccolo). Conclusions: Elements such as the dental arches, teeth and lips assume vital importance in a new nomenclature and classification of wind instruments, which in the past were classified mainly by the type of mouthpiece, without taking into consideration its relationship with neighboring structures.
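
The proposed classification maps directly onto a small lookup structure; the Python sketch below encodes exactly the classes and divisions listed in the abstract (the `classify` helper is added for illustration).

```python
# The classification proposed in the abstract, as a lookup table.
WIND_CLASSIFICATION = {
    ("Class 1", "single-reed mouthpiece"): {
        "division 1": ["clarinet"],
        "division 2": ["saxophone"],
    },
    ("Class 2", "double-reed instruments"): {
        "division 1": ["oboe"],
        "division 2": ["bassoon"],
    },
    ("Class 3", "cup-shaped mouthpiece"): {
        "division 1": ["trumpet", "French horn"],
        "division 2": ["trombone", "tuba"],
    },
    ("Class 4", "aperture mouthpieces"): {
        "division 1": ["flute"],
        "division 2": ["transverse flute", "piccolo"],
    },
}

def classify(instrument):
    """Return (class, mouthpiece type, division) for an instrument name."""
    for (cls, mouthpiece), divisions in WIND_CLASSIFICATION.items():
        for division, instruments in divisions.items():
            if instrument in instruments:
                return cls, mouthpiece, division
    raise KeyError(f"{instrument!r} is not covered by the classification")

print(classify("saxophone"))  # ('Class 1', 'single-reed mouthpiece', 'division 2')
```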

2018

DIGIT: A Digital Foley System to Generate Footstep Sounds

Authors
Aly, L; Penha, R; Bernardes, G;

Publication
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Abstract
We present DIGItal sTeps (DIGIT), a system for assisting in the creation of footstep sounds in a post-production foley context, a practice that recreates all diegetic sounds for a moving image. The novelty behind DIGIT is the use of the acoustic (haptic) response of a gesture on a tangible interface as a means of navigating and retrieving similar matches from a large database of annotated footstep sounds. While capturing the tactile expressiveness of traditional foley practice in the exploration of physical objects, DIGIT streamlines the workflow of the audio post-production environment for film or games by reducing its costly and time-consuming requirements.
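
At its core this is feature-based nearest-neighbour retrieval: describe the captured gesture with a few audio descriptors and return the best-matching annotated footstep. The Python sketch below uses a deliberately crude two-feature description; DIGIT's actual descriptors and matching scheme are not detailed in the abstract.

```python
import numpy as np

def gesture_features(signal, sr):
    """Two crude descriptors of a tapped gesture captured as audio:
    RMS energy and spectral centroid (normalised to [0, 1]).
    Hypothetical feature set, chosen only for the example."""
    mag = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
    centroid = (freqs * mag).sum() / (mag.sum() + 1e-12) / (sr / 2)
    energy = float(np.sqrt((signal ** 2).mean()))
    return np.array([energy, centroid])

def retrieve(gesture, database, sr=48000):
    """Return the label of the footstep whose features best match the gesture."""
    query = gesture_features(gesture, sr)
    best = min(database, key=lambda entry: np.linalg.norm(
        gesture_features(entry["audio"], sr) - query))
    return best["label"]

# Toy database: two pre-annotated "footstep recordings" (noise stand-ins).
rng = np.random.default_rng(2)
database = [
    {"label": "gravel_step.wav", "audio": rng.normal(0, 1.0, 48000)},  # loud
    {"label": "carpet_step.wav", "audio": rng.normal(0, 0.1, 48000)},  # soft
]
tap = rng.normal(0, 0.9, 48000)   # an energetic tap on the tangible interface
print(retrieve(tap, database))    # gravel_step.wav -- the closer energy match
```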
