The 2nd International Conference on Timbre will be held 3–4 September 2020
in Thessaloniki, Greece as a virtual conference.
The study of timbre has recently gained remarkable momentum. Following the Berlin Interdisciplinary Workshop on Timbre (2017) and the international conference Timbre is a Many-Splendored Thing (2018), the goal of Timbre 2020 is to continue a tradition of meetings around timbre.
Timbre poses multifaceted research questions at the intersection of psychology, musicology, acoustics, and cognitive neuroscience. Bringing together leading experts from these and related fields, Timbre 2020 aims to provide a truly interdisciplinary forum for exchanging novel perspectives and forging collaborations across different disciplines to help address challenges in our understanding of timbre from empirical, theoretical, and computational perspectives.
Four keynotes from distinguished experts will discuss timbre from a broad and complementary set of perspectives: Morwaread M. Farbood (New York University) on the role of timbre in music psychology, Jennifer Bizley (University College London) on the neural coding of timbre, David Howard (Royal Holloway University of London) on vocal timbre, and Stefan Bilbao (University of Edinburgh) on the acoustics of musical instruments and rooms.
Timbre 2020 is jointly organised by Asterios Zacharakis, Charalampos Saitis and Kai Siedenburg, with support from the School of Music Studies of the Aristotle University of Thessaloniki, the School of Electronic Engineering and Computer Science of Queen Mary University of London, and the Department of Medical Physics and Acoustics of the University of Oldenburg.
Asterios Zacharakis, Aristotle University of Thessaloniki
Kai Siedenburg, University of Oldenburg
Charalampos Saitis, Queen Mary University of London
Asterios Zacharakis, Aristotle University of Thessaloniki
Konstantinos Pastiadis, Aristotle University of Thessaloniki
Emilios Cambouropoulos, Aristotle University of Thessaloniki
Computer science – Philippe Esling, IRCAM/Sorbonne Université
Composition – Denis Smalley, City, University of London (Emeritus)
Composition and theory – Jason Noble, McGill University
Ethnomusicology – Cornelia Fales, Indiana University
Music theory and analysis – Robert Hasegawa, McGill University
Musicology – Emily Dolan, Brown University
Neuroscience – Vinoo Alluri, International Institute of Information Technology
Popular music studies – Zachary Wallmark, University of Oregon
Psychoacoustics – Sven-Amin Lembke, De Montfort University
Psychology – Stephen McAdams, McGill University
Signal processing – Marcelo Caetano, McGill University
Sound recording – Joshua Reiss, Queen Mary University of London
Voice/Synthesis – David Howard, Royal Holloway University of London
Morwaread M. Farbood is an Associate Professor of Music Technology in the Department of Music and Performing Arts Professions at New York University, where she is affiliated with the Music and Audio Research Laboratory (MARL) and the Max Planck/NYU Center for Language, Music, and Emotion (CLaME). Her research focuses primarily on the computational modeling of real-time aspects of music perception. She explores how emergent phenomena such as tonality and musical tension are perceived and how knowledge of these high-level aspects of music can be incorporated into software applications for facilitating musical creativity. Farbood received an A.B. from Harvard and S.M. and Ph.D. degrees from the Massachusetts Institute of Technology. She is the co-founder of the Northeast Music Cognition Group (NEMCOG), an organization that brings together music perception researchers in the Northeast Corridor region of the United States.
Jennifer Bizley is currently Professor of Auditory Neuroscience and holder of a Wellcome Trust / Royal Society Sir Henry Dale Fellowship. She is based at the Ear Institute, University College London, where she established her lab in 2011. Prior to that she was a D.Phil. student and postdoctoral researcher at the University of Oxford and an undergraduate at the University of Cambridge. Her work seeks to understand how the brain makes sense of sound, and in particular how auditory cortex facilitates listening in the noisy and complex everyday situations we experience. She combines behavioural testing in humans and animals with observation and perturbation of neural activity, and with computational modelling. Her work focuses on the representation of auditory and audiovisual signals in the auditory cortex.
Stefan Bilbao (B.A. in Physics, Harvard, 1992; M.Sc. and Ph.D. in Electrical Engineering, Stanford, 1996 and 2001) is currently Professor of Acoustics and Audio Signal Processing in the Acoustics and Audio Group at the University of Edinburgh, and previously held positions at the Sonic Arts Research Centre, Queen's University Belfast, and the Stanford Space Telecommunications and Radioscience Laboratory. He led the NESS (Next Generation Sound Synthesis) and WRAM (Wave-based Room Acoustics Modeling) projects, both funded by the European Research Council and run jointly by the Acoustics and Audio Group and the Edinburgh Parallel Computing Centre at the University of Edinburgh between 2012 and 2018. He was born in Montreal, Quebec, Canada.
David Howard researches human voice production and has developed the Vocal Tract Organ for music performance as well as voice synthesis, including recreating the sound of a 3,000-year-old mummy in 2020. His PhD in the 1980s involved making an analogue fundamental frequency estimation device for cochlear implant users. Since then he has worked on tuning in a cappella (unaccompanied) choral singing, voice development in cathedral choristers, and the use of AR and VR for storytelling.
Registration is now open.
Please fill in the online registration form.
Timbre 2020 will be presented via Zoom; the Zoom link will be emailed to all registered attendees one day prior to the conference.
Nikos Diminakis (musicologist, multi-instrumentalist, music educator, b. 1981) is a PhD candidate in systematic musicology at the School of Music Studies of the Aristotle University of Thessaloniki. He studied musicology at the Aristotle University (BMus) and received his saxophone diploma from the State Conservatory of Thessaloniki. His PhD research deals with the analysis of the “Nine Etudes for Saxophones in Four Books” by the French composer Christian Lauba. He has presented and published articles on music analysis in various conference proceedings. He is a music teacher in public elementary schools, and since 2013 he has also taught courses (Didjeridou, Harmony, Vocal and Aural Skills Training) at the School of Music Studies of the Aristotle University. Since 2014 he has been incorporating beatbox techniques into several instruments (mainly winds) through improvisations and his own compositions. Some of these compositions were documented in Gina Georgiadou’s “Beatbox & Winds – Nikos Diminakis”, which premiered at the 19th Thessaloniki Documentary Festival (2017) and has since been presented in several other places in Greece and abroad (London, Amsterdam, Sydney, Chicago, etc.), as well as on the Greek national channel ERT3.
Matina Kalaitzidou (music educator, b. 1989) is a PhD candidate in systematic musicology at the School of Music Studies of the Aristotle University of Thessaloniki. She studied musicology at the Aristotle University (BMus) and received her piano diploma from the Municipal Conservatory of Kalamaria. Since 2017 she has taught Vocal and Aural Skills Training at the School of Music Studies of the Aristotle University.
Beatbox & Instruments is a musical performance based on the performer’s original compositions, incorporating beatbox techniques into a constantly changing instrumental-timbral environment.
Beatbox is a vocal technique whose name stands for the box that produces the beat: in short, the performer’s mouth and its sounds. Beatbox initially denoted a historically documented musical idiom that branched out of the American underground hip-hop movement of the 1980s. Over time it has grown into a globally recognised and dynamically evolving mode of musical expression, not only in hip-hop culture but also in other genres such as drum & bass, dub, dubstep, electro and techno. Beatbox is also rapidly becoming an umbrella term, since it combines diverse sound production techniques of different ethnic groups from various periods of their cultural identity (e.g. Mongolian throat singing, eefing). It acts as a continually expanding repository of musical sounds, produced solely by the performer’s mouth, and thus in some way reflects the ancient and ongoing musical process of experimenting with every potential environmental sound in order to incorporate it gradually into an art form.
1. Beastbox (didgeridoo & beatbox)
2. Tuluculu (toy melodica & beatbox)
This talk explores how timbre contributes to the perception of musical tension. Tension is an aggregate of a wide range of musical and auditory features and is a fundamental aspect of how listeners interpret and enjoy music. Timbre as a contributor to musical tension has received relatively little attention from an empirical perspective compared to other musical features such as melodic contour and harmony. The studies described here explore how common timbre descriptors contribute to tension perception. Multiple features including spectral centroid, inharmonicity, and roughness were examined through listener evaluations of tension in both artificially generated stimuli and electroacoustic works by well-known composers such as Stockhausen and Nono. Timbral tension was further examined in an audiovisual context by pairing electroacoustic compositions with abstract animations.
15.00 CET: Caitlyn Trevor, Luc Arnal and Sascha Frühholz | Scary music mimics alarming acoustic feature of screams
15.20 CET: Lena Heng and Stephen McAdams | Timbre’s function within a musical phrase in the perception of affective intents
15.40 CET: Maria Perevedentseva | Timbre and Affect in Electronic Dance Music Discourse
- Kai Siedenburg | Mapping the interrelation between spectral centroid and fundamental frequency for orchestral instrument sounds
- Sven-Amin Lembke | Sound-gesture identification in real-world sounds of varying timbral complexity
- Cyrus Vahidi, George Fazekas, Charalampos Saitis and Alessandro Palladini | Timbre Space Representation of a Subtractive Synthesizer
- Matt Collins | Timbral Threads: Compositional Strategies for Achieving Timbral Blend in Mixed Electroacoustic Music
- Lindsey Reymore | Timbre Trait Analysis: The Semantics of Instrumentation
- Christos Drouzas and Charalampos Saitis | Verbal Description of Musical Brightness
- Ivan Simurra, Patricia Vanzella and João Sato | Timbre and Visual Forms: a crossmodal study relating acoustic features and the Bouba-Kiki Effect
- Gabrielle Choma | How Periodicity in Timbre Alters Our Perception of Time: An Analysis of “Prologue” by Gerard Grisey
- Ryan Anderson, Alyxandria Sundheimer and William Shofner | Cross-categorical discrimination of simple speech and music sounds based on timbral fidelity in musically experienced and naïve listeners
- Graeme Noble, Joanna Spyra and Matthew Woolhouse | Memory for Musical Key Distinguished by Timbre
- Harin Lee and Daniel Müllensiefen | A New Test for Measuring Individual’s Timbre Perception Ability
- Kaustuv Kanti Ganguli, Christos Plachouras, Sertan Şentürk, Andrew Eisenberg and Carlos Guedes | Mapping Timbre Space in Regional Music Collections using Harmonic-Percussive Source Separation (HPSS) Decomposition
17.00 CET: Ben Hayes and Charalampos Saitis | There’s more to timbre than musical instruments: semantic dimensions of FM sounds
17.20 CET: Bodo Winter and Marcus Perlman | Crossmodal language and onomatopoeia in descriptions of bird vocalization
17.40 CET: Permagnus Lindborg | Which timbral features granger-cause colour associations to music?
1. Urban Herder (double recorder & beatbox)
2. 4 Drops in my Flutebox (flute & beatbox)
3. Cyclo (flute & beatbox)
4. Harpbeat (mouth harp & beatbox)
19.00 CET: Speed dating
20.00 CET: Francesco Bigoni, Sofia Dahl and Michael Grossbach | Characterizing Subtle Timbre Effects of Drum Strokes Played with Different Technique
20.20 CET: Claudia Fritz | On the difficulty to relate the timbral qualities of a bowed-string instrument with its acoustic properties and construction parameters
20.40 CET: Joshua Albrecht | One singer, many voices: Distinctive within-singer groupings in Tom Waits
Sound synthesis and explorations of timbre have been intertwined at least as far back as Risset’s early experiments with additive synthesis in the 1960s. Particularly in the early days, there was a preoccupation with the notion of “natural” synthetic sound. As the thinking went, a good sound synthesis method should produce sound output with all the attributes of acoustically produced sound. As Chowning wrote in 1973: “The synthesis of natural sounds has been elusive…” Physical modelling principles offer a partial remedy: natural synthetic sound is no longer elusive. And yet, physical models are posed in a way that makes any scientific exploration of the notion of timbre quite difficult. Physical modelling is a roundabout approach: physical models obey the laws of physics and not human perception, and it is expected that any acoustic system obeying the laws of physics will produce natural sound. Why they produce natural sound is wrapped up in the physical parameters that define a particular model, which often do not relate directly to perceptual definitions of timbre (or even pitch or loudness). The aim of this talk is to give a short qualitative introduction to physical modelling synthesis and to explore, through both mathematical models and sound examples, the way engineers and musicians (and not scientists!) think about the notion of timbre.
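The point that physical models are specified in physical rather than perceptual terms can be illustrated with the simplest possible physical model. The sketch below is a generic textbook example, not material from the talk, and all parameter values are illustrative assumptions: it simulates a damped mass-spring oscillator with an explicit finite-difference scheme. The inputs are mass, stiffness and damping; pitch and decay only emerge indirectly from the physics.

```python
import math

# Illustrative sketch (not from the talk): the simplest physical model,
# a damped mass-spring oscillator  M*u'' + R*u' + K*u = 0,
# simulated with a centered finite-difference scheme.
# We specify *physical* parameters, not pitch, loudness or decay time.

SR = 44100                 # audio sample rate (Hz)
k = 1.0 / SR               # time step (s)

M = 0.001                  # mass (kg)           -- illustrative value
K = 1000.0                 # stiffness (N/m)     -- illustrative value
R = 0.02                   # damping (kg/s)      -- illustrative value

# The heard pitch is only implicit in the physics:
# f0 = sqrt(K/M) / (2*pi)  (about 159 Hz with these values)
f0 = math.sqrt(K / M) / (2.0 * math.pi)

# Centered differences for u'' and u' give the explicit update:
# u_next = ((2M - K*k^2)*u + (R*k/2 - M)*u_prev) / (M + R*k/2)
a = M + R * k / 2.0
u_prev = u = 1e-3          # released from rest: a displacement "plucks" it
out = []
for _ in range(SR):        # one second of output samples
    u_next = ((2.0 * M - K * k * k) * u + (R * k / 2.0 - M) * u_prev) / a
    out.append(u_next)
    u_prev, u = u, u_next
```

Retuning the oscillator means changing K or M, and shortening its decay means changing R or M; nothing in the model speaks the language of pitch, loudness or timbre directly, which is exactly the difficulty the talk addresses.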
1. Butterflies in Full Score or Butter Flies in Fool's Core (melodica & beatbox)
2. 8b (piano & beatbox)
3. 8a (piano & beatbox)
4. Inn F (piano & beatbox)
1. Clap Your Lips (selections II) (baritone saxophone & beatbox)
2. Al10 (alto-tenor saxophones & beatbox)
Timbre is a key perceptual feature of sound that allows the listener to identify a sound source. Timbral differences enable the recognition of musical instruments and are critical for vowel perception in human speech. In this talk I will present recent work that has explored how the auditory cortex extracts and represents spectral timbre and how neural representations facilitate perceptual constancy. Perceptual constancy requires neural representations that are selective for object identity, but also tolerant across identity-preserving transformations. By combining behavioural testing in ferrets and humans with neural recordings from the auditory cortex of ferrets actively discriminating sound timbre, we will demonstrate how cortical representations encode timbre across differences in pitch and location, and robustly in the presence of background noise.
15.00 CET: Braden Maxwell, Johanna Fritzinger and Laurel Carney | Neural Mechanisms for Timbre: Spectral-Centroid Discrimination based on a Model of Midbrain Neurons
15.20 CET: Sarah Sauvé, Benjamin Rich Zendel and Jeremy Marozeau | Age and experience-related use of timbral auditory streaming cues
15.40 CET: Eddy Savvas Kazazis, Philippe Depalle and Stephen McAdams | Perceptual ratio scales of timbre-related audio descriptors
- Alejandro Delgado, Charalampos Saitis and Mark Sandler | Spectral and Temporal Timbral Cues of Vocal Imitations of Drum Sounds
- Islah Ali-MacLachlan, Edmund Hunt and Alastair Jamieson | Player recognition for traditional Irish flute recordings using K-nearest neighbour classification
- Thomas Chalkias and Konstantinos Pastiadis | Perceptual characteristics of spaces of music performance and listening
- Erica Huynh, Joël Bensoam and Stephen McAdams | Perception of action and object categories in typical and atypical excitation-resonator interactions of musical instruments
- Carolina Espinoza, Alonso Arancibia, Gabriel Cartes and Claudio Falcón | New materials, new sounds: how metamaterials can change the timbre of musical instruments
- Antoine Caillon, Adrien Bitton, Brice Gatinet and Philippe Esling | Timbre Latent Space: Exploration and Creative Aspects
- Victor Rosi, Olivier Houix, Nicolas Misdariis and Patrick Susini | Uncovering the meaning of four semantic attributes of sound: Bright, Rough, Round and Warm
- Jake Patten and Michael McBeath | The difference between shrieks and shrugs: Spectral envelope correlates with changes in pitch and loudness
- Ivonne Michele Abondano Florez | Distorted Pieces of Something: A Compositional Approach to Luminance as a Timbral Dimension
- Asterios Zacharakis, Ben Hayes, Charalampos Saitis and Konstantinos Pastiadis | Evidence for timbre space robustness to an uncontrolled online stimulus presentation
- Kaustuv Kanti Ganguli, Akshay Anantapadmanabhan and Carlos Guedes | Questioning the Fundamental Problem-Definition of Mridangam Transcription
17.00 CET: Moe Touizrar and Kai Siedenburg | The medium is the message: Questioning the necessity of a syntax for timbre
17.20 CET: Didier Guigue and Charles de Paiva Santana | Orchestration and Drama in J.-P. Rameau Les Boréades
17.40 CET: Jason Noble, Kit Soden and Zachary Wallmark | The Semantics of Orchestration: A Corpus Analysis
1. In the Spectrum (tzouras & beatbox)
2. Hop it / Zbanayir (Kemencheridoo & beatbox)
3. Spitting Spiders (didgeridoo, piano & beatbox)
19.00 CET: Nathalie Herold | Towards a Theory and Analysis of Timbre based on Auditory Scene Analysis Principles: A Case Study of Beethoven’s Piano Sonata Op. 106, Third Movement
19.20 CET: Matthew Zeller | Klangfarbenmelodie in 1911: Anton Webern's Opp. 9 and 10
19.40 CET: Felipe Pinto-d'Aguiar | Musical OOPArts: early emergences of timbral objects
Unaccompanied, or ‘a cappella’, choral singing is a fine art that, when done to perfection, involves subtle adjustments of tuning to achieve a high degree of consonance throughout. Timbre can influence pitch perception, an effect that is not directly obvious, and this talk will explore some of the ways in which it can occur. It will also consider the implications for singers’ tuning of individual notes, the potential for pitch drift, and audience appreciation of the overall tuning.
1. Compositions by qualia5
Each presenter/attendee can enter the Zoom virtual room using the link provided in the instructions email.
The session will consist of 6 sub-sessions, each of which will last 8 minutes and include 3–4 randomly selected participants. Each participant will be allocated a maximum of 2 minutes for self-presentation. Once you enter the first sub-session via the link provided in the instructions email, switching to the subsequent sub-sessions will be automatic.
Separate virtual rooms hosting each poster can be accessed through the corresponding links included in the instructions email. Poster session presenters should join their rooms 10 minutes before the session begins.
Gather Town is a virtual space where you can interact with other people in a more direct and informal way. The platform will be open during the breaks (18:00 - 19:00 CET) and the social events (22:00 - 23:00 CET). You can enter the two available virtual spaces (maximum of 50 people in each) via the links provided in the instructions email.
Organisers and participants can contact the conference's technical support via the following Skype account and email:
Skype Account Name: TIMBRE 2020
The TIMBRE 2020 Skype account will be online throughout the conference.