As Covid-19 continues to have an impact around the globe, we have been looking for alternatives. With additional travel bans in place and no clear resolution in sight, it has become increasingly clear that a physical conference is no longer feasible.
Should we postpone to 2021? We originally chose 2020 because both ICMCP and ESCOM are due next year.
Should we postpone to 2022? We could, but we might lose momentum.
You guessed it right: Timbre 2020 is happening, and we are going fully virtual! The submission format remains as is, and proceedings will be published online as usual. We will have our keynotes, talks, posters, and socials over a period of two days. More detailed information will follow shortly. Stay tuned!
There will be limitations (think of all the amazing Greek food) but also unique opportunities (we will host a panel discussion on the pros and cons of running timbre experiments online). And we will get to see each other as planned and be timbre geeks. Mind you, Thessaloniki, we will be back.
Abstract submission deadline: May 22, 2020
Notification of acceptance: June 30, 2020
Camera-ready paper submission deadline: July 17, 2020
KEEP CALM & TIMBRE ON – Asterios, Kai, Charis
The 2nd International Conference on Timbre will be held 3–4 September 2020
in Thessaloniki, Greece as a virtual conference.
The study of timbre has recently gained remarkable momentum. Following the Berlin Interdisciplinary Workshop on Timbre (2017) and the international conference Timbre is a Many-Splendored Thing (2018), the goal of Timbre 2020 is to continue this tradition of meetings around timbre.
Timbre poses multifaceted research questions at the intersection of psychology, musicology, acoustics, and cognitive neuroscience. Bringing together leading experts from these and related fields, Timbre 2020 aims to provide a truly interdisciplinary forum for exchanging novel perspectives and forging collaborations across different disciplines to help address challenges in our understanding of timbre from empirical, theoretical, and computational perspectives.
Four keynotes from distinguished experts will discuss timbre from a broad and complementary set of perspectives: Morwaread M. Farbood (New York University) on the role of timbre in music psychology, Jennifer Bizley (University College London) on the neural coding of timbre, David Howard (Royal Holloway University of London) on vocal timbre, and Stefan Bilbao (University of Edinburgh) on the acoustics of musical instruments and rooms.
Timbre 2020 is jointly organised by Asterios Zacharakis, Kai Siedenburg, and Charalampos Saitis, with support from the School of Music Studies of the Aristotle University of Thessaloniki, the School of Electronic Engineering and Computer Science of Queen Mary University of London, and the Department of Medical Physics and Acoustics of the University of Oldenburg.
Asterios Zacharakis, Aristotle University of Thessaloniki
Kai Siedenburg, University of Oldenburg
Charalampos Saitis, Queen Mary University of London
Asterios Zacharakis, Aristotle University of Thessaloniki
Konstantinos Pastiadis, Aristotle University of Thessaloniki
Emilios Cambouropoulos, Aristotle University of Thessaloniki
Acoustics – Stefan Weinzierl, Technische Universität Berlin
Computer science – Philippe Esling, IRCAM/Sorbonne Université
Composition – Denis Smalley, City, University of London (Emeritus)
Ethnomusicology – Cornelia Fales, Indiana University
Music theory and analysis – Robert Hasegawa, McGill University
Musicology – Emily Dolan, Brown University
Neuroscience – Vinoo Alluri, International Institute of Information Technology
Popular music studies – Zachary Wallmark, University of Oregon
Psychology – Stephen McAdams, McGill University
Signal processing – Marcelo Caetano, McGill University
Sound recording – Joshua Reiss, Queen Mary University of London
Voice/Synthesis – David Howard, Royal Holloway University of London
Morwaread M. Farbood is an Associate Professor of Music Technology in the Department of Music and Performing Arts Professions at New York University, where she is affiliated with the Music and Audio Research Laboratory (MARL) and the Max Planck/NYU Center for Language, Music, and Emotion (CLaME). Her research focuses primarily on computational modeling of real-time aspects of music perception. She explores how emergent phenomena such as tonality and musical tension are perceived and how knowledge of these high-level aspects of music can be incorporated into software applications for facilitating musical creativity. Farbood received an A.B. from Harvard University and S.M. and Ph.D. degrees from the Massachusetts Institute of Technology. She is the co-founder of the Northeast Music Cognition Group (NEMCOG), an organization that brings together music perception researchers in the Northeast Corridor region of the United States.
Jennifer Bizley is currently Professor of Auditory Neuroscience and holder of a Wellcome Trust / Royal Society Sir Henry Dale Fellowship. She is based at the Ear Institute, University College London, where she established her lab in 2011. Prior to that she was a D.Phil. student and postdoctoral researcher at the University of Oxford and an undergraduate at the University of Cambridge. Her work seeks to understand how the brain makes sense of sound, and in particular how the auditory cortex facilitates listening in the noisy and complex everyday situations we experience. She combines behavioural testing in humans and animals with observation and perturbation of neural activity and with computational modelling. Her work focuses on the representation of auditory and audiovisual signals in the auditory cortex.
Stefan Bilbao (B.A. Physics, Harvard, 1992; M.Sc. and Ph.D. Electrical Engineering, Stanford, 1996 and 2001) is currently Professor of Acoustics and Audio Signal Processing in the Acoustics and Audio Group at the University of Edinburgh, and previously held positions at the Sonic Arts Research Centre, Queen's University Belfast, and the Stanford Space Telecommunications and Radioscience Laboratory. He led the NESS (Next Generation Sound Synthesis) and WRAM (Wave-based Room Acoustics Modeling) projects, both funded by the European Research Council and run jointly by the Acoustics and Audio Group and the Edinburgh Parallel Computing Centre at the University of Edinburgh between 2012 and 2018. He was born in Montreal, Quebec, Canada.
David Howard researches human voice production and has developed the Vocal Tract Organ for music performance as well as voice synthesis, including recreating the sound of a 3,000-year-old mummy in 2020. His Ph.D. in the 1980s involved building an analogue fundamental-frequency estimation device for cochlear implant users. Since then he has worked on tuning in a cappella (unaccompanied) choral singing, voice development in cathedral choristers, and the use of AR and VR for storytelling.
Detailed information on registration will follow soon.
Morning: 2nd Keynote and talk sessions
Afternoon: talk and poster sessions, 3rd Keynote
Evening: concert and conference dinner
Morning: 4th Keynote and talk sessions
Afternoon: talk and poster sessions, closing remarks