Come listen to a WUU service! A service woven of 7 strands: the songs of John Chowning, the piano music of David Hamilton, the poetry of Kim McHugh, a From the Heart by Donna Stanford, a contemplation on the natural world, crystal singing bowls played by Liz Wiley, and a brief moment from our WUU history.
Welcome, Greeting: Nan Hart, Worship Associate
Announcements: Liz Wiley
Chris Mooney, Music Director
Heidi Sousa, piano
David Hamilton, piano
Liz Wiley, crystal singing bowls
John Chowning, guitar
Thank you for listening. For more information about the Williamsburg Unitarian Universalists, or to join us on Sunday mornings, visit www.wuu.org. Permission to reprint, podcast, and/or stream the music in this service obtained from ONE LICENSE with license #A-735438. All rights reserved.
Duration: 01:00:07 - Documentary: John Chowning, a portrait - by François Bonnet - A pioneer of sound synthesis by frequency modulation, John Chowning revolutionized electronic music by combining scientific research and artistic creation, notably at Stanford and in connection with IRCAM. - Produced by Alexandre Bazin
Episode 146: Chapter 07, Computer Music Basics. Works Recommended from my book, Electronic and Experimental Music.
Welcome to the Archive of Electronic Music. This is Thom Holmes. This podcast is produced as a companion to my book, Electronic and Experimental Music, published by Routledge. Each of these episodes corresponds to a chapter in the text and an associated list of recommended works, also called Listen in the text. They provide listening examples of vintage electronic works featured in the text. The works themselves can be enjoyed without the book and I hope that they stand as a chronological survey of important works in the history of electronic music. Be sure to tune in to other episodes of the podcast where we explore a wide range of electronic music in many styles and genres, all drawn from my archive of vintage recordings. There is a complete playlist for this episode on the website for the podcast. Let's get started with the listening guide to Chapter 07, Computer Music Basics, from my book Electronic and Experimental Music.
Playlist: Early Computer Synthesis (track time / start time)
Introduction by Thom Holmes (01:30 / 00:00)
1. Max Mathews, "Numerology" (1960). Direct computer synthesis using an IBM 7090 mainframe computer and the Music III programming language. (02:45 / 01:32)
2. James Tenney, "Analog #1: Noise Study" (1961). Direct synthesis and filtering of noise bands at Bell Labs' facilities. (04:24 / 04:04)
3. Lejaren Hiller, "Computer Cantata" (third movement) (1963). Direct computer synthesis using an IBM 7094 mainframe computer and the Musicomp programming language. (05:41 / 08:28)
4. Jean-Claude Risset, "Mutations I" (1969). Used frequency modulation. (10:23 / 14:06)
5. Charles Dodge, "The Earth's Magnetic Field" (Untitled, part 1) (1970). Used an IBM mainframe computer and the Music 4BF programming language to convert geophysical data regarding the Earth's magnetic field into music. (14:00 / 24:28)
6. Laurie Spiegel, "Appalachian Grove I" (1974). Used the Groove program at Bell Labs. (05:23 / 38:22)
7. Curtis Roads, "Prototype" (1975). Used granular synthesis. (06:11 / 43:48)
8. John Chowning, "Stria" (1977). Used the composer's patented FM synthesis algorithms. (05:14 / 50:00)
9. Jean-Baptiste Barriere, "Chreode" (1983). Granular synthesis using the Chant program at IRCAM; computer-controlled organization of material, a grammar of musical processes prepared with IRCAM's Formes software. (09:24 / 55:10)
10. Barry Truax, "Riverrun" (1986). Composed using only granulated sampled sound, using Truax's real-time PODX system. (19:42 / 01:04:30)
Additional opening, closing, and other incidental music by Thom Holmes.
My Books/eBooks: Electronic and Experimental Music, sixth edition, Routledge 2020. Also, Sound Art: Concepts and Practices, first edition, Routledge 2022. See my companion blog that I write for the Bob Moog Foundation. For a transcript, please see my blog, Noise and Notations. Original music by Thom Holmes can be found on iTunes and Bandcamp.
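The FM technique behind "Stria" above generates rich, time-varying spectra from just two sinusoidal oscillators: y(t) = sin(2πf_c·t + I·sin(2πf_m·t)), where the modulation index I controls spectral richness. A minimal sketch in pure Python (parameter values are illustrative, not those used in any particular piece):

```python
import math

def fm_tone(fc, fm, index, dur=0.5, sr=44100):
    """Simple FM: y[n] = sin(2*pi*fc*n/sr + index * sin(2*pi*fm*n/sr))."""
    return [math.sin(2 * math.pi * fc * n / sr
                     + index * math.sin(2 * math.pi * fm * n / sr))
            for n in range(int(dur * sr))]

# A harmonic spectrum results when fc/fm is a simple ratio; the modulation
# index controls how many sidebands (fc +/- k*fm) carry significant energy.
tone = fm_tone(fc=440.0, fm=220.0, index=3.0)
```

With index set to 0 the modulator vanishes and the output reduces to a plain sine wave, which is a quick sanity check when experimenting.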
We talk a bit about musical inspiration with a slightly confused / jetlagged Jouni, and Niklas has finally gotten around to reading up enough on algorithmic reverbs to talk a little about them. And quite a bit about phase. A frightful amount of phase, actually. Come along!
Link list:
1.) Nick Cave & The Bad Seeds - https://www.nickcave.com/
2.) EMT-250 - https://www.vintagedigital.com.au/emt-250-digital-reverb/
3.) John Chowning - https://hub.yamaha.com/keyboards/synthesizers/discovering-digital-fm-john-chowning-remembers/
4.) Schroeder - https://ccrma.stanford.edu/~jos/pasp/Schroeder_Reverberator_called_JCRev.html
5.) More on Schroeder at Valhalla - https://www.vintagedigital.com.au/emt-250-digital-reverb/
6.) Valhalla, Reverb Design - https://valhalladsp.com/2021/09/23/getting-started-with-reverb-design-part-3-online-resources/
7.) Venus Theory - https://www.youtube.com/watch?v=7hEw9tIztzY
8.) Niklas's highpass/lowpass film - https://www.youtube.com/watch?v=cVxP4lFh4OI
9.) Allpass filter - https://www.abletonlessons.com/music-production-tips-and-tricks/understanding-all-pass-filters-a-comprehensive-guide
Hosted on Acast. See acast.com/privacy for more information.
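The Schroeder reverberators and allpass filters linked above (JCRev is in fact named after John Chowning) are built from simple recursive sections. A minimal sketch of one Schroeder allpass section, y[n] = -g·x[n] + x[n-d] + g·y[n-d], which passes all frequencies at equal gain while smearing the time response; delay length and gain here are illustrative values, not those of any published reverb:

```python
def schroeder_allpass(x, delay, g):
    """Schroeder allpass section: y[n] = -g*x[n] + x[n-delay] + g*y[n-delay]."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        x_d = x[n - delay] if n >= delay else 0.0
        y_d = y[n - delay] if n >= delay else 0.0
        y[n] = -g * x[n] + x_d + g * y_d
    return y

# Impulse response: a leading echo of -g, then 1 - g**2 at the delay time,
# followed by an exponentially decaying echo train.
impulse = [1.0] + [0.0] * 999
ir = schroeder_allpass(impulse, delay=113, g=0.7)
```

Chaining several of these sections with mutually prime delay lengths is the classic way to build up echo density without colouring the steady-state spectrum.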
Algorithms have been used to compose music for centuries: they are present from the first musical automata of the 18th and 19th centuries up to today's live coding scene.
You have heard:
Manitutshu (A New Algorithm)... Lately, Bass and NewElectro, Attack Pulse Hat / Mark Fell. Editions Mego (2011)
Neural Synthesis No. 9 (1994) / David Tudor. Lovely Music (1995)
Session and presentation by Jesús Jara on Live Coding (2023). Live audio from the event held during the presentation of "La Biblioteca invita. Nuevas músicas. Live Coding" at the Fundación Juan March in Madrid, 8 March 2023
Turenas (1972) / John Chowning. [Created at the Center for Computer Research in Music and Acoustics, Stanford]. WERGO (1988)
Selected bibliography:
AMES, Charles, "Automated Composition in Retrospect: 1956-1986". Leonardo, vol. 20, no. 2 (1987), pp. 169-185*
COLLINS, Nick, "Live Coding of Consequence". Leonardo, vol. 44, no. 3 (2011), pp. 207-230*
HARLEY, James, "Generative Processes in Algorithmic Composition: Chaos and Music". Leonardo, vol. 28, no. 3 (1995), pp. 221-224*
HOLMES, Thom, Electronic and Experimental Music: Technology, Music, and Culture. Routledge, 2008*
LEDESMA, Eduardo, "The Poetics and Politics of Computer Code in Latin America: Codework, Code Art, and Live Coding". Revista de Estudios Hispánicos, vol. 49, no. 1 (2015), pp. 91-120
LEWIS, George E., "Too Many Notes: Computers, Complexity, and Culture in Voyager". Leonardo Music Journal, vol. 10 (2000), pp. 33-39*
MAGNUSSON, Thor, "Algorithms as Scores: Coding Live Music". Leonardo Music Journal, vol. 21 (2011), pp. 19-23*
MCLEAN, Alex and Roger T. Dean (eds.), The Oxford Handbook of Algorithmic Music. Oxford University Press, 2018*
MONTILLA, César, El uso de algoritmos y computadoras para crear música. Composer's official website, consulted 20 June 2023: [PDF]
*Document available for consultation in the New Music Room of the Library and Research Support Center of the Fundación Juan March
In May of 1983, the world of synthesizers and electronic music as we knew it changed forever with the launch of the Yamaha DX7. To celebrate 40 years since its launch, Rob Puricelli spoke to Dr John Chowning, the developer of FM synthesis, Dave Bristow and Gary Leuenberger, sound designers for the original DX7, and Manny Fernandez, who has worked on all of Yamaha's FM projects from the Mk.II DX7 through to today's Montage M series. See the Show Notes for further details.
Chapters
00:00 - Introduction
01:55 - First Experiences Of The DX7
12:49 - Did The DX7 Meet Expectations?
16:57 - The Feedback Loop
17:51 - Creating And Sharing Sounds
22:47 - A Career From Creating Patches
27:55 - Sound Design Using FM
31:36 - Hearing Your Own Sounds
34:26 - Working With Don Lewis
44:26 - Demonstrating The DX7
57:00 - FM Synthesis 40 Years On
01:07:12 - Formant Shaping And The Future Of FM
Dr John Chowning Biog
Born in Salem, New Jersey in 1934, John Chowning spent his school years in Wilmington, Delaware. Following military service and four years at Wittenberg University in Ohio, he studied composition in Paris with Nadia Boulanger. He received a doctorate in composition (DMA) from Stanford University in 1966, where he studied with Leland Smith. Chowning discovered the frequency modulation (FM) synthesis algorithm in 1967. This breakthrough in the synthesis of timbres allowed a very simple yet elegant way of creating and controlling time-varying spectra. In 1973 Stanford University licensed the FM synthesis patent to Yamaha in Japan, leading to the most successful synthesis engine in the history of electronic musical instruments. He taught computer sound synthesis and composition at Stanford University's Department of Music. In 1974, with John Grey, James (Andy) Moorer, Loren Rush and Leland Smith, he founded the Center for Computer Research in Music and Acoustics (CCRMA), which remains one of the leading centres for computer music and related research.
Although he retired in 1996, he has remained in contact with CCRMA activities. Chowning was elected to the American Academy of Arts and Sciences in 1988 and awarded an honorary Doctor of Music by Wittenberg University in 1990. The French Ministre de la Culture awarded him the Diplôme d'Officier dans l'Ordre des Arts et Lettres in 1995. He received the Doctorat Honoris Causa from the Université de la Méditerranée in 2002, from Queen's University in 2010, and from Hamburg University in 2016, and was a Laureate of the Giga-Hertz Award in 2013.
Dave Bristow Biog
Dave was born in London and worked as a professional keyboard player, recording and touring internationally with a variety of artists including Polyphony, Slender Loris, June Tabor, Tallis and 2nd Vision. Active in synthesizer development, he played a central role in voicing the Yamaha DX7 synthesizer and is internationally recognized as one of the important contributors to the development and voicing of FM synthesis, co-authoring a textbook on the subject with Dr John Chowning. He spent three years at IRCAM in Paris, running a MIDI and synthesis studio working with contemporary music composers and artists, then moved to the United States in the 1990s to work for Emu Systems, Inc. on sampling and filter-based synthesizers. In 2002, he began working again with Yamaha, developing ringtones and system alert sounds for the SMAF audio chip series used in cell phones and mobile devices. He has been an instructor at Shoreline Community College, teaching electronic music production and synthesis for ten years, but still finds plenty of time for composing and playing piano with the RedShift jazz quartet and developing his interest in computer arts.
Gary Leuenberger Biog
Gary started in music at a young age and, in 1975, founded G. Leuenberger & Co. in San Francisco. It soon became one of the world's largest retailers of pianos, synthesizers and electronic keyboards. In 1980 he started working with Yamaha as part of their product development team.
It was through this that he was recruited, along with the likes of Dave Bristow and Don Lewis, to create the factory presets for the DX7. Gary's most famous, or infamous, patch was the legendary E.Piano 1, which became at once one of the most popular and most despised sounds ever! Nevertheless, his association with Yamaha continued until 2000, at which point Gary went back into education, gaining his Bachelor of Music and Master's in Classical Piano Performance from San Francisco State University in 2007. Since then, he has taught electronic music at SFSU and gives private tutoring to budding musicians of all ages.
Manny Fernandez Biog
Dr. Manny Fernandez has been involved in synthesizer programming and development with many manufacturers for over 35 years. Initially self-taught prior to traditional university study of analogue synthesis, in the late 1970s and early 1980s the emerging digital synthesis techniques caught his attention with their expanded timbral possibilities. He acquired a DX7 in the fall of 1983 and, using Dr. Chowning's original academic articles as a guide, began exploring FM synthesis in depth. In 1987 he began his relationship with Yamaha, programming for a wide range of their synthesizers through the years to the current Montage M. Acknowledged as one of the world's foremost FM synthesists, with extensive experience in physical modelling synthesis as well, his programming approach is to create unique and dynamic timbres with interesting yet useful real-time controller implementations.
Rob Puricelli Biog
Rob Puricelli is a Music Technologist and Instructional Designer who has a healthy obsession with classic synthesizers and their history. In conjunction with former Fairlight Studio Manager Peter Wielk, he fixes and restores Fairlight CMIs so that they can enjoy prolonged and productive lives with new owners.
He also writes reviews and articles for Sound On Sound, his website Failed Muso, and other music-related publications, as well as hosting a weekly livestream on YouTube for the Pro Synth Network and guesting on numerous music technology podcasts and shows. He also works alongside a number of manufacturers, demonstrating their products and lecturing at various educational and vocational establishments about music technology.
www.failedmuso.com
Twitter: @failedmuso
Instagram: @failedmuso
Facebook: https://www.facebook.com/failedmuso/
This episode is sponsored by Berlin-based pro-audio company HOLOPLOT, which features the multi-award-winning X1 Matrix Array. X1 is software-driven, combining 3D Audio-Beamforming and Wave Field Synthesis to achieve authentic sound localisation and complete control over sound in both the vertical and horizontal axes. HOLOPLOT is spearheading a revolution in sound control, enabling the positioning of virtual loudspeakers within a space and allowing for a completely new way of designing and experiencing immersive audio on a large scale. To find out more, visit holoplot.com.
In this episode of the Immersive Audio Podcast, Monica Bolles is joined by Les Stuck, musician and Senior Sound Technologist at Meow Wolf, from New Mexico, US. Les began working in spatial audio while working for the Ensemble Modern and the Frankfurt Ballet in Frankfurt, Germany. He designed the touring six-channel sound system for Frank Zappa's Yellow Shark tour, which included a six-channel ring microphone. He then worked at IRCAM in Paris, where he built several spatializers in Max/FTS: a six-channel version for the premiere of Pierre Boulez's ...explosante-fixe..., an unusual eight-channel version specifically adapted to classical opera houses for Philippe Manoury's opera 60e Parallèle, and a signal-controlled panner that allowed extremely fast movement. He designed a seven-channel sound system at Mills College that featured an overhead speaker, and built a variety of spatializers for students and guest composers. To celebrate the 50th anniversary of John Chowning's seminal work on the digital simulation of sound spatialization, Les realized a version of his algorithm for release with Max/MSP in 2021, including panned reverb and the Doppler effect, all controlled at signal rate. Currently Les works at Meow Wolf, where he designs interactive sound installations and acoustical treatments.
He has developed several spatial plugins for Ableton Live, which typically include a binaural output to preview the results in headphones before going on-site. He led a collaboration with Spatial, Inc. for Meow Wolf's installation at South by Southwest, and did extensive testing of Holoplot speakers for a future Meow Wolf project. Les talks about his extensive career working with spatial audio since the 1980s, including projects with Frank Zappa, IRCAM and Cycling74, and we dive into the topic of interactive spatial audio for physical installations.
This episode was produced by Oliver Kadel and Emma Rees and included music by Rhythm Scott. For extended show notes and more information on this episode, go to https://immersiveaudiopodcast.com/episode-82-les-stuck-meow-wolf/
If you enjoy the podcast and would like to show your support, please consider supporting us on Patreon. Not only are you supporting us, but you will also get special access to bonus content and much more. Find out more on our official Patreon page - www.patreon.com/immersiveaudiopodcast We thank you kindly in advance!
We want to hear from you! We value our community and would appreciate it if you would take our very quick survey and help us make the Immersive Audio Podcast even better: surveymonkey.co.uk/r/3Y9B2MJ Thank you!
You can follow the podcast on Twitter @IAudioPodcast for regular updates and content, or get in touch via podcast@1618digital.com immersiveaudiopodcast.com
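Chowning's spatialization work combined amplitude panning, distance-dependent reverberation, and a Doppler shift for moving sources. A sketch of the standard Doppler frequency-shift formula only, not of Chowning's actual signal-rate implementation:

```python
def doppler_shift(f_source, v_radial, c=343.0):
    """Perceived frequency for a source with radial velocity v_radial
    (m/s, positive = approaching the listener); c is the speed of sound."""
    return f_source * c / (c - v_radial)

# A 440 Hz source approaching at 34.3 m/s (10% of the speed of sound)
# is heard roughly 11% sharp; a receding source is heard flat.
f_heard = doppler_shift(440.0, 34.3)
```

In a real spatializer this shift is produced implicitly by a variable delay line whose length tracks the source-listener distance, rather than by resampling to an explicit target frequency.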
Daria Semegen's chamber, orchestral, vocal and electronic music with dance and film tends toward the experimental. Her first score for instruments and musique concrète tape is from 1965, followed by tour de force electronic music works she crafted with classic analog studio techniques. Her work was featured in articles, books and in E. Hinkle-Turner's 1991 doctoral dissertation at Univ. of Illinois. In 1995 her music was the subject of an international seminar at King's College, Univ. of London. Semegen was honored along with digital music trailblazers Jean-Claude Risset and John Chowning at an international electroacoustic music conference in 2015. Alt-music's Forced Exposure called her electronic works "heterodyning stereo-heavy monsters as vital as any of the GRM/STEIM/SECAM Darmstadt output." Critic DJ Spooky wrote, "the dynamic range of sounds is absolutely refreshing, flying in the face of what's been going on in contemporary music culture."
Music: Vignette by Daria Semegen, performed by Cathy Callis; recording of Arc, by Daria Semegen (website)
Co-hosts: Joseph Bohigian and Niloufar Nourbakhsh
Follow us on Facebook, Instagram, and Twitter. ensembledecipher.com
Contact us at decipherists@ensembledecipher.com.
Decipher This! is produced by Joseph Bohigian; intro sounds by Eric Lemmon; outro music toy_3 by Eric Lemmon.
The introduction of the synthesizer brought with it many new types of sounds and a new way of thinking about creating instruments. This week on The Music History Project, hear from some of the legendary synth pioneers who creatively and scientifically invented some of the most iconic sounds in recent popular music, including Don Buchla, John Chowning, Suzanne Ciani, Bob Moog and Malcolm Cecil.
PEG interviews composer, producer, inventor and podcaster Kevin Stratton, the man behind "Definitive Music". Unique abilities have given Kevin a reputation for creating a definitive sound that allows artists and productions to stand apart. His commitment to developing sound that is fresh and innovative has made him a sought-after producer and composer in today's music. This passion has driven him through four Grammy-winning albums, developing unique sounds for such artists as Chicago, Toto, Stevie Wonder, Thomas Dolby, Van Halen and many others. Recently, Kevin has written and produced music for HBO Productions, Netflix and Amazon Studios, and has played on numerous tracks for Guitar Hero and Rock Revolution. Currently he is engaged with a number of national symphony orchestras for scoring and film work. His work with mobile entertainment productions and his background in 3D animation have also afforded him work as a producer with such clients as Pixar, Electronic Arts (EA) and Disney. Kevin began his film and television work on a number of film projects with Frank Serafine of Serafine FX (Santa Monica, CA), including the development and design of the soundtrack for the motion picture "Nightfall" (based on the novel by Isaac Asimov) and sound design on films such as "Short Circuit II" and "Star Trek IV: The Voyage Home", for which he received screen and production credits. Today, Kevin is active in numerous TV, film and special media projects. Kevin is a published author for a number of trade magazines and technical publications, including the Audio Engineering Society, Electronic Musician and Keyboard Magazine. Kevin studied sound synthesis and acoustic physics under the tutelage of Stanford University's Dr. John Chowning and others, at the University of Chicago, and was awarded a scholarship to the prestigious Berklee College of Music in Boston. Mr. Stratton was also honored to teach music composition and sound design at the Aspen Music Festival.
Kevin's work also supports charities including The Cancer Society, The Grammy Foundation and the MusiCares Foundation. Building on his work over the past three decades, Kevin has brought his skills to fruition through artist management, successful albums, sound design, as well as music supervision and placement in TV and film. In the end…those he works with say more than anything else…
https://kevinstratton.com/
Watch the Podcast on YouTube here: https://www.youtube.com/watch?v=r2pqwok3fZo
Watch on Twitch: https://www.twitch.tv/videos/1053468290
---
Send in a voice message: https://anchor.fm/phantom-electric/message
Support this podcast: https://anchor.fm/phantom-electric/support
Mark Fell is a computer musician and artist based in Rotherham. We chat about green tea, collaboration, algorithmic processes, remote performance during quarantine, fancy pliers, whether he or his son Rian Treanor is cooking dinner, working with John Chowning, and a lot more. I've been following Mark's work for years, and I'm very grateful for his generosity with his time. If you want some guidance on where to start with his discography, I'd highly suggest:
snd - Atavism
Multistability
anything from Sensate Focus
To everyone in the audio community worldwide, the name John Chowning means FM synthesis. But the man is no less extraordinary than his discovery. Co-founder of one of the most important centres for music research in the world, CCRMA (Center for Computer Research in Music and Acoustics) at Stanford University, John speaks about his approach to composition and a lifelong quest for the "artistic gesture."
On 19 April 1977, John Chowning patented a method of FM sound synthesis. Paolo Tortiglione told the story on WikiMusic.
We begin a multi-part look at the Center for Computer Research in Music and Acoustics (pronounced "karma") by looking at its birth within the Stanford Artificial Intelligence Lab (SAIL), the DC Power Lab (in the hills above Stanford), and the work of John Chowning and many others!
You can support Engineers & Enthusiasts through our Patreon - https://www.patreon.com/3MinModernist
Find out more at https://engineers--enthusiasts.pinecast.co
If you have any interest in the relationship between church and politics, don't miss this episode with the man who has done both for the majority of his career.
With our hearing we can immediately locate and identify sounds. To make meaningful use of this ability in a project, Dr. Paul Modler gives us an introduction to spatial sound. The Media Art Acoustics department (MK Akustik) of the Karlsruhe University of Arts and Design (Staatliche Hochschule für Gestaltung, HfG) works on electronic and electroacoustic music, sound installation and sonification. It is headed by Dr. Paul Modler, who in this conversation gave us an insight into room acoustics and techniques for spatial hearing over headphones. Paul Modler has just returned from a visit to Ars Electronica in Linz. A highlight of the festival of electronic arts was the Klangwolke, a story told with fireworks, machines, jets and boats on the Danube. The Prix Ars Electronica competition gave an insight into the current directions of the much-debated field of media art. After his engineering degree at the former University of Karlsruhe (now the Karlsruhe Institute of Technology, KIT), on signal processing and filter design for the Waveterm synthesizer by Palm Products GmbH (PPG), Paul Modler moved to the University of York, where he earned his doctorate in Music Technology, and from there was recruited into media art at the Hochschule für Gestaltung. His research interests also extend to multichannel sound, in particular Ambisonics, which after a long dry spell has even found its way into YouTube as a spatial audio format. MK Sound takes a broad, interdisciplinary look at questions of music creation, the definition of possible instruments, and technology, installation and performance. There are courses on analogue sound generation as well as engagement with new digital influences and the porting of analogue synthesizers to mobile devices, as with Korg.
The group is also inspired by visiting artists such as John Richards, in the direction of circuit bending. This leads to fascinating final projects such as Atmospheric Disturbances by Lorenz Schwarz, in which spatial sound was realized artistically with plasma loudspeakers. Interesting impulses also arise from collaboration with other institutes and universities: students from KIT often take part in projects. The recording took place in Studio 311 of MK Sound, where the group has installed a mobile sound dome (Klangdom) in order to work on Ambisonic techniques and use them musically. For control, the Zirkonium software is used here, as well as the "Spat" software of the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), along with other available spatialization tools. One aspect of this is the changing view of the loudspeaker, from a means to an end toward a musical instrument in its own right. The Karlsruhe University of Arts and Design is framed by, and housed in the same building as, the Museum of Contemporary Art and the ZKM – Center for Art and Media with its Media Museum. MK Sound naturally works closely with the ZKM's Institute for Music and Acoustics, headed by Prof. Ludger Brümmer. The institute offers a platform in particular for discussion of the musical, digital, electroacoustic field, and with the Klangdom in the ZKM Kubus it has an established reference platform for spatial sound. Together with the HfG, the inSonic Festival on spatial sound was organized there in 2015 and repeated as the inSonic Festival in December 2017. The institute's great breadth is also evident in frequent Kraftwerk concerts and frequent Linux Audio Conferences. Former Kraftwerk musician Florian Schneider-Esleben was appointed professor of media art and performance at the HfG in 1998. At the end of last year the institute also hosted the Strömungen symposium on artistic sonification.
Through our ears and body we perceive sound waves, insofar as they lie roughly within the hearing range of about 20 Hz to 20 kHz and at a sufficient, frequency-dependent level. If we associate meaning or a certain aesthetic with a sound, we may call it a tone that can be part of music. Part of our perception of acoustics is described by psychoacoustics, which analyzes very precisely the audibility of sounds and the effect of perception on humans. This analysis is what first made the success of lossy audio compression possible. For recording spatial sound, the positioning of the microphones plays a special role: since recording from all directions at a single point is not possible, microphones must be positioned at certain distances from one another, which discretizes the space. A particularly good example of the effects of discretization is the work of John Chowning, who patented frequency modulation synthesis for synthesizers as an outgrowth of spatial sound research. Here, at slightly different positions, a classic Soundfield microphone or an Ambeo VR microphone yields a completely different concert experience. With a stereo recording reproduced through loudspeakers, phantom sound sources appear around the loudspeakers, provided one is exactly in the sweet spot of the stereo triangle. Empirically, using additional drivers turned toward the wall, as in the Acoustimass system, produces a more immersive stereo impression. The spatial impression in the head arises first from intensity (level) differences and time-of-arrival differences between the ears, from which the brain reconstructs the virtual position of the sound sources.
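A common first-order model of the interaural time difference between the ears is Woodworth's spherical-head formula, ITD ≈ (r/c)(θ + sin θ). A small sketch, with an assumed average head radius of 8.75 cm and speed of sound of 343 m/s:

```python
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, c=343.0):
    """Woodworth ITD model for a spherical head: (r/c) * (theta + sin(theta)).
    azimuth_deg: source angle from straight ahead; head_radius in metres."""
    theta = math.radians(azimuth_deg)
    return head_radius / c * (theta + math.sin(theta))

# A source directly to one side (90 degrees) yields roughly 0.66 ms,
# close to the maximum interaural delay observed for human heads.
itd_side = itd_woodworth(90.0)
```

The model captures only the time-difference cue; the frequency-dependent level differences described in the text require the full head-related transfer function.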
Head and body shape also play a large and very individual role: depending on head size, the ears are at different distances from each other, the pinnae are shaped differently, and the shoulders are at different distances. This results in frequency-dependent intensity and delay differences that act as a filter, known as the Head-Related Transfer Function (HRTF). Taking this mapping into account leads to binaural recording and reproduction. A further perceptual cue is room sound, where spatial perception is enabled by the relationship to the room. In stereophony one must therefore clearly distinguish between loudspeaker recordings and headphone recordings, since reproduction over headphones requires the head-related transfer function to be taken into account. The path to multichannel sound systems led from stereophony first to quadraphony, for systems with four loudspeakers, which relative to the effort yielded only a limited gain in spatial sound while introducing further unwanted effects. Since no recording format for this system really succeeded commercially, it was not widely adopted. The closely related Dolby Surround and 5.1 systems, through slight changes to the approach, have by contrast become very well established in film and cinema. For film it was very important that the introduction of the central center loudspeaker considerably improved the spatial positioning of the actors, and that the use of a subwoofer, i.e. the LFE channel, also enabled cheaper immersive installations with satellite loudspeakers. As a great critic of quadraphony, Michael Gerzon developed the mathematically and physically grounded Ambisonics techniques in 1973, in order to record, store and reproduce spatial sound on an arbitrary number of loudspeakers.
While a zeroth-order system can be realized with a single omnidirectional microphone and a single omnidirectional loudspeaker, from first order onward at least eight loudspeakers are required for a sensible reproduction. Unfortunately, the method would require many microphones to be positioned coincidentally at a single point, which is not feasible with conventional recording technology; Gerzon therefore developed special microphone configurations that can reconstruct the coincident signal. In meteorology there are ultrasonic anemometers that can actually measure the air movement at a point in space, but at present this is only possible as a spatial average over the measurement volume, up to about 200 times per second, i.e. at most into the infrasound range. An early, famous, and controversial spatial-sound installation was the Philips Pavilion with Poème électronique at the Expo 58 world's fair in Brussels, where loudspeakers mounted along hyperbolic trajectories were used as discrete, wandering sound sources. For Expo 70 in Osaka, Karlheinz Stockhausen designed the spherical Kugelauditorium for the German pavilion, in which the loudspeakers could be steered with a rotary lever. A related technique is Vector Base Amplitude Panning (VBAP), worked out scientifically by Ville Pulkki in 1997. In contrast to these earlier installations, Ambisonic methods demand very regular loudspeaker positions, since the method can ideally be interpreted as a Fourier synthesis on a sphere. In practice, only a few exactly equidistant point sets exist on a sphere, based on the Platonic solids; moreover, full spheres are an architectural challenge and, given our limited vertical localization ability, of only limited benefit.
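The VBAP idea can be illustrated in the two-dimensional case: the source direction is written as a linear combination of the two nearest loudspeaker directions, the resulting 2x2 system is solved for the gains, and the gains are normalized to constant power. A minimal sketch under those assumptions (function name and speaker angles are illustrative):

```python
import math

def vbap_2d(source_az, spk1_az, spk2_az):
    """2-D vector base amplitude panning (after Pulkki 1997): solve
    p = g1*l1 + g2*l2 for the gains, then normalise to unit power."""
    def unit(az_deg):
        a = math.radians(az_deg)
        return (math.cos(a), math.sin(a))
    (l1x, l1y), (l2x, l2y) = unit(spk1_az), unit(spk2_az)
    px, py = unit(source_az)
    det = l1x * l2y - l1y * l2x          # invert the 2x2 speaker basis
    g1 = (px * l2y - py * l2x) / det
    g2 = (l1x * py - l1y * px) / det
    norm = math.hypot(g1, g2)            # constant-power normalisation
    return g1 / norm, g2 / norm

# Source exactly between speakers at +/-30 degrees: equal gains.
g1, g2 = vbap_2d(0.0, -30.0, 30.0)
```

For a source exactly between the two loudspeakers the gains come out equal with unit total power, which is the constant-power panning law the normalization enforces.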
Loudspeakers are therefore installed only in an upper hemisphere, with fewer loudspeakers per ring toward the top. The Ambisonic spatial-sound demonstration is an excerpt from the piece "Parallel" by Paul Modler, which in live performance additionally drives movable horns and a wave-field array. In contrast to multi-channel systems, binaural spatial sound takes the head-related transfer function into account and is intended only for listening over headphones. Binaural signals can be produced with dummy-head, in-ear, or Original-Kopf microphones (OKM). Alternatively, sound sources can be rendered synthetically by computing their effect at the ears via the HRTF. To measure an individual HRTF, microphones are placed in the ears and a loudspeaker is moved robotically to various positions around the test subject. The loudspeaker then plays clicks or chirps to determine the impulse response of the system, the head-related impulse response. The HRTF is then the Fourier transform of this impulse response. Alternatively, and at lower fidelity, a hemispherical loudspeaker array such as the Klangdom can be used instead of a slow robotic rig. Impulse responses exist in principle only for a limited number of filter points; intermediate points can be interpolated following the VBAP principle, so that sounds from arbitrary directions between points of the discretization grid can be rendered. A remaining challenge is head movement, which must be tracked with head trackers for an immersive impression: one must be able to turn toward the sound. The same challenge arises in virtual reality, where head movement must also be reflected immediately in the rendering.
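The synthetic binaural rendering described above amounts to convolving a dry mono signal with the measured head-related impulse response for each ear. A minimal sketch with toy impulse responses (real HRIRs would come from a measurement like the one described; the 3-sample delay and 0.5 attenuation below are invented to mimic a source on the listener's left):

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono signal binaurally by convolving it with a
    head-related impulse response pair (one filter per ear)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])   # shape: (2, len(mono)+len(hrir)-1)

# Toy HRIRs of equal length: the right ear receives the signal
# 3 samples later and attenuated, as for a source on the left.
mono = np.random.default_rng(0).standard_normal(1000)
hrir_l = np.array([1.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.0, 0.5])
out = binauralize(mono, hrir_l, hrir_r)
```

A head-tracked renderer would re-select (or interpolate, VBAP-style) the HRIR pair as the listener turns, which is exactly the grid-interpolation problem the paragraph describes.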
The spatial rendering of sounds also opens up new possibilities in sonification, allowing information to be presented not only as distinguishable sounds but also as spatially localized ones. One must keep in mind that visual impressions can distort the perception of acoustic events. And as the underlying models grow more complex, listeners must also learn how to interpret a sonification.

Literature and further information:
S. Carlile: Psychoacoustics, Sonification Handbook, Logos Publishing House, 2011.
B. N. Walker, M. A. Nees: Theory of Sonification, Sonification Handbook, Logos Publishing House, 2011.
A. Hunt, T. Hermann: Interactive Sonification, Sonification Handbook, Logos Publishing House, 2011.
M. A. Gerzon: Periphony: With-height sound reproduction, Journal of the Audio Engineering Society 21.1: 2-10, 1973.
V. Pulkki: Virtual sound source positioning using vector base amplitude panning, Journal of the Audio Engineering Society 45.6: 456-466, 1997.
M. Noisternig, T. Musil, A. Sontacchi, R. Höldrich: 3D binaural sound reproduction using a virtual ambisonic approach, VECIMS '03, 2003 IEEE International Symposium on Virtual Environments, Human-Computer Interfaces and Measurement Systems, 2003.

Podcasts:
M. Völter, R. Vlek: Synthesizers, Omega Tau Podcast, Episode 237, 2017.
T. Pritlove, U. Schöneberg: CRE238 – Neuronale Netze, CRE Podcast, Metaebene Personal Media, 2015.
M. Völter, C. Osendorfer, J. Bayer: Maschinelles Lernen und Neuronale Netze, Omega Tau Podcast, Episode 259, 2017.
S. Trauth: Klangdom, Funkenstrahlen Podcast, Episode 85, 2016.
P. Gräbel: Der Schall, Nussschale Podcast, Episode 16, 2017.
T. Pritlove, S. Brill: CRE206 – Das Ohr, CRE Podcast, Metaebene Personal Media, 2014.
S. Plahl: Der Klang einer Armbewegung, SWR2 Wissen, 2013.
John Chowning is the pioneer of FM synthesis, which he discovered by ear as a musician.
Kent Devereaux is the President and Chief Academic Officer of the New Hampshire Institute of Art (NHIA), a private, nonprofit, accredited college of the arts located in Manchester, New Hampshire, approximately one hour’s drive north of Boston. An accomplished educator and academic leader, Kent’s career has taken him around the world and back again. Before assuming the presidency at the New Hampshire Institute of Art in January 2015, Kent served as Professor and Chair of the Music Department at Cornish College of the Arts, where he also served as Artistic Director for the college’s presenting series, Cornish Presents, and where he co-founded and directed the Seattle Jazz Experience, a youth jazz festival—the latter earning him Downbeat magazine’s Jazz Education Achievement Award in 2014 and Cornish College of the Arts Distinguished Alumni Award in 2015. Kent also served on the faculty of both the School of the Art Institute of Chicago and the California Institute of the Arts for many years. In addition to his experience in traditional academia, Kent spent over a decade working in the technology and online education sectors including stints as Senior Vice President of Editorial and Product Development at Encyclopedia Britannica, where he was instrumental in transforming that storied educational publisher from a print to online business model in the 1990s, and as Senior Vice President and Dean of Curriculum at Kaplan University, where during his tenure enrollment at the for-profit online university expanded from 350 to over 45,000 students. Kent’s collaborations with other artists have been presented around the world including performances at the Brooklyn Academy of Music’s Next Wave Festival, the London International Theatre Festival (LIFT), and elsewhere. Kent’s own work as a director, composer, and performance artist has also been presented at Chicago’s Steppenwolf Theatre, Seattle’s On the Boards, and Minneapolis’ Walker Arts Center, among other venues. 
Originally from California, Kent studied music composition with Gordon Mumma at the University of California at Santa Cruz; jazz piano, composition, and arranging with Art Lande, Anthony Braxton, Gil Evans, and Jim Knapp at Cornish College of the Arts in Seattle; and pursued further graduate studies at Stanford University with legendary computer music pioneer John Chowning. A passion for exploring the related arts also led Kent to Chicago, where he earned a Master of Fine Arts at the School of the Art Institute of Chicago (SAIC), and to Indonesia, as a Fulbright Fellow studying Javanese shadow puppetry and its music.
ZKM Art Prizes: Giga-Hertz Award 2013 | Award ceremony Sat, 30.11.2013. Since 2007, the highlight of the IMATRONIC festival has been the presentation of the Giga-Hertz Awards. This year the evening offers an exciting overview of artistic work in electronic media: from dance performances with noise music and sound art presentations, to spatial music in the ZKM's Klangdom, to live music and pieces produced in the SWR EXPERIMENTALSTUDIO, the audience can expect spectacular works and engaging presentations. The Giga-Hertz Awards for electronic music, each endowed with €10,000 and given jointly by the ZKM and the SWR EXPERIMENTALSTUDIO Freiburg, go in 2013 to John Chowning and Francis Dhomont. The production awards, each endowed with €8,000, honor young international artists whose recent compositions and productions in electronic music have attracted attention. This year's production award winners are Daniel Blinkhorn, Leo Hofmann, Alexander Schubert, Ying Wang, and Roque Rivas. This year's focus is on sound art, and the current significance of the field is acknowledged accordingly with three laureates: honored at the ZKM on Saturday, November 30, are the musique concrète pioneer Pierre Henry, the artist Evelina Rajca, and the Briton Anthony Elliott. /// Sat, 30.11.2013: The biggest four-day festival of electronic music in Germany presents the most recent developments in this area. This year the high point will again be the Giga-Hertz Awards and PIANO+. The Giga-Hertz Awards ceremony on November 30, 2013 offers viewers spectacular and fascinating presentations, and premiere performances and other concerts connecting electronic music and the piano are offered to the public over three days as part of PIANO+.
PIANO+ As a festival within the festival, PIANO+ forms a fixed part of IMATRONIC. The core theme of this concert series is the connection between electronic music and the piano. Twenty works, including premiere performances, will be presented in four concerts over three days, among them pieces by composers such as Alvin Lucier and Luc Ferrari, as well as new compositions by 2011 award winner Anthony Tan.
In film, sound is the partner to the image. The ultimate compliment to sound designers, mixers, and editors is when no one actually notices the work. Sound designer Ren Klyce brings a professional's view to cinematic sound as a subtle, supporting character to the image, and the reasons why it is so often misunderstood and underappreciated. Our work is not just about the aesthetics of understanding how sound and dialogue enhance a film creatively, but it requires an understanding of human audiology, the behavior of sound waves, and the use of a great deal of technology. In this talk, I will play some excerpts from some well-known films, such as The Social Network or The Girl with the Dragon Tattoo, and deconstruct how film sound tracks are made in collaboration with the director. Born in Kyoto Japan, Ren Klyce grew up in Mill Valley, California. He studied Electronic Music at UC Santa Cruz with Gordon Mumma, David Cope, and Peter Elsea and was trained in the traditional tape-based techniques of Musique Concrete. After meeting John Chowning at a lecture series in 1983, Klyce enrolled in the summer workshop at the Center for Computer Research in Music and Acoustics (CCRMA) and composed three pieces on the original SAM Box. Because of his experiences in the Electronic Music course at UCSC, Klyce became increasingly interested in computer music and the use of multiple speakers for playback. He went on to design sound for films such as Se7en, Fight Club, Being John Malkovich, and Where The Wild Things Are. He has been nominated for five Academy Awards — most recently for the films The Social Network and The Girl with the Dragon Tattoo. He is currently working on the hit web series House of Cards and on the upcoming film Oblivion.
Lecture & musical demonstration by John Chowning, professor emeritus of music and founder of Stanford's Center for Computer Research in Music and Acoustics. He talks about his invention of FM synthesis, his musical compositions, and the Stanford environment that fostered his work.
Concert 7 of EuCuE Concert Series XXVI 1) Habitation/Home Sweet Home 12’ Ron Herrema 2) Med lekande kval 3’ Paulina Sundin 3) Faute de la musique 12’ Serena Alexander 4) Balagan 15 1/2’ Benjamin Thigpen 5) Dream Mechanics 13’ James Wyness 6) Turenas 11’ John Chowning
Panel Discussion with: John Chowning, Professor of Music Emeritus; Leonard Herzenberg, Professor of Genetics Emeritus; Cal Quate, Professor of Applied Physics Emeritus; Kathy Ku, Director, Office of Technology Licensing; and Niels Reimers.