A revolution in music happened at Princeton 60 years ago, when some music-loving computer engineers happened upon some musicians who were enamored with a new IBM computer installed on the third floor. Their work changed the sound of music. In this five-part hidden history podcast, we unearth some trippy early computer music and show how it made possible the music we take for granted today.
In our epilogue episode, we look at how an engineering professor, Naomi Leonard, is collaborating with dancers to show how birds fly in a flock without bumping into each other; how robots can reflect our humanity back at us; and how other people’s rhythmic movements affect our nervous systems. Engineering faculty at Princeton are increasingly working with artists to create an array of projects, and in the process shining a light on how people use, and perceive, the marvels that engineers create. We wrap up this oral history podcast series with a story about how a computer musician got a late-night call from Stevie Wonder to talk shop, and in the process may have changed the way the music legend thought about digital voice synthesis.
Has digital music reached the point of diminishing returns? Has it all been done, and heard, before? At the start of a new millennium, a crew of Princeton engineers and musicians answered with a resounding no, building the now-famous Princeton Laptop Orchestra. As a Princeton music grad student in the late 1990s, Dan Trueman worked with his adviser, Perry Cook, on building an unorthodox digital instrument played with all the expression of a fiddle, but sounding more like a robot. And rather than running the sound through a single speaker pointed at the audience, they created a 360-degree ball of speakers, so that the device had the sonic presence of an acoustic instrument. When Trueman returned as a member of the faculty several years later, he and Cook (who had a joint appointment in engineering and music) set their minds to creating an entire ensemble of those unorthodox instruments. And in the process, they created a whole new genre of music and digital creative expression.
Paul Lansky is the most celebrated and musically influential of the computer musicians at Princeton, and it isn’t only because he was famously sampled by Radiohead on their classic album “Kid A.” His work expanded the boundaries of computer music and speech synthesis for art into territory far from the art’s musically difficult twelve-tone beginnings. In the words of current Princeton Music Professor Dan Trueman, “He invites you to listen however you want… It’s this place you go and you find your own way.” Or as his former student Frances White said, Lansky was able to bring “computer music into a much more open and beautiful place.” This episode is a celebration of the life’s work of Paul Lansky, as well as his collaboration with a Princeton engineer, Ken Steiglitz, that made much of that work possible. We’ll hear a wide sweep of his computer music from throughout his multifaceted career. And we’ll look at Lansky’s work building software, as well as the similar efforts of fellow composer Barry Vercoe, whose CSound technology left a lasting imprint on software musicians still use today.
Imagine using a computer to synthesize music, but not being able to hear it as you build it. That’s how it was in the 1960s: musicians heard what they were composing only in their mind’s ear, until the project, usually riddled with mistakes, was finished and processed at a far-off lab. This presented a challenge to the Princeton interdisciplinary team of engineer Ken Steiglitz and composer Godfrey Winham. They worked to build a device that would translate the ones and zeros generated by the IBM into analog sound, the only form of sound human beings can hear. The work they did together represented a watershed in the use of computers as a tool to create music. Winham saw the potential of the computer as a musical device, and spent his best years building tools to make the giant machine more user-friendly to musicians. And Steiglitz was uniquely positioned to help Winham realize his vision. This episode is the poignant story of their teamwork, as well as of the community of composers who created a wild batch of music on the IBM, music that has long been forgotten. But we’ve found it, and there are lots of clips of that music in this episode. We’ll take a detailed look at how humans are able to hear digital music. And we’ll explore the amazing story of Godfrey Winham, Princeton’s first recipient of a doctorate in music composition. Beyond his advances in music generation software, digital speech synthesis, and the development of reverb for art’s sake, he was a fascinating character.
When the Computer Center opened along with the Engineering Quadrangle at Princeton in 1962, who knew that the Music Department would be one of its biggest users? The composers were there at all hours, punching their cards and running huge jobs overnight on the room-sized, silent IBM 7090. Unable to hear what they were creating, listening only to the music in their minds, these classical composers managed to synthesize some of the trippiest music you’ll ever hear. But it was also the sound of progress, as they broke new ground in how digital music is created. Some of their advances live on to this day in music synthesis software. Much of the music you’ll hear on this episode was created by James K. Randall, the Princeton music professor who is credited with showing the computer’s early promise for creating nuanced music.
This episode is the story of what happened when a Princeton composer, inspired to create some of the most challenging music ever written, decided it could be most reliably performed by a machine. His work to realize that machine led to the birth of the electronic synthesizer as a device upon which one could compose music. And it led, indirectly, to the digital music revolution. The device wasn’t a computer – it was an early analog synthesizer in Manhattan, co-owned by Princeton and Columbia. We’ll take you inside Milton Babbitt’s work with his “robot orchestra.” You’ll get to hear the music it made, and how Babbitt and the engineers who built it carved out a path that would lead to digital music as we know it today.
A revolution in music happened in the Princeton Engineering Quadrangle, but chances are, you don’t know the story. Sixty years ago, some music-loving computer engineers happened upon some musicians who were enamored with a new computer installed on the third floor. The work they did together helped turn the computer – at the time a hulking, silent machine – into a tool to produce music. Their innovations made it easier to hear that music, no mean feat back then. Then they made it possible for a computer to make that music better, more nuanced. And they helped make it possible for computers to synthesize speech. What computers are able to do today to help musicians realize their vision owes a lot to the work done at Princeton. Much of this history has been effectively lost, gathering dust in far-off libraries. And the music they made has been largely forgotten as well. Over the five episodes of this series, we will tell that story. You’ll get to meet some fascinating people. And you’ll get to hear that music. It’s the sound of history being made. Engineers thrive on collaborations and intersections, and their collaborations with artists continue to this day. We look forward to sharing “Composers & Computers” with you in early May.