
Interview with Laurie Spiegel


It is love at first sight and an encounter which will change her life. In 1973, she starts working at Bell Labs, learning to program that era's giant computers to make music and images in ways that could only be dreamed of before. Meanwhile she is making a living doing performances, teaching, and composing scores for film and video productions. She shares the excitement of musical discovery and exploration, the sense of freedom and possibility, and the intensity of New York's downtown arts community of that era with like-minded creative friends including Suzanne Ciani, Pauline Oliveros, Philip Glass, Terry Riley and Steve Reich. She discovers that computer programming is just as much fun as the composing itself.

In the late 1970s Jef Raskin gives her his prototype Apple II 48k when he is done with it and she becomes active in the small-computer counterculture, helping produce the alphaSyntauri, often cited as the first professionally usable computer system for music at an affordable price. She sends audio across the country (slowly, by 300 baud modem) and writes about digital distribution of music as early as 1981. She is also consulting for Eventide, teaching at Cooper Union and NYU, and then moves on to Toronto to direct software development for the McLeyvier, an ambitious but ill-fated computer-controlled analog synth. During this period she also composes quite a few written pieces for "old-fashioned" instruments, few of which have ever been heard outside her loft.

In 1985 she buys herself a Mac 512k and immediately wants to push sounds around with the mouse. Just for herself, she writes "Music Mouse", turning the Mac itself into an instrument that allows intuitive play, and she refines it again and again, combining programming and composing into a single process. Purely by word of mouth, Music Mouse quickly turns into an in-demand application, among the earliest music software available to regular consumers. Even today, its easy interface and many sound processing options make it both alluring and fun.

As "Music Mouse" turns into a cult product, Spiegel slowly withdraws from the New York scene and turns towards different challenges. In a world with many urgent issues, she simply does not find music "worthwhile enough to do". As she points out in this interview, which was conducted for a long article in the German magazine "Beat", "working up to actually recording has become a challenge". Though she still often improvises and enjoys making music for herself, she rarely records or writes down what she creates any more. Maybe, then, her many loose sampler contributions and four albums will constitute the recorded legacy of an artist who, despite or perhaps because of her quiet impact on the scene, has remained a source of inspiration.

Hi! How are you? Where are you?
You'll have to ask my dog how I am to get an objective answer on that. I'm in my loft in New York.

What's on your schedule at the moment?

Right now? This interview.

As I've grown up in a world in which synthesizers have become just as natural as a violin, it is hard to understand that people thought it strange at the time that you should first be playing the guitar or other "folk" instruments and then turn towards electronic instruments. How unusual was it when you started out? How did the transition feel to you?
At the time electronics in general, and computers more specifically, were thought of as dehumanizing, anti-intuitive and in general counter to how music was conceptualized qualitatively. They inherited in people's minds the qualities attributed to those who possessed and controlled such technology. Who had computers? Banks, the government and the biggest of businesses. For most people the idea of computers represented distance, powerlessness, frustration, bureaucracy, clinicality, emotionlessness. The college I went to did not have, and probably would not have wanted, any kind of computer. To a lesser degree than computers, all electronics had that kind of vibe in most people's minds.

Only a very few people had even thought of the idea of using electronic technology to make music, or if they thought of it at all they wrote off the idea as undesirable. I certainly never heard of the theremin or ondes Martenot or others of the few electronic instruments that already existed until I was in my 20s, let alone had access to any. For most people you mentioned it to, using electronics for music was a completely new and foreign concept that required extensive explanation.

The first time I saw a synth (a side note: we didn't like that word then, because "synthetic" implied false or artificial instead of real music made on real instruments), a Buchla modular in Mort Subotnick's old studio over the Bleecker Street Cinema, it was a mind blow and I fell madly in love with it. After starting to work with it I began hearing everything differently, music, traffic noise... It was a revelation. Of course that was unlike most of today's "synths", not being based on a keyboard model or such concepts as notes. That was an instrument meant for working with the nature of sound itself. When I tried to communicate my excitement to others it usually fell flat.

With the complex user interfaces at the time- did you have to be something of a technician or a mathematician to work as a composer of electronic music?
You think those were complex? The early modular instruments were hands on, random access and entirely intuitive in the same ways as traditional acoustic instruments. Computers were more complex, but the software was much simpler, and because each person had to write their own you knew it thoroughly and controlled everything. Today's interfaces with their nested hierarchical menu systems and inflexible hard-wired musical constraints are the difficult complicated ones. (I say "hard-wired" but I mean "hard-coded" from the user's perspective, because few are open source and few users know how to code for these complicated modern machines that have so many entrenched layers of software.)

Yes, by and large we all had to be our own technicians, though it was also true that some more established composers hired assistants who did tech work for them. I always thought they were at a disadvantage. To really get into the essence of what an instrument can do you have to understand it deeply and thoroughly. Otherwise you risk failing to perceive what its real strengths and character are. Remember, we did not yet have the frustrating separation of sound definition from composing for the sounds defined that was enforced on everyone by the design of the MIDI standard in the early 80s. The MIDI protocol design, created by people relatively new to the art and still thinking in very traditional ways about musical structure, unnecessarily assumed, and then enforced on us all, the idea that you compose for predefined fixed instruments. That often mandated doing a complicated set of kluges to make it seem like the nature of an instrument was evolving while it was being played. But that kind of thing was natural to do on early synths if you knew your technology.

I find the problems that are the most frustratingly over-complex these days often result from being stuck in design models that make unnecessary assumptions about musical structure that you then have to find workarounds for.

It made sense that composers pretty much all wrote their own software or built their own circuits in the early days. Part of your individual identity as a creative artist is that your working methods and working set-up are highly personal. It would have seemed absurd in those days to expect another person, especially a group of people who were not musicians and who didn't even know you or your music, to configure a computer or synth for you. Such work was an integral part of the composing process. By comparison, working now with fixed, pre-written, off-the-shelf software or hardware units has a very stiff and somewhat off-putting feeling. The music you'd find in an early technology would lead you to modify that technology to make it easier for the kind of music you found in it to emerge. So there was a back-and-forth. You'd evolve both the music and the tools so they worked best together on a specific piece. The music was always influencing me to alter the technology to serve it better, and conversely, the technology would suggest things to try and lead me to new places in the music.

In a previous interview, you mentioned that one of the charms of the Buchla was its capacity to use "real sounds" and play them for people. So did your first interest lie rather in tapping the timbral potential around you than in creating unheard-of sounds from scratch?
I don't think you got my meaning there, which is a bit of a mind-blow, because it shows how much our assumptions have changed. By "real sounds" I didn't mean pre-existing sounds, as in musique concrète or captured sound effects. I meant sounds you can hear, as opposed to little pencil marks you make on a piece of staff paper that represent notes that you might have to wait years, or forever, to hear as "real sounds". That was certainly one of the biggest excitements of electronic music for me when I first tried it. You could work on music the way an artist does on a painting, seeing (hearing) what you were making while making it. That was not possible before, except for pieces you were writing for a single instrument that you played yourself and that were technically within your own abilities to play.

Keep in mind that at that time the idea of entering a note on a stave and hearing it right away as you do in any of today's notation software tools would have seemed as far fetched then as it would seem today to be writing a screenplay and have it appear fully staged and edited on your TV the moment you wrote each line. (This will probably be on the market by the time this gets published, right?)

Beyond being able to actually hear the final sounds while composing, the big thing for me was being able to create and interact in real time with sounds never heard before. That was absolutely awesome. Instead of thinking in terms of notes and measures, or of players needing the music written so they would have one hand free to turn the page every so many measures, the only limits on the sound were how much equipment you had and what kind. And this was something you could actively work to expand. The instrument being composed for was now the speaker cone, and theoretically you could do absolutely anything it could, which is pretty much anything the ear can take in.

Remember, analog instruments were large, rare, expensive and each one was custom and unique. Most of us only had access to them through shared studios. They were modular, and you could not only build complex things from the modules, like with Lego toys, you could also build or code up additional modules.

But there were whopping limitations too. The option of taking the instrument to the audience did not exist for most of the very few of us who used them. With early computers live performance was even more out of the question. Until personal computers had enough horsepower to generate sound in real time, you were extremely lucky to have a few hours of access a week, during off-hours, to a room-sized computer in a research lab.

Recording technology was also still rare and expensive, this being before audiocassettes, let alone digital media or public access to the internet. Reel-to-reel 2-track analog tape decks were generally the only way that music could be taken to people to be heard, and these were not all that common in private ownership either. To copy a tape you needed two of them, and doing it was both cumbersome and heavily degrading to the signal. Record companies still pretty much monopolized access to the audience via that medium, except for a very few brave souls such as Phil Glass and John Fahey, who were both among the earliest musicians able to establish their own small-run record labels.

The normal way for a composer to get music to an audience was to write a score on paper and then try to get someone to perform it, at which point you could finally hear your own piece for the first time. Then you'd hope the performers would be able to get it onto an LP, or that someone from a record company or a reviewer would be at the concert, so that a record company might read about your work and come to another concert if you could get one. When electronic technology made it possible to compose with actual sounds that went beyond what you could play by yourself on a conventional acoustic instrument, and to record the music and then play it for an audience, it was a truly major revolution. But it was already truly major to simply be able to hear what you composed in real sound, with your ears instead of your imagination.

You mentioned that you felt a "need for greater control, complexity, replicability, subtlety and precision writing". Was this the reason you started writing your own algorithms at the time?
Not really. Algorithms were a separate thing, a wonderful new and exciting dimension, to be able to create an ongoing musical process by just describing it. The concerns you mention here were nothing to do with that, but were frustrating limitations of the analog technology of that period.

Did I actually write what you quote here? I ask because there was plenty of subtlety in the analog synths. Lack of subtlety was never among the frustrations. The amount, kinds and complexity of control for a hardware analog synth were limited by the amount and nature of the hardware you had. There was also no memory, except that you could record the output on 2-track reel-to-reel tape. There was no way to get the exact same settings back again after you changed them, nothing like what we know as "presets". And precision? Half the oscillators in one of my analog instruments used to drop about a semitone when the refrigerator went on, while the others only dropped about a quartertone.

With computers you could have any kind of module you wanted simply by describing what you wanted to the computer, and any number of copies of it too. Of course you did have to learn to speak the computer's language, which when I started was a 24-bit assembly language called DAP and also FORTRAN IV. But once you wrote a description the computer could understand you could then replicate as many instances of it as you wanted.

It worked the same way for musical algorithms, or procedures. You could describe a musical process, for example a way to make a decision about what follows what, or when not to do something, to the computer in its own language, and you could build into your description ways to interact with that process using knobs or switches or whatever kinds of gizmos you could connect to the computer. I figured that whatever I could satisfactorily describe about how I decide things while composing, I should offload that decision-making to the computer so I could concentrate on aspects of the work that I couldn't describe. This was a wonderful exercise in self-awareness.
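To make that idea concrete for today's readers: once a compositional decision rule can be stated precisely, it can be handed to the computer, with parameters left open for live interaction. The sketch below is an editorial illustration in Python, not Spiegel's own code (her work was in DAP assembly and FORTRAN IV); the names next_pitch and step_bias, and the preference-for-small-steps rule itself, are invented for the example.

```python
import random

def next_pitch(current, scale, rng, step_bias=2.0):
    # Weight each candidate pitch by its closeness to the current one,
    # so small melodic steps are preferred over large leaps.
    weights = [1.0 / (1.0 + abs(p - current)) ** step_bias for p in scale]
    return rng.choices(scale, weights=weights, k=1)[0]

def generate(length, scale, seed=0, step_bias=2.0):
    # Apply the decision rule repeatedly. The seed makes a run replicable,
    # and step_bias is the kind of "knob" a player could turn while the
    # process runs: higher values favour stepwise motion more strongly.
    rng = random.Random(seed)
    melody = [rng.choice(scale)]
    for _ in range(length - 1):
        melody.append(next_pitch(melody[-1], scale, rng, step_bias))
    return melody

# A C major scale as MIDI note numbers (an anachronistic convenience:
# MIDI postdates the period described in this interview).
scale = [60, 62, 64, 65, 67, 69, 71, 72]
melody = generate(16, scale, seed=42)
```

The describable part (the weighting rule) is delegated to the machine; the undescribable part, judging whether the result is musical and adjusting the knob, stays with the composer.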

The way computers are typically used in music these days is vastly different, because you have so many layers of software, each imposing a concept space and a set of limitations and presuppositions, interposed between the computer and its user. Among those assumptions are that everyone will work pretty much the same way, that you record then edit, process and mix, that what an instrument is gets defined and remains static before you start composing, and other such built-in assumptions. The idea of the tool being in itself an expression of the user's own personal musicality is still out there, and to some degree accommodated by modular systems such as Kyma and Max, but I believe that is by far the minority of music tech in use.

With the sheer size of the old synthesizer systems, live performances and touring must have been hard to realise. Did you miss that element of direct interaction (which, for example, would have been easier to obtain in folk or rock)?
With electronics, until the Apple II, most of my performances were just tape concerts. I rarely enjoyed giving them. It was not really playing music for people the same way as when I'd performed on guitar or lute before composing took over. Once electronic and computer instruments got small enough to be portable and could be played live in concerts, it turned out to be even more frustrating, because I was limited musically to what I could do live in real time on something like an Apple II. The technology of that period was much simpler than what we have today and capable of much less in real time. Overall I never was much of a performer. But then I never wanted to perform. I do very much enjoy playing music live for just one or a couple of people, but my impetus to compose came from wanting to hear music that I had not been able to find anywhere, so I had to make that music myself. That's why most of my instrumental works are fairly easy-to-play pieces for solo keyboards and plucked instruments, and most of my electronic works are large resonant sonic textural compositions.



German composer Klaus Schulze often referred to his huge equipment parks as his "beloved" synthesizers. Do you, too, have a stronger emotional contact with these instruments than with the software tools of today?
To a tremendous degree. I identify completely with Klaus's feelings for his instruments. It's not just that those are physical instruments that you have a long history of physical contact with, that live with you in your home and are part of your emotional life, like a guitar or a fine old violin whose every scratch mark and wear pattern and physical sensation you know. It's that in the early days each electronic instrument was a unique custom system that you participated in designing or configuring. Or if not, as with the early more mass-produced instruments I have such as the alphaSyntauri or McLeyvier, they are heavily customized and personalized, and by modern standards few were made and far fewer still survive.

It has often been said that you helped to co-create the "New York Scene" for electronic music. How do you remember that scene?
I don't take any credit for creating it but I certainly participated. In many ways those were wonderful times, with a great sense of freedom and willingness to try things, to see what would happen, to see where something would lead, to try something that had never been done, not to see if it could be done but because you wanted it enough to go through with the work of bringing it into existence.

There were a lot of exciting things happening musically in the USA in the 70s. Did you feel a closeness, a sort of mutual respect and inspiration between your own work and that of, say, Pauline Oliveros, Alvin Lucier, Suzanne Ciani or even Philip Glass, as different as the approaches may have been?
Absolutely. And I'd include in that list also Mort Subotnick, Terry Riley and Steve Reich. There was certainly some competitiveness here and there, but much more so there was, at least as I remember, a sense of community. We were all part of a small counterculture that was up against a musical establishment that monopolized funding, performance venues and record labels: on one side the commercial (profit-motivated, mass-sales) music establishment, on the other "uptown" music, which was essentially post-Webernite atonalism. All of the composers who you and I mention here were at that time outsiders trying to remusicalize and rehumanize composing after several decades of extremely academic domination of non-commercial music.

I'm glad that you mentioned Pauline and Suzanne, because women composers were still few and far between. Technology is largely responsible for how much more common women composers are now, because it allowed women to get their music to the point where it could be heard (versus silent dots on paper), so the public and the powers-that-be could learn that we also could do this. Women are still to some degree underdogs in composing. Throughout the 70s I earned much of my living by composing soundtracks for film and video. But jobs, or maybe people who would accept a woman as their composer, were few and far between. Even today, several decades later, the percentage of major motion pictures scored by women is still appallingly low.

Are there composers today which you feel close to musically?

Honestly I tend to spend more time with music I've loved for a long time than exploring new music. I've also been very much enjoying music from other cultures, which is so much more available than when I was young. To answer you more specifically, yes, I do run across quite a few composers today with whom I feel close. There is so much more musical freedom than when I was young, both technological and aesthetic, so it is far more frequent to find music that I could almost, if not quite, have composed myself.

You then retreated from the New York music scene somewhat in the 80s, due to disappointment about technology getting more important than the music itself. Was there a particular moment which made you realise this disappointment or was it a gradual process? And: Do you feel this has improved somewhat over the last years?
It was gradual, but I need to clarify what I meant, given the reason you gave. It was not at all because technology was getting more important, as you put it here. It was that technique, any kind of technique, including non-technological kinds like virtuoso chops or complex combinatorial math, was being overvalued compared with musical content, which is direct, authentic, person-to-person human expression. There was too much pursuit of newness for its own sake, rather than using a new method because it made it easier to embody deeper, more honest or different kinds of emotion or other such subjective feelings in sound.

In a fortunate coincidence I found myself wanting to back away from the new music performance scene at the same time that it was first becoming possible to earn a living by creating music technology. So instead of performing my music while composing soundtracks on the side I was able to live by working on music software (the Syntauri, McLeyvier then Music Mouse and various consulting work) with a bit of teaching on the side (Cooper Union, NYU).

Your latest album, "Obsolete Systems" highlights your work for four synthesizers from the 60s. What was the attraction of looking back at precisely these vintage machines?
There were actually six different instruments on that cd, ranging from the 1960s into the 80s, so I'm not sure which four you mean. Regardless, all were the electronic instruments I had spent the most time with and very much loved. Each of them was unlike anything surviving in use today. Each deserved to have its voice heard and to be documented by music, yet except for the Buchla instrument the others were just about unrepresented in the zillions of recordings obtainable today.

Every musical instrument, every creative technology, has its own musical personality and voice. If you try a different guitar or different pianos you'll find they have different voices and that if you listen to them carefully each one will lead you to improvise different music. If that's true of such similar instruments, think how very much more of a unique voice an instrument so very dissimilar to any other would have. It holds an entire musical universe that is qualitatively different from any other.

What are your current compositional challenges?
The biggest challenge currently is to have music feel worthwhile to do again. There are so many far more important things happening in this world. Locally I've been heavily involved in animal rights and rescue work, and globally I am more concerned with the complexities of achieving a non-self-destructing ecological situation, versus the headlong drive toward even more devastating human overpopulation than we already have on this planet, and climate destruction that will cause indescribable levels of social disruption, suffering and loss. How can music be worth doing when the whole planet is on the brink?

Working up to actually recording has become a compositional challenge. I often play music for its own sake. I improvise a lot. Usually it's easy to find material I want to build on and that could easily evolve into a decent piece. But it doesn't feel worth the time to record it or write it down. So I don't. People say I should do that more often. But it feels both self-indulgent and irrelevant at this point to put that kind of time and energy into mere music, which the world is very full of already, far fuller than ever before.

What was the stimulus to write "Music Mouse"?
When I got my first Mac, a 512ke, with the first mouse I ever saw, it seemed to me that the most natural thing for a music person like me to want to do was to push sound around with that mouse. So I coded up a way to do that and kept messing around with making the result more musical. After a while so many friends and then other people were asking me for copies that I had to find a more organized way to get it out to larger numbers of people. So it became a "product", but I really wrote it just for myself.

Are there still orders coming in for the program today?
I do still get an order for it now and then, but it is so far out of date that it only runs on obsolete computers. I do want to port it to the computers of today. Meanwhile those of us who have obsolete computers lying around can still use it, and I'm told it runs fine in some emulators. For example the Atari ST version of Music Mouse apparently works fine on a PC using the STEEM Atari emulator. (I only have Macs, Ataris, Apple IIs and Amigas, no PC, so I'm just reporting what I've been told here.)

Has it happened that you went to see a concert and someone used "music mouse" for their performance?

Yes, and it's very pleasing. At times I've heard people use the program in ways I'd never have thought of myself in a million years. It's wonderful when that happens.

While you've released tons of tracks on various compilations, only a few full-length albums have come out over the years. How come?
There are several answers to this question, and probably none of them really explain this.

The obvious answer is that if someone asks me for a piece for a compilation it's relatively easy to do compared to a cd.

As a musical form the cd is aesthetically much more challenging than the LP, which was two short sides with a break between them. It requires a sequence of musical movements that work as a single hour-long experience when run straight through without breaks. Sometimes a piece you really want to put out on cd just doesn't work aesthetically in the context of the other pieces, and it has to be left out, or something not as good gets inserted because the transitions work better. I think it's good that with electronic distribution the length and form of a distributable production can be whatever the creator wants. With electronic distribution I do hope to be able to make public more of my music, in forms and durations that are determined by musical or aesthetic qualities rather than by mechanical or commercial constraints.

A deeper answer is that I just don't give distribution, including making more full-length cds, a high enough priority for it to happen. I've had several partially completed cds and any number of pieces-in-progress laid out on my hard drives for years. But somehow I just don't find the motivation to bring them to the "product" stage of their evolution. Once the music is made and the creative work is done, the rest of the process is something I find really difficult to get myself into. And I hate the whole business process that comes after that. My general experience has been that record labels want you to do absolutely all the work and front all the money, and then you get nothing back except being told not to do what you want. Or if you do the production and distribution yourself, you have to be into self-promotion in ways and to degrees that are just not in my nature. You have to devote a lot of time to non-musical work just to get the cd made and known and into the hands of those who want it, and you have to have the ego to push your work.

So the rock bottom answer is most honestly probably this: The part I'm good at and enjoy is actually making the music, even including the finishing work. But I completely bog down when it comes to making the music into a "product" and then promoting and selling it. The idea of dealing with that whole stage of getting cds out is just about enough to make me not want to do music at all, except that I really do love and enjoy it.

The software revolution in electronic music has certainly meant that more non-(classically) trained musicians are producing and releasing their work. Do you feel this is an interesting development or a cause for concern?
This was one of the greatest pleasures of Music Mouse, seeing it enable so many people who loved and wished they could play and compose music find themselves suddenly able to do so regardless of previous musical education or physical ability. It's absolutely wonderful that many more people are now able to make music actively themselves instead of only being able to passively listen.

At the same time I do feel that something of the rareness and preciousness of musical experience has been lost now that music is ubiquitous in all environments and its transitory nature has largely disappeared. Instead, we now have a surplus, and the rare commodity for which demand far outstrips supply is no longer the music but the listening ear that is not otherwise too busy to experience it.

It is taking quite a while for us to realize the full degree to which electronic and digital media call into question so many previously rock-solid assumptions about music. In the January 1992 issue of Electronic Musician I wrote about this in more depth than I can go into here, so your readers might want to check it out on my website. It starts:

    "Once upon a time almost everyone made music. Households were self-sufficient, making their own food and clothing too. People were generalists, doing their personal best at each thing. People sang or played at whatever their level of skill...". (I'll paraphrase what follows.) Then ensued the several centuries of musical specialization from which we are just now emerging. During that long period a dominant elite class of revered musician experts made everyone else feel like it would be a mistake to even try, and music schools often viewed their purpose as being to convince as many students as possible to give up and drop out.

    That essay ends "Despite music home made at anyone's own level, there will always be artists great enough to make us want to sit passively as listeners". It's natural that if more people make music there will be more really strong music made.

So I see much more upside than downside. But the problem of competition becomes more extreme for both creator and listener, with more music competing for listeners and so much music to sort through to find what you really want to be listening to. When each household still made its own music several centuries ago, and in non-media-served cultures, people were not expected, or expecting, to be conversant with the music of zillions of other people and places. Life was still lived in a much more local culture. For our global culture there is the problem of information overload, and music is just another kind of information.

What I do find a cause for concern, or at least something that seems to have become much rarer amid all of this otherwise generally wonderful technology, results from the degree to which people now tend to start with a supplied sound, usually a loop, and then build in reaction to what they are hearing. There is absolutely nothing wrong with doing this, of course. But I fear that music comes from inside us much less often than used to be the case. The experience of listening to your own mind in silence and forming, clarifying, and holding in your mind a spontaneous evolving sonic vision, of listening to your own personal musical imagination as it processes what you are feeling within yourself, is being drowned out of our musical culture by the ways the purveyors of today's music tech make it as easy as they can to start a new piece and keep it going, so that you'll buy more of their stuff. I am hearing more and more editing, selection and juxtaposition in the new music I run across, and less of what feels like genuine self-expression from within the individual, though I certainly do hear that at times in new works too.

By Tobias Fischer 

Many thanks to "Beat Magazine" and, of course, to Laurie

The Expanding Universe (1980) Philo
Unseen Worlds (1991) Scarlet
Obsolete Systems (2001) Electronic Music Foundation
Harmonices Mundi (2003) Table of the Elements

Laurie Spiegel

