"The Expanding Universe", Notes by Laurie Spiegel

"The Expanding Universe", Notes by Laurie Spiegel

Pictured above: Laurie Spiegel at GROOVE's digital console. Photo by Emmanuel Ghent.

About These Recordings

In preparing these recordings after transferring them to computer from the original reel-to-reel tapes, I decided to err on the side of doing less audio signal clean-up rather than more. While it would have been possible to optimize parts of the signal further in quite a few places, audio components, once removed, cannot really be restored, so it is safer to leave them in. As audio technology continues to evolve, there are likely to be such fine tools for cleaning up sound recordings that today's techniques will seem as primitive as the splicing block and razor blade now seem to us. So I have intentionally left in bits of tape hiss, distortion, and buzz from a leaking sampling rate oscillator rather than compromise the more desired sonic content. In the future perhaps someone will do an ultimate clean-up of those minor artifacts. My apologies if these bother you. Hopefully the music will carry you past them with hardly a glance.

Context and Tech Overview

To give you some context for understanding the limits and nature of the technology we had available at Bell Labs during the period when I composed these pieces, here is what we had to work with:

Room 2D-506 at Bell Labs, Murray Hill, contained the computer console, where we did most of our work. There was a video display that showed the time-varying functions we composed. All music in GROOVE was represented in digital memory as abstract functions of time, parallel series of point pairs, each pair consisting of an instant in time and an instantaneous value. The sampling rate for these functions, which would be used mostly as control voltages, was clocked by a big old-fashioned analog oscillator that was usually set to 100 Hertz, each cycle of the oscillator pulsing one run through the code, in which the computer read all of the real time input devices and put out the sample at that time point from each of the time functions. (I would set it to 60 Hz when sync'ing with video.)
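
To make that representation concrete, here is a minimal sketch, in modern Python rather than the FORTRAN IV and assembly of the original system, of how a time function stored as point pairs might be read out at the 100 Hz clock; the function and variable names are invented for illustration.

```python
# Rough modern illustration only -- not GROOVE's actual code.
# A "time function" is a series of (time, value) point pairs; at each tick of
# the sampling-rate oscillator (usually 100 Hz) the current value is read out.

def sample_time_function(points, clock_hz=100.0, duration=None):
    """Sample a piecewise-constant time function at the given clock rate.

    points  -- list of (time_in_seconds, value) pairs, sorted by time
    returns -- one instantaneous value per clock tick
    """
    if duration is None:
        duration = points[-1][0]
    samples = []
    i = 0
    for tick in range(int(duration * clock_hz) + 1):
        t = tick / clock_hz
        # hold the value of the most recent point pair at or before time t
        while i + 1 < len(points) and points[i + 1][0] <= t:
            i += 1
        samples.append(points[i][1])
    return samples

# Example: a control function that steps from 0 up to 1 at t = 0.5 s and back down at t = 1.5 s
ramp = [(0.0, 0.0), (0.5, 1.0), (1.5, 0.0)]
print(sample_time_function(ramp, duration=2.0)[48:53])   # values around the 0.5 s step
```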

There was a music keyboard with 3 octaves of piano-style keys that was really just a bank of on/off switches (no sensitivity to how the keys were touched). There was a large 3-D joystick, housed in a floor-standing red box from which a rod with a knob protruded that could be moved around and would remain at any position, the result of a clever assemblage of what looked like bicycle chains and counterweights. We had a small box with 4 knobs, 4 set switches (toggles that stay where you put them) and 2 momentary-contact push buttons on it. We also had a little keypad of 3x4 buttons, suspiciously similar to a normal Touch Tone phone's keypad. There was a then-state-of-the-art computer keyboard for text entry with an alphanumeric visual display on which we could see what we were typing, and a card reader we often still used for entering data, for example in situations where we might want to reenter the same data repeatedly, even though the card punch machines were a good walk down the hall and a couple of flights down the stairs away whenever we needed to change what a card encoded.

Through a glass window we could see the room-sized DDP-224 computer, which had to be kept at 58 degrees or cooler. Its console had been mounted through the wall next to the window, so we could control it. It was one of those wonderful old consoles where every bit in every register was displayed and manipulatable as a push-button-containing light bulb that turned on when the bit was set to 1 and went dark when the bit contained 0. Pushing the button toggled that bit on or off, so if you got caught in an infinite loop, for example, you could just enter a halt instruction at a memory location in the data register that stored the loop counter and then change the data register’s data to escape from the loop. Obviously when one or more of those light bulbs burned out it could get very confusing. You wanted to fix things while they ran because you didn’t want to have to reboot this computer any more often than absolutely necessary. Rebooting was such a long complex process that I don’t think any of us ever memorized the whole sequence of actions no matter how many times we had done it. You would have to press the little light bulb buttons to turn the bits on and off to enter an opcode and operand and put data in the various registers and then manually execute that instruction for each step of the whole boot-strap process.

GROOVE's digital magnetic tape drive.


Down a long hallway from the computer room that contained the above was the analog room, Max Mathews's lab, room 2D-562. That room was connected to the computer room by a group of trunk cables, each about 300 feet long, that carried the digital output of the computer to the analog equipment to control it and returned the analog sounds to the computer room so we could hear what we were doing in real time. The analog room contained 3 reel-to-reel 1/4" two-track tape recorders, a set of analog synthesizer modules including voltage-controllable lab oscillators (each about the size of a freestanding shoe box), and various oscillators and filters and voltage-controllable amplifiers that Max Mathews had built or acquired. There was also an anechoic sound booth, meant for recording, but we often took naps in it during all-nighters. Max's workbench invariably held whatever projects he was working on: a new audio filter, a 4-dimensional joystick, experimental circuits for his latest electric violin project, that kind of stuff.

Because of the distance between the 2 rooms that comprised the GROOVE digital-analog hybrid system, it was never possible to have hands-on access to any of the analog synthesis equipment while running the computer and interacting with its input devices. The computer sent data for 14 control voltages down to the analog lab over 14 of the long trunk lines, where it passed through 14 digital-to-analog converters (which we each somehow chose to calibrate differently). We would set up a patch in the analog room's patch bay, then go back to the computer room, and the software we wrote would send data down the cables to the analog room to be used in the analog patch. Many, many long walks between those two rooms were typically part of the process of developing a new patch that integrated well with the controlling computer software we were writing.

So how was it possible to record a piece with those rooms so far apart? We were able to store the time functions we computed on an incredibly state-of-the-art washing-machine-sized disk drive that could hold up to a whopping 2,400,000 words of computer data, and to store even more data on a 75 ips computer tape drive. When ready to record, we could walk down and disconnect the sampling rate oscillator at the analog lab end, walk back and start the playback of the time functions in the computer room, then go back to the analog lab, get our reel-to-reel deck physically patched in, threaded or rewound, put into record mode and started running. Then we’d reconnect the sampling rate oscillator, which would start the time functions actually playing back from the disk drive in the other room, and then the piece would be recorded onto audio tape.

We didn't have a lot of the structures that are assumed today. There was no concept of an "instrument" in the sense of a single or polyphonic voice. There was no such thing as an envelope generator. To make an amplitude envelope you would write a FORTRAN IV routine that would compute a time function, send it to the analog lab as one of the 14 control voltages, and patch it to a voltage-controlled amplifier. So the 14 control lines got used up pretty fast, a major limitation of the system. The 14 trunk lines and DACs were typically used as 5 frequency (pitch) controls and 5 amplitude controls (including envelope generation, overall levels, fades, etc., one amplitude line for each pitch line, totaling 10), leaving the remaining 4 lines available for controlling the amount of reverb, a global filter cut-off and Q, and maybe sending a sharp click (an on/off bit toggle) to pulse one of the series of beautiful resonant filters that Max built. Alternatively, I often patched the system with the 14 control lines used as 4 frequencies, 4 amplitudes, 4 filter cut-offs, reverb, and filter Q, which used up all 14 control lines. Timbral control was by subtractive synthesis only, by patching sawtooth oscillators through low pass filters and plate reverb.
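
As a hypothetical sketch of the kind of patch allocation and computed envelope described above (modern Python standing in for the original FORTRAN IV routines; the names and envelope shape are invented):

```python
# Hypothetical sketch: dividing GROOVE's 14 control lines among roles, and
# computing an amplitude envelope as just another sampled time function to be
# sent down one line to a voltage-controlled amplifier.  Illustration only.

CLOCK_HZ = 100  # the sampling-rate oscillator described above

# One of the typical ways the 14 lines were divided, per the text:
PATCH = (
    ["freq_%d" % v for v in range(1, 5)] +    # 4 pitch control voltages
    ["amp_%d" % v for v in range(1, 5)] +     # 4 amplitude control voltages
    ["cutoff_%d" % v for v in range(1, 5)] +  # 4 filter cut-off controls
    ["reverb_mix", "filter_q"]                # the last 2 lines
)
assert len(PATCH) == 14

def envelope(attack_s, decay_s, peak=1.0):
    """Compute a simple attack/decay amplitude contour as a list of samples."""
    n_attack = int(attack_s * CLOCK_HZ)
    n_decay = int(decay_s * CLOCK_HZ)
    attack = [peak * i / n_attack for i in range(n_attack)]
    decay = [peak * (1 - i / n_decay) for i in range(n_decay)]
    return attack + decay

env = envelope(attack_s=0.02, decay_s=0.5)   # a short plucked-sounding contour
print(len(env), max(env))                    # -> 52 samples, peak value 1.0
```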

The Computer Control Company (3C) DDP-224 computer that this music was made on.


As to the computer itself, it was a 1965 DDP-224 General Purpose Computer from "3C" (Computer Control Company) that came standard with 4096 (4k) 24-bit words of ferrite core memory, but had been expanded to 32k words, installed in banks of 8k of core, each bank the size of a filing cabinet. The memory was so stable that it was possible to remove a core memory board, walk down the hall to 2D-520 (an identical computer set up for computer graphics instead of audio), plug it in, and then use the data still stored on the board. The identical computer in room 520 had a Rand tablet (an early drawing surface with stylus) for input instead of the music-style keyboard, but most of the rest of the I/O was the same, except that a rudimentary frame buffer had been set up whose output could be recorded by a 16 mm movie camera. That was the room where I did my computer graphics and animations and the VAMPIRE system ("Video And Music Program for Interactive Realtime Exploration/Experimentation").

Hands-on computing: DDP-224 control panel with lightbulb pushbuttons to toggle bits in the registers


This 32k DDP understood FORTRAN IV and DAP II 24-bit assembly language. It could do an integer add in as little as 3.8 microseconds, but a floating point multiply could take up to 115.9 microseconds. You can bet we all wrote the tightest, smallest, fastest code we could. GROOVE could compute up to 200 functions of time, but because we only had 14 channels of usable output, we rarely needed more than a small fraction of the 200, although in addition to values output to the DACs, we also used time functions to store internal computational variables used in computing the music.

There was nothing like what we now know as an "operating system" or "user interface" other than typing in code and whatever use of the various input devices we each programmed into our own code. But there was a file management system designed to negotiate data transfer between the computer and the data tape and disk drives, as well as to the Fraser Digital Data Loop (a.k.a. "the spider"), which allowed printing via a remote IBM 360 computer located elsewhere. We used a text editor called SLED ("Simple Little Editor") that was similar to QED but much smaller and less powerful.

The spectacular thing was the GROOVE system itself, the whole system of software and hardware put together in the late 1960s by Max Mathews and F. R. (Dick) Moore, which provided a unique infrastructure we could use to read input devices, store and access the functions of time, and output data to analog devices. The design of the system was altogether unique, ambitious in its generality and power, and though it was not easy to use, I found it unbelievably useful.

The reason that I was attracted to this (amazing kludge of a) system was that I had been working for some time with analog modular instruments (Buchla, Electrocomp, Moog and Ionic/Putney) and had become frustrated with the relative simplicity of their control logic and their complete lack of memory storage. Realtime interaction with sound and interactive sonic processes were major factors that I had fallen in love with in electronic music (as well as the sounds themselves, of course), so non-realtime computer music didn't attract me. The digital audio medium had both of the characteristics I so much wanted, but it was not yet possible to do much at all in real time with digital sound. People using Max's Music V were inputting their data, leaving the computer running over the weekend, and coming back Monday to get their 30 seconds of audio out of the buffer. I just didn't want to work that way.

But GROOVE was different. It was exactly what I was looking for. Instead of calculating actual audio signal, GROOVE calculated only control voltage data, a much lighter computational load. That the computer was not responsible for creating the audio signal made it possible for a person to interact with arbitrarily complex computer-software-based logic in real time while listening to the actual musical output. And it was possible to save both the software and the computed time functions to disk and resume work where we left off, instead of having to start all over from scratch every time, or being limited to analog tape editing techniques after the fact, once the sounds existed only in a locked state on tape.

What GROOVE had that no analog synth did back then was a bunch of physical input devices, a way to hear sound, and, in between, a programmable general purpose computer. We could write computer programs that connected the numerical values that came into the computer from the input hardware to the output lines. And we were not limited to any specific way of connecting them. We could do whatever we wanted with the incoming values, from using them directly to writing whatever ways to use and interpret those numbers we could think of. Those transfer functions, the logic we wrote that sat between input and output, were the heart of the power and freedom of the system. And it was in that nexus that composing was free to take new forms.

The trade-off for being able to do these things was that the GROOVE system had extremely limited orchestrational variety or timbral control. I had available up to 5 sawtooth oscillators, voltage-controlled amplifiers, and filters plus one reverb and a mixer, but with only 14 control lines, that was already more variables than the system could control. (I also had the ability to create pitched percussion sounds by sending a sharp transient to one of Max’s analog filters if the filter Q were set high enough that it would resonate, and the ability to do this in multiple channels as in “Clockworks” and “Drums”.) Because of the limitations in timbre, it seemed to me that creativity could most productively manifest itself in the domains of such parameters as pitch, motive, rhythm, counterpoint, harmony and in the design of the processes by which a person could interact with these and other aspects of musical material.

For all pieces on this album, all envelopes (the amplitude contours of all individual notes), the stereo placement, and the pitches were computed in real time for each voice. Above the level of mere parameters of sound were more abstract variables: probability curves, number sequence generators, ordered arrays, specified-period function generators, and other such musical parameters as were not, at the time, available to composers by any other means of making music in real time.

Along with the direct control parameters for the analog hardware, such abstract variables could also be stored as sampled time functions on the DDP-224's disk pack and on digital tape. Because GROOVE was a system for creating and editing functions of time in the abstract, it did not presume the division of sound into events, nor the use of point pair series, nor the use of any kind or source of data in any particular way. Number was to the computer what voltage had been to the analog synth, but with numbers, all the musical potential of logic and math opened up.


Notes on the Pieces


Patchwork (Dec 1974 / Apr 1975 / Mar 1976)

The 4-voice piece "Patchwork" consists of relationships among four short melodic motives and four rhythmic patterns. In the interactive program I wrote to compose this piece, I programmed the computer to read the buttons on a standard telephone-style 4-by-4 keypad so that the keys switched between 4 melodic motives and 4 rhythms (4 different sequences of 4 loudness levels) and toggled some standard compositional manipulations on and off (retrograde, inversion, augmentation, and diminution), while other computer input devices allowed me to control reverb mix, low pass filtration and envelope decay rate. I composed each of the 4 voices in the piece independently, laying down the set of time functions that defines each voice, one voice at a time, as a pitch-and-amplitude pair. This multitrack-like method was unusual for me, as I almost always through-compose, developing all voices simultaneously from the start to the end of a piece.
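
As an illustration of those standard contrapuntal manipulations (retrograde, inversion, augmentation, diminution), here is a sketch in modern Python; the motive and the note representation are invented for the example, not taken from the piece.

```python
# Illustrative sketch of the classic contrapuntal manipulations that were
# switchable from the keypad in "Patchwork".  A motive is a list of
# (pitch_in_semitones, duration_in_beats) pairs; the values here are invented.

def retrograde(motive):
    """Play the motive backwards."""
    return list(reversed(motive))

def inversion(motive, axis=None):
    """Mirror each pitch around an axis (default: the first note's pitch)."""
    if axis is None:
        axis = motive[0][0]
    return [(2 * axis - pitch, dur) for pitch, dur in motive]

def augmentation(motive, factor=2):
    """Stretch all durations by a factor."""
    return [(pitch, dur * factor) for pitch, dur in motive]

def diminution(motive, factor=2):
    """Compress all durations by a factor."""
    return [(pitch, dur / factor) for pitch, dur in motive]

motive = [(0, 1), (4, 1), (7, 0.5), (5, 0.5)]   # an invented 4-note motive
print(retrograde(motive))
print(inversion(motive))
print(augmentation(motive))
```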

“Patchwork” was inspired by the structure of the isorhythmic motet and the spirit and modality of banjo music. During the period when I composed it, the post-Webernite atonal pointillist aesthetic was dominant in contemporary concert music. So I composed this piece in reaction against an overdose of arrhythmic, non-melodic, often academic-sounding, overly cognitive “contemporary music”, and against the way it often feels to get up in the morning.

Pentachrome (1974)

Max Mathews had built several wonderful resonant filters with voltage-controlled cut-off frequency and Q that could be triggered to oscillate by a sharp transient pulse, an easy form of signal for a computer to produce. For the 5 voices in this piece, I used 10 of the 14 control voltages for the amplitudes and pitches of the 5 sustained tones. The remaining 4 control lines gave me reverb mix, overall loudness, and the frequency cut-off and Q of one of Max's filters. I produced the bursts of notes by having the computer hold back the notes it calculated it would have been playing in steady rhythm had I not thrown a toggle switch; while the switch was thrown it would store up the series of notes it computed, so that when I threw the switch back it released the stored series of pulses in a burst, a very rapid stream. As in most of these pieces, I used 4 amplitude levels for both the sustained and percussive sounds.
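
A minimal sketch of that hold-and-release idea, under the assumption that computed notes are simply buffered while a toggle is set and flushed as a rapid burst when it is cleared (invented names, modern Python, not the original program):

```python
# Minimal sketch of the "store up, then release in a burst" behaviour described
# above.  The toggle semantics and timing here are assumptions for illustration.

class BurstHold:
    def __init__(self):
        self.hold = False
        self.buffer = []

    def set_toggle(self, state):
        """Throw the toggle.  Clearing it releases everything stored so far."""
        released = []
        if self.hold and not state:
            released, self.buffer = self.buffer, []
        self.hold = state
        return released

    def note(self, pitch):
        """A note the program would have played in steady rhythm."""
        if self.hold:
            self.buffer.append(pitch)   # withhold it for later
            return []
        return [pitch]                  # play it now

bh = BurstHold()
bh.set_toggle(True)
for p in (60, 62, 64, 65):
    bh.note(p)                          # nothing sounds while the toggle is set
print(bh.set_toggle(False))             # -> [60, 62, 64, 65] released as a burst
```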


Old Wave (July 1975)

This was a first sketch for the first movement of my ballet score “Waves”, commissioned during the summer of 1975 by the American Dance Festival for the Kathryn Posin Dance Company, for performance there with instrumental ensemble. The full 19-minute ballet is scored for 9 instruments and computer-generated electronic tape, and takes the form of a rondo, alternating 4 electronic sections with 3 instrumental insets. Per the title, an organic, undemarcated feeling was wanted, so I made the electronic sections in such a way as to minimize any sense of metric constancy. The four “wave” sections contain, in fragmented form, the musical materials out of which the “duet” sections were constructed. Each successive “duet” was to feel more childlike in its musical material than the preceding one: early childhood musical reminiscences, primal and innocent, reformulating themselves briefly before being washed away again in a continuing sea of fragmented experience, waves washing over each other as a primordial sea of not-yet-formed and already-dissolved experience.


A Folk Study (February 1975)

I composed “A Folk Study” while trying to come up with music to use as the theme and credits music for “VTR - Video and Television Review”, the weekly experimental video series from the TV Lab at WNET, Channel 13 in New York City. Although I was a video artist in residence there, I ended up mostly doing soundtracks for other artists’ videos instead of making video works of my own. This little folk fanfare is a nod not only to my love of old time music but is included here as an acknowledgement of the willingness of Philo and Rounder Records, two folk labels, to carry this music when none of the “new music” labels I tried were interested.


Drums (March 1975)

"Drums”, a polyrhythmic work composed in 1975, reflects my interests in African
and Indian musics. I created several channels of amplitude-controlled pulses in time, and connected these pulse outputs to a 4 fixed-frequency resonant analog filters that Max Mathews had built. This was a variant use of the same software human interface I also used to create “Patchwork” and several other pieces.


Appalachian Grove I (June 1974)
Appalachian Grove II
Appalachian Grove III

I composed this set of three movements during May of 1974, when I had been studying computers for nearly a year but had not yet done my first piece using one. At that time, I had a research fellowship from the Institute for Studies in American Music, and was studying American music with H. Wiley Hitchcock. I had just gotten back to New York from the first of my trips to the Blue Ridge Mountains of western North Carolina, carrying my old Bacon Belmont banjo, a Uher portable reel-to-reel tape recorder and a sleeping bag, in search of ancient mountain modal music. While composed under the influence of the rhythms, modes, and energies of that music, these pieces are probably best placed in that large category of composed music which distorts or alters more than it embodies folk material. Still, as with much of my music, both my earlier folk music roots and the practice of improvisation make the piece what it is.

Late one night in a field behind an old barn, the fiddles, dulcimers, and banjos of old mountain families who had gathered from hundreds of miles around, played timeless modal Celtic tunes hypnotically over and over while the full moon went behind our earth’s shadow in a magical total eclipse, and then slowly emerged to shine again. I could not help feeling that the same music must have conjured similar eclipses many centuries back into the past, thousands of miles away, among the druids.

In figuring out the computer logic I used to compose this series of movements, I realized that it isn't enough to calculate a melodic or harmonic progression as a series of pitches scheduled in time. These aspects of music depend on each other but also, importantly, on their placement relative to perceived rhythmic meter. Even if there is no actual or explicit metric pattern and the beat structure is not cyclic, the mind will perceive beat groups and feel the notes as falling on stressed or recessive beats. As to harmony ("A Harmonic Algorithm" being an exception), I have not tended to use chord progressions as a basis for evolving musical material forward, nor to focus on harmonic or chordal progression as a basis for deriving melody, but conversely: harmony tends to turn up as a byproduct of melodic line, or sometimes of the intersection of multiple lines, the lines being primary.

“Appalachian Grove” was the first piece I completed using a computer. It embodies two breakthroughs of the early 1970s: composers breaking our art free from the post-Webernite atonal aesthetic, and the first use of digital logic systems for realtime interactive music-making.

I wrote this on the back of the 7” tape box at the time I composed these movements:

• Ap Grove I “From greatest simplicity grows the greatest wonderment.”
• Ap Grove II “The primordial musical sound, cyclic eclipsed, cyclicly reemerges.”
• Ap Grove III “Whenever any one man plays, the entirety of music is reborn.”


The Expanding Universe (Feb-March 1975)

Although the term “minimalist”, borrowed from the visual arts, was commonly used for the style of the slowly changing music that began to gain prominence in the early 1970s (think Terry Riley, Steve Reich, Phil Glass), and I have often been grouped with them in being called “minimalist”, I differentiate “minimalist” music from what we used to refer to as “slow change music”. The latter, represented here by the title work of this album, instead of being built by accretion of many individual musical events to form a texture, works by allowing the listener to go deeper and deeper inside of a single sustained texture or tone.

The aesthetic aim is to provide sufficiently supportive continuity that the ear can relax its filters, no longer on guard against the sudden changes that so much of today's ambient sound, and much music, keeps our sensitive ears on hair trigger to safeguard against. The violence of sonic disruption, disjunction, discontinuity and sudden change desensitizes the listener and pushes us away, so that we are no longer open to the subtlest sounds. But with continuity and gentleness, the ear becomes increasingly re-sensitized to more and more subtle auditory phenomena within the sound that immerses us. Instead of being swept along, as with the cascades of many running notes in suddenly-changing blocks of time that "minimalist" music so often consists of, we open up our ears more and more to the minute phenomena that envelop us. This is also not "ambient music", a term that came into use some years later. This is music for concentrated attention, a through-composed musical experience, though of course it can also serve as background.

Technically, the title track of the Philo LP was one of the longest single tracks to be cut onto vinyl at the time. So putting this work out on an LP, the main recorded music distribution medium in use then (before cassette, 8-track, CD or internet), posed unusual technical problems. The longer the music recorded on a side of a vinyl LP, the narrower and closer together the grooves have to be cut, and the worse the sound quality tends to become. A certain amount of LP surface noise works itself into the piece really well, aesthetically, but it was not clear at the time when Philo pressed the original LP whether we'd be able to pull off a good-sounding record with the full dynamic range of this piece without the needle jumping the grooves. The trick is to get the best possible balance, for the particular cutter, the specific vinyl material that will be used, etc., between being a hair too loud, so that the loud grooves are too wide and the needle skips, and being too quiet, so that the signal-to-noise ratio is too low when the amp is turned up to a proper listening level.


East River Dawn (September 1976)

This work used basically the same interactive logic as Patchwork, but with more highly developed computer controlled filtration and reverb, and I must admit to some overdubbing and analog post-processing to produce the richness of texture I wanted. It was choreographed by David Woodberry for a beautiful outdoor dance concert in Union Square Park in New York, blaring out over big speakers on a sunny summer day.

As to my inspiration for this work, the title tells it. Picture coming to the East River’s edge, with the breath-taking sense of spaciousness, light and energy there compared to the dense crowded rectangular small spaces we normally inhabit in New York City. Picture the feeling of having stayed up all night on the Lower East Side before it became known as the “East Village”, then looking out through the hazy dawn air at the river. Things are already busy. There are tug boats and freighters, a Fire Department boat, and seagulls flying around looking for their breakfasts. The day feels full of potential.


The Unquestioned Answer (June 1974)

This work is structured by a first increasing then decreasing probability that notes in the basic melodic cycle will be replaced by the computer with notes I played on a keyboard. This piece used fundamentally the same logic that I wrote to make “Appalachian Grove”, and which later evolved into the basis for the logic I used to make “The Orient Express”. In it, each beat of the rhythmic cycles is weighted for loudness according to its position in the metric cycle (a recursive hierarchy of strong beats with quieter weaker beats between them). Pitches from the predetermined fixed pitch melodic cycle are replaced with pitches from a second pitch array to varying degrees depending on the knob-controlled variables that I adjusted while listening to the algorithm’s output.
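
A hedged sketch of that kind of logic, with an invented metric-weight rule and replacement probability standing in for the original program's specifics:

```python
# Illustrative sketch: each beat in a metric cycle gets a loudness weight from a
# recursive strong/weak hierarchy, and cycle pitches are probabilistically
# replaced by pitches from a second array.  All specifics here are invented.

import random

def metric_weights(beats_per_cycle=8):
    """Strong beats loud, progressively weaker beats quieter (power-of-two hierarchy)."""
    weights = []
    for beat in range(beats_per_cycle):
        level = 0
        b = beat
        while b % 2 == 0 and b != 0:
            b //= 2
            level += 1
        weights.append(1.0 if beat == 0 else 0.4 + 0.15 * level)
    return weights

def realize(cycle_pitches, replacement_pitches, p_replace):
    """One pass through the melodic cycle, replacing notes with probability p_replace."""
    weights = metric_weights(len(cycle_pitches))
    out = []
    for beat, pitch in enumerate(cycle_pitches):
        if random.random() < p_replace:
            pitch = random.choice(replacement_pitches)
        out.append((pitch, weights[beat]))
    return out

cycle = [62, 64, 66, 69, 66, 64, 62, 57]        # invented fixed melodic cycle
alternates = [61, 63, 68, 71]                   # invented second pitch array
print(realize(cycle, alternates, p_replace=0.3))
```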

A more in-depth explanation can be found in my paper “An Information Theory Based Compositional Model” in Leonardo Music Journal, Vol. 7, 1998, MIT Press, online.

My dogs found this piece very relaxing even though they had no idea that information theory was one of the major innovative contributions to come out of Bell Labs.


The Orient Express (June-July 1974)

I figured that every improvising musician, like many who played blues, bluegrass or jazz, should do at least one “train song”.

I thought of this particular train because during one of the long winter breaks between terms at Oxford, four of us students decided to go to the Gare du Nord in Paris and to take whatever train would pull out next. It happened to be the Orient Express bound for Istanbul. The several-day ride was both fascinating and grueling, introducing us to many people with whom we spent extended time, bringing us awesomely rhythmic Bulgarian folk dance music at some stops and scary border guards at others.

In this somewhat-programmatic work, the train moves between colors of harmony and ways to make music, alluding to various grassroots ethnic musics and emotional moods, though overall the feeling is of continuing forward motion and of moment to moment attention and discovery.

The illusion of perpetual acceleration heard during the first several minutes came from Dr. Kenneth Knowlton of Bell Labs, and we worked out the code together to be able to hear it. It is essentially a rhythmic analog to Roger Shepard's ever-rising pitch (a.k.a. "Shepard Tones"). The effect is achieved by gradually decreasing the amplitude of the weak beats of a rhythmic cycle until, when double the original tempo is reached, those weak beats have decreased to silence. At that point those beats drop out and a new process of decreasing a new set of alternate weak beats begins. Both the speed of apparent acceleration (or deceleration) and the base tempo at which it occurs were controllable by turning knobs.
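
A simplified sketch of that illusion (my own illustrative model, not the Knowlton-Spiegel code): treat the acceleration as a phase running from one tempo toward its double, fade the weak beats toward silence as the phase completes, then relabel the surviving beats and begin again.

```python
# Simplified sketch of the perpetual-acceleration illusion described above:
# as the tempo slides from T toward 2T, every second beat fades to silence, so
# that arriving at 2T (with half the beats gone) sounds just like T again.
# This is an illustration of the idea, not the original Knowlton-Spiegel code.

def acceleration_cycle(base_tempo_bpm, phase, beats=8):
    """Return (tempo, [amplitude per beat]) for one bar at a given phase in [0, 1).

    phase 0.0  -> base tempo, all beats at full amplitude
    phase -> 1 -> tempo approaches double, odd-numbered (weak) beats fade out
    """
    tempo = base_tempo_bpm * (2.0 ** phase)          # smooth tempo doubling
    amps = [1.0 if beat % 2 == 0 else 1.0 - phase    # weak beats fade with phase
            for beat in range(beats)]
    return tempo, amps

for phase in (0.0, 0.5, 0.999):
    tempo, amps = acceleration_cycle(120, phase)
    print(round(tempo, 1), [round(a, 2) for a in amps])
# At phase ~1 the weak beats are silent; drop them, reset the phase to 0, and
# the surviving beats at double tempo are indistinguishable from where we started.
```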

As to the pitches which ride that rhythmic pattern, the idea was based on constrained random corruption of repeating melodic cycles. I had fine-grain real-time knob control of the probability weightings that pitches from different pitch sets would be introduced by the pitch selection algorithm to evolve the melodic material forward. At a higher level of conceptualization, what I was doing was to literally draw a classic Shannon informational entropy curve in real time. To my mind, entropy may well be the most powerful underexplored variable in all of music, the most general highest level variable, the one by which music is made to feel really alive. (Thank you John Pierce for at least starting us in that neglected direction.)

On a more mundane level, I calculated stereo placement based on current channel density, to ensure a roughly equal distribution of notes left and right.
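
A tiny sketch of one possible density-balancing rule of that kind (an assumed mechanism, for illustration only):

```python
# Tiny illustration of balancing stereo placement by channel density:
# each new note goes to whichever channel has sounded fewer recent notes.
# This is an assumed mechanism for illustration, not the original code.

from collections import deque

class StereoBalancer:
    def __init__(self, window=8):
        self.recent = deque(maxlen=window)   # 'L'/'R' of the last few notes

    def place(self):
        left = self.recent.count('L')
        right = self.recent.count('R')
        channel = 'L' if left <= right else 'R'
        self.recent.append(channel)
        return channel

balancer = StereoBalancer()
print([balancer.place() for _ in range(8)])   # alternates roughly evenly L and R
```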

Notes I wrote on the box:

“Every good jazz and blues or bluegrass player does at least one train song. Here is mine. The train accelerates, and travels with all its intensity through many places. At first they all look pretty much alike because one’s attention is taken by the train itself, but at some point one becomes transfixed by the foreign and beautiful territories as they rush by. The train, however, goes right on by them, one’s only constant.”

“Realized on BTL GROOVE (Knowlton-Spiegel) system.” (Knowlton-Spiegel perpetual acceleration algorithm.)


Clockworks (March 1975)

This piece is dedicated to the many mechanical pocket watches and alarm clocks I have known, and above all to the Carfax clock tower in Oxford.


Dirge Part I (October 1974)
Dirge Part II

Same software logic as Patchwork, different data to express different feelings.


Music for Dance part I (March 1975)
Music for Dance part II

Originally commissioned by video artist Doris Chase for her 1975 dance video “Dance Eleven”, featuring Cynthia Anderson of the Joffrey Ballet, this music was rechoreographed and performed in 1982 by Muna Tseng and Dancers. As is the case for many works composed as accompaniment, this piece would be more powerful when experienced theatrically with choreography, but I thought it musically strong enough to stand on its own as a listening experience without dance. It used roughly the same FORTRAN IV program as “Pentachrome”.

The first movement is relatively classical, a large ABA form with a continuous-tone accompaniment to what might be mentally pictured as a percussion soloist, whose part doubles the melodic line and functions as the solo line's own orchestration.


Kepler’s Harmony of the Worlds (Feb-May 1977)

Known variously as “Harmonices Mundi”, “Harmonia Mundi”, “Harmony of the Planets” or “Music of the Spheres” (this last erroneously, because Kepler labored so long and hard to establish that the orbits of the planets were not circular but elliptical), this work is my realization of Johannes Kepler’s idea, published by him in 1618-1619. Kepler’s vision, and the relationships he worked out, were to make audible to humans, as music, the frequencies of our solar system’s planetary motions, a music that, he hypothesized, would otherwise be audible only to the ear of God. This realization was used as the first cut on the golden “Sounds of Earth” record on board the Voyager spacecraft (see “Murmurs of Earth” by Carl Sagan).

I have chosen the variant title used here to reflect my sense that, to Kepler, this work might not have represented a simple sonification of astronomical data. I speculate that Kepler looked for deeper meanings and implications, as would have been natural to one who at times was employed as an astrologer and whose mother was nearly burned as a witch. I think that Kepler just might have envisioned a truly universal harmony among diverse living beings as well. In Chapter 10, “Epilogue ... by way of Conjecture”, Kepler makes a case for the existence of life on other worlds besides the Earth. [Harmonies of the World, by Johannes Kepler, tr. Charles Glenn Wallis (1939), pp. 1084-5.]

This piece is a straightforward realization of Johannes Kepler’s idea based on astronomical data. The mapping of planetary data to frequency in this excerpt (the piece is potentially infinite in duration) shows very clearly the continuous time function model fundamental to GROOVE’s design. The selection heard here reproduces the functions starting on 0 January 1977 and running forward through time from that date at the human-perceived rate of 20 seconds per year.
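
The exact scaling used in the piece is not given here, so the following is only a rough sketch of the general idea in modern Python: map each planet's instantaneous angular velocity about the sun (fast near perihelion, slow near aphelion, per Kepler's second law) to a frequency, with time compressed so that one year passes in 20 seconds.

```python
# Hedged sketch of a Kepler-style mapping: each planet's instantaneous angular
# velocity about the sun is scaled into an audible frequency, with one year of
# orbital motion compressed into 20 seconds of listening time.
# The constants and scaling here are illustrative assumptions, not the piece's.

import math

GM_SUN = 4 * math.pi ** 2      # in AU^3 / year^2, so Earth's period is 1 year

def angular_velocity(a_au, e, t_years):
    """Angular velocity (radians/year) at time t after perihelion passage."""
    n = math.sqrt(GM_SUN / a_au ** 3)            # mean motion
    M = n * t_years                              # mean anomaly
    E = M
    for _ in range(10):                          # Newton's method for Kepler's equation
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    r = a_au * (1 - e * math.cos(E))             # current distance from the sun
    h = math.sqrt(GM_SUN * a_au * (1 - e ** 2))  # specific angular momentum
    return h / r ** 2

def planet_frequency(a_au, e, t_audio_seconds, hz_per_rad_per_year=10.0):
    t_years = t_audio_seconds / 20.0             # 20 seconds of audio per year
    return hz_per_rad_per_year * angular_velocity(a_au, e, t_years)

# Mercury (a = 0.387 AU, e = 0.206): audibly fast, warbling with each orbit
for t in (0.0, 5.0, 10.0):
    print(round(planet_frequency(0.387, 0.206, t), 1), "Hz")
```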


Wandering in Our Times (March-April 1975)

It was still an accomplishment in those days to get analog oscillators into, and keep them in, a proper musical scale tuning. So why on earth, and whatever possessed me, to wax all microtonal like this? The scale that I tried in this piece (or really, this texture, because these sounds did not seem to want externally imposed structuring) was at a resolution of 64 equal divisions per octave. The intermittent thunder-like sound that appears from time to time, insinuating that within this sonic space there may be things happening, a landscape, even beings, is the wonderful artifact that no digital reverb provides us any more: the ability to tap or bang on a metal-based reverb unit's housing to create audio effects.
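
For reference, 64 equal divisions per octave simply means that each scale step is a frequency ratio of 2^(1/64), about 18.75 cents; a minimal sketch:

```python
# 64 equal divisions per octave: each scale step multiplies frequency by 2**(1/64)
# (about 18.75 cents).  The reference frequency below is an arbitrary choice.

def edo64_freq(step, reference_hz=220.0):
    return reference_hz * 2.0 ** (step / 64.0)

print(round(edo64_freq(0), 2), round(edo64_freq(1), 2), round(edo64_freq(64), 2))
# -> 220.0, about 222.4, and 440.0 (one full octave after 64 steps)
```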

Liner Notes from the original 1980 Philo/Rounder Records LP of The Expanding Universe

LS: How would you describe your music?
LS: I wouldn't. People often ask me to do that, and it seems impossible. Music isn't verbal or conceptual. I try to get as close as I can to certain qualities, and I've found these in a variety of styles. I have also found they don't require any known styles.

LS: Well, if you won't describe your music, what's it for?
LS: This music is for listening, though I sometimes write music which is for the enjoyment of playing, instead, usually for piano or guitar.

LS: When I asked that, I meant what instrument is it for?
LS: It's composed specially for record players, and I made it on a computer.

LS: Then you've answered my first question, after all. It's electronic music.
LS: That's true, but that isn't a description of the music, so I still haven't answered your question. Electronics aren't a style or a kind of music any more than a piano is. They're a way of making sounds.

LS: You're being pretty evasive about what your music is like. Will it help to ask in what school of composition were you educated?
LS: A lot of people helped me learn. John Duarte, with whom I studied classic guitar in London, was the first person to encourage my composing and teach me some theory and counterpoint. When I told him I'd been writing music down a bit, he said, in that case, I was a composer, and if I wanted to become proficient at composing, I should practice by writing a piece every day, whatever I liked, no matter how short or simple, just like practicing the guitar. I did my best to comply. Writing every day turned out to be good training for professional composing, as composers have to be able to create music fast, for deadlines. Composing is active, not passive. You can't wait for inspiration. Later, at Juilliard, I was shocked at how students were allowed to work on a single piece all year, while I was paying my tuition by composing an educational filmstrip soundtrack every month.

LS: Who else did you study with?
LS: Aside from my main and most important teacher, Jacob Druckman, who also took me as his assistant and to whom I owe a lot, those who taught me the most include Michael Czajkowski who taught me to use the Buchla synthesizer in what was left of Mort Subotnick's studio at NYU, and Vincent Persichetti, and Hall Overton who each took time to sandwich into their busy schedules a free 5 minute lesson here and there. Max Mathews enabled me to have access to computers and to learn to use them for music. From Emmanuel Ghent I learned some very important ideas about the use of computers in composition. After I'd been classicized (I didn't start out in classical music), Wiley Hitchcock gave me a fellowship at the Institute for Studies in American Music, and helped me get back in touch with my non-classical musical roots, to remember who I am and am not. I learned different things from each of them, and from other people, but the most important thing they did in common was that they encouraged me to be myself and to keep going. I never really felt at home in the kind of conservatory atmosphere Juilliard generated.

LS: Not at home in what way?
LS: Conservatory students tend to be very young, and are too often there because they've been outstanding at something rather than because they love it and want to learn all they can. Having been child virtuosi, they may be snobbish. Or they may have acquired techniques without having anything self-motivated which they wish to use those skills for, and become either excessively concerned with technique itself or overly influenced by other people's ideas. When I was studying, I wasn't very attracted to the atonal, pointillist, or serialist schools of thought, which were still extremely dominant. I wasn't studying composition because I wanted to write like "contemporary" composers, but because I wanted to be more - and more skillfully - involved in music, had already found myself composing, and wanted to find or make more music that I could really feel close to, music like that which I loved best. I also wanted to make music (for example, the meditation piece THE EXPANDING UNIVERSE) which I had envisioned, which should have existed somewhere, but which I hadn't been able to find. I wanted to learn. I was regarded as musically suspect in that child-prodigy-oriented atmosphere, as it was known that I had started composing relatively late, having been an improviser who didn't even learn written notation until age 20. Some people were patronizing, and others just didn't take me seriously. The individuals I mentioned above were among those who were instrumental in keeping me from giving up on music altogether. Others would tell me that I was uneducated if I didn't use a key signature, and then tell me that I was either reactionary or unimaginative if I did use one. What I did have going for me was that I had developed my ear by playing (even if I didn't know the names for what I heard), and I already knew what my musical values were. I knew what I wanted to do and tried hard to learn how. And I did learn a lot there. Many students were hindered by their egos, afraid to admit not knowing something, acting as though anything they didn't know before they got there was unimportant. Still, I was very shy and intimidated by them.

LS: When you say you just improvised earlier, what do you mean?
LS: Though my grandmother gave me her extra mandolin when I was fairly young, I really had been most active as a banjo and guitar player. My sister and I used to sing old tunes in parallel thirds and sixths in the kitchen when we were kids. I loved the old mountain modal tunes best, and some of the shapenote music, but I rarely played anything unaltered. I made things up a lot and never bothered with the words to songs. John Fahey's playing was a revelation to me when I first heard it, and it exerted the strongest influence on me for years, until I discovered Ali Akbar Khan, Bach (the ultimate), and Shostakovich, Rimsky-Korsakov, Stravinsky, Schoenberg, Copland, and Dowland, all sort of one right after another (I later switched to the lute). I've always improvised, but at a certain stage, when I was living in a trailer near the Mississippi River, I felt I was in a rut, playing the same things over and over, so I decided to teach myself to read notes. I got the Bach Inventions and tried them on the guitar. The first measure took a whole day, but a year later, I could play several of them. As soon as I could read notes, I started writing them, too. I haven't been in a rut since.

LS: Isn't it rather an extreme switch from banjo, or even lute, to synthesizers and computers?
LS: All media in which you can work directly with the sounds have more in common with each other than with traditional European techniques of working with symbols on paper. Can you imagine painting a picture by having to write a set of instructions for someone else to paint it? But the problem with solo improvisation is that what you can realize is limited by your technique and the nature of your instrument. You can't do anything beyond what you can play yourself, no matter what you hear in your imagination. Non-solo improvisation has other problems, like finding other people with the same musical visions and sensitivities. Also the problems of communication and of whose musical tendencies dominate. I've always been a loner. I have clear ideas of what I want, and I don't want to compromise them in order to hear them. Technology has permitted me to independently realize conceptions I could never play solo, or realize in any other way, and to do this in complete privacy, so that I can experiment and make mistakes, and hear them, and learn more, faster. Traditional scoring doesn't work, economically, politically, or as a technique for learning to make music. I've rarely gotten to hear any of the pieces I've written for instrumental ensembles played. And that kind of writing was a major focus of my formal musical education.

LS: Do you think other people will begin to make music on computers for the same reasons?
LS: I don't think it's a coincidence that there seems to be a relatively high percentage of women, and other composers who the musical mainstream might discriminate against, working in electronic media. You gain a lot by being able to go all the way from idea to playing the piece for people without having to get support from established organizations. I started using computers during a period when it was necessary to have some sort of sponsorship in order to get access to them through large institutions, but at this point, computers are cheap enough for almost anyone, and they're likely to become a grassroots medium capable of great musical sophistication, and accessible to composers who, for non-musical reasons, may be unable to get an appointment with a conductor, let alone a performance by one. Much larger numbers of people than before will be able to realize musical conceptions of considerable complexity or subtlety, and in replicable forms.

LS: It seems odd that you speak of computers as a potential grassroots, almost a folk medium. A lot of people find computers and electronics intimidating.
LS: A lot of people find music pretty intimidating, too, you know.

LS: As a matter of fact, many people consider computers as cold and dehumanizing, the opposite of musical.
LS: Computers had a negative and dehumanizing image as long as they were only seen as inaccessible threatening tools of large bureaucratic organizations. They were popularly imbued with the characteristics of those organizations. Now that computers are increasingly becoming the personal tools of ordinary individuals, this image is changing. A major focus of computer development has been making them easier to use, too, developing more human-oriented languages and uses. I was lucky in that when I was 8 or 9 and might have gotten music lessons or a doll, my father gave me a soldering iron instead. I never studied computers or electronics formally. Hundreds of thousands of small computers are out there by now, largely in the hands of people who have also never studied computers, just like the many instruments played by people who never studied music. As yet, there is still not the ease of musical interaction with these little computers which people will need, unless they are as obstinate about getting music out of them as I have tended to be. But ultimately, these little computers will make it easier to compose, as well as to play music. There are far too few people creating their own music compared to the number of people who really love music. It's a much worse ratio than amateur painters or writers to consumers of those media, I suspect, and it's because until now, there has been only a very difficult technique for composing.

LS: Can you explain a bit better that distinction between composing and improvising music? And how computers affect it?
LS: A great advantage of computers is that not only can they be played, like instruments, but they also have memory, like paper, but infinitely more flexible. What computers excel at is the manipulation of patterns of information. Music consists of patterns of sound. One of the computer's greatest strengths is the opportunity it presents to integrate direct interaction with an instrument and its sound with the ability to compose musical experiences much more complex and well designed than can be done live in one take. With a computer, you can record what you improvise in such a way that it can be edited with complete freedom, which isn't true for tape recording. That's the advantage of composing over improvisation which I mentioned before. You just can't do the best work if you are limited to what you can do with your own performance means in the moment as it passes. Could the ART OF THE FUGUE or the B minor MASS have been composed in one take in real time? But many composed pieces do start with sounds spontaneously made up at an instrument and then written down and reworked.

LS: What led you to start using computers?
LS: I was very lucky, in that after I'd been playing with various kinds of analog synthesizers for a few years, and was discouraged by their simplistic patterns of control, the fact that they drift and can't be adjusted finely or the same way twice, so that everything has to be done in one take, I was given access to a system called GROOVE, by Dr. Max Mathews, who has been a pioneer in the use of computers in music, and has developed a variety of important approaches. This record was composed entirely on that computer system.

LS: Since you still haven't revealed much about the music itself, will you at least tell us a bit more about the computer instrument you used to make it?
LS: The computer played the actual sounds by controlling analog synthesis equipment. This was done using the GROOVE hybrid system, which was developed by Max Mathews and F.R. Moore at Bell Labs. GROOVE is an acronym for Generating Realtime Operations On Voltage-controlled Equipment. It's designed for the composition of functions of time. What it did was to permit the creation, storage, editing, and manipulation of a piece of music as pure patterns of change, over time, parameter by parameter. This is rather different from conventional musical notation, which records music on paper as descriptions of individual events, one by one.

LS: It sounds pretty abstract to just describe patterns of change.
LS: Actually, playing the sounds was the way I generally "described" them. I used a keyboard, a drawing tablet, pushbuttons and knobs which the computer monitored and recorded, and I wrote complex algorhythms (in FORTRAN) to process the data from these devices and derive from it much more complex music than I actually played. I listened directly to the resultant sounds all the time, which is definitely not abstract. I would enter music by playing or computing it, and then do a lot of editing and revision. I might start with an idea for an "intelligent" instrument and then play it a while, possibly an instrument incorporating a set of rules for melodic evolution. Some of the levels of the music which I "played" were pretty abstract. Even on a banjo, you don't consciously select every note. Sometimes pitches or rhythmic syncopations which you would never have thought of writing on paper get into a tune because of the right hand picking pattern you are using.

LS: What kinds of processes did you explore in these "algorithms?"
LS: I've been very interested in complex instruments on which patterns, rather than individual notes, can be played. I've used a knob to control the degree to which what I was playing at the moment would get intermixed with something I had played earlier, or to gradually expand the range over an increasing number of octaves, as I did in what I'm calling OLD WAVE here, which was the original non-instrumental opening movement of my ballet WAVES. I sometimes used weighted probabilities in evolving melodic lines, both for particular pitches and for the rhythmic beats on which they would appear (stronger or weaker beats), with these weightings changing continuously or at certain times, so that certain notes would dominate in certain sections. Or the computer would make a stereo polyphonic piece out of a single line which was either played by me, generated by it, or created in collaboration. The key to effective use of these very general levels of control lies partly in being able to go in and edit whatever results, changing individual things here and there, having absolute control over all the specifics of the material created, and partly in controlling these very general aspects by hand and by ear, as one would control specific notes in other methods of making music. It will be a long time, if ever, till we know enough about music and perception to automate such things completely.

LS: What was the drawing tablet for?
LS: I was composing functions of time, and these were stored as pure abstract changes, not inherently linked to aspects of sound. What I was composing was general and flexible. I adapted the GROOVE system to control visual material, which was a lot of work, but I was able to compose time structures for visual materials the same way I composed them for music. Instead of pitch, amplitude, timbre, I had location, hue, value, saturation, texture, and the same time-structuring, storage, and editing capabilities. I want to be able to play and compose images in time the same way that I can compose sounds. But that's another story.

LS: Specifically, what did you do by what means on this record?
LS: I hope that some of the enjoyment of listening will be to try to figure this out, so I won't say much more now. It'll all come out eventually.

LS: Then would you, at least, give us another example of something you controlled in a very general way?
LS: OK. In PENTACHROME I used a continuous acceleration. This is nothing new, as it works pretty much the same way as the one in Elliott Carter's VARIATIONS FOR ORCHESTRA. You can accelerate forever if, when you've reached double the tempo, you've dropped out every alternate beat. Ken Knowlton and I had collaborated on a previous computer version of this idea. In PENTACHROME, what I could control with the knobs was the apparent rate of acceleration (the amount of time it took to double the tempo), and the overall tempo at which this happened (the extremes of slow and fast that were cycled between). This was only one of many processes going on in the piece. Stereo placement (voicing) was automated, too, except for the percussion voice, which just doubled the melodic line. I did the timbral changes completely by hand.

LS: When you work on music, you aren't really just thinking about processes, are you?
LS: My pieces are most strongly concerned with feelings, actually, but no matter what I feel, my mind is always active. Every piece is different, and I suspect that every good piece has all the aspects of being human in it which are integrated into its creator, probably in the same balance. Each piece I do reflects what's happening in me at the time I create it. Sometimes a particular idea or emotion will dominate my awareness while I'm working, but the rest of me is still acting on the piece as I work. The intellect is a great source of pleasure, and wants expression just as the emotions do. They are not really separable. PATCHWORK is an example of a piece made to express a light positive energy directly counter to the emotional chaos of most serialism and the introspective heaviness of atonal expressionism. But because I do enjoy structure in music, and love counterpoint, the computer program I wrote for that piece had all Bach's favorite contrapuntal manipulations - retrograde, inversion, augmentation, diminution, transposition - available on switches, knobs, pushbuttons and keys, so that I could manipulate the 4 simple melodic and 4 rhythmic patterns with them in the same way that a player of an instrument manipulates individual tones. (I did edit it a lot, too.) I admire Bach the most of all because he had strong structural concepts, intricate and ingenious, but was always full of emotion, imagination, physicality, spirit, and a never ending stream of new and different ideas. I want to put as many aspects of myself into music as I can too, as much as possible of being alive, intensely conscious on all levels. This record, of course, only represents one period I went through, only explores a certain range of feelings, concepts, materials.

LS: You have referred almost entirely to folk and non-contemporary composers. Why are you so often described as avant garde? How do you regard yourself relative to others who are described as avant garde?
LS: I'm thought of as "avant garde" partly because I use new media and techniques which have yet to come into common use, though I think they will, partly because this music seems to actually be different, and partly because I'm one of the composers who've tried to bring back to composed music of purely aesthetic (non-commercial) orientation a greater continuity and accessibility than has been common in recent decades. Each piece has some clear one-time-only concept which I wanted to hear and hadn't found already composed. Relative to some of my colleagues, I have tended to use more continuity, less literal repetition, not to depend on structures which had to be studied to be heard. I suppose the rates of change within and between my pieces are about halfway between the atonalists and the minimalists. I've tried to find a balance between predetermination and spontaneity, and to compose simple materials into complex relationships. I like to find relationships among things which are not obviously related, such as scientific and artistic methods and tools, or classical, folk, and ethnic musics, or images and sounds.

LS: It looks like you've finally been tricked into beginning to describe your music after all. Would you be more specific about these individual pieces?
LS: Wouldn't it just be a lot easier for you to listen to this record?