A composer looks at computers
COMPUTERS are now being used to substitute for traditional methods in making music, in all the steps from inception to the listener's ear. Probably the hardest part of this for us to imagine is a computer taking over the job of composing the music. When we think in the abstract about a computer writing music, it seems spooky because we're unsure about how we ourselves do creative things. There's no mystery about how the computer solves this or any other problem, though.
It is stupendously uncreative.
To solve a problem using a computer, one first designs an algorithm, a step-by-step strategy for solving that problem. Computer programs that compose music use algorithms that combine the author's set of musical ``rules'' with the computer's ability to simulate the generation of random numbers.
To imagine this, it's useful to think of composition as a game of picking out notes where possibilities are first limited by what's come before, and then a final choice is made randomly. For instance, if the previous two notes moved a long distance, then the next one must either move a short distance or stay the same, and we choose by ``throwing the dice'' with the computer's random number generator.
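To make the game concrete, here is a minimal sketch in Python, purely illustrative and not drawn from any particular composing program, of that rule-plus-dice idea: candidate notes are first filtered by a rule about what came before, then one is picked at random.

    import random

    # Illustrative only: one rule plus a dice throw, in the spirit described above.
    SCALE = list(range(60, 73))      # MIDI note numbers spanning one octave

    def allowed_next(melody):
        # Rule: if the previous move was a long distance (more than 4 semitones),
        # the next note must move a short distance or stay the same.
        if len(melody) < 2:
            return SCALE
        if abs(melody[-1] - melody[-2]) > 4:
            return [n for n in SCALE if abs(n - melody[-1]) <= 2]
        return SCALE

    def compose(length=16):
        melody = [random.choice(SCALE)]
        while len(melody) < length:
            melody.append(random.choice(allowed_next(melody)))  # throw the dice
        return melody

    print(compose())

Loosen or tighten the rule in allowed_next and the character of the output changes accordingly.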
These programs can do a credible job using, for instance, variations of species counterpoint, the traditional system of rules, codified in the 18th century, that governs the relationships among different musical voices.
As the rules are loosened up, the result is increasingly ``aleatory,'' meaning that the music is based more and more on chance. Programs that make new music in this way are interesting, but not very popular.
What are popular, and beginning to proliferate among musicians, are ``sequencer'' and ``sampling'' systems. These are combinations of hardware and software that don't compose music, but that hook up personal computers and synthesizers to create, record, modify, and control musical sounds, performances, and compositions, and to print out scores and parts.
The computer in this case is substituting for the pen and paper, the instruments, the musicians, and the recording studio: everything but the composers themselves.
In order to imagine that a machine can do anything with music, it is useful to think about what music is made of.
Since music happens in time, we can speak of its basic building block as being an event. Just as letters build words that build phrases that build statements etc., musical events (notes or rests) build melodies, songs, and so on.
Any event has four primary characteristics of interest: pitch, timbre, duration, and loudness. With these parameters (variables or descriptive categories that define the events), we can tell what note it is, what it sounds like, how long it lasts, and how loud it is. The computer stores this information as discrete numerical values that can be converted into voltages, which are then converted to sound in the familiar way by amplifiers and speakers.
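As a rough sketch (the field names and number ranges here are assumptions for illustration, not any particular product's format), a program might store each event as a little bundle of four numbers:

    from dataclasses import dataclass

    @dataclass
    class Event:
        pitch: int     # which note, e.g. a MIDI note number 0-127 (middle C is 60)
        timbre: int    # what it sounds like, e.g. a patch number selecting a voice
        duration: int  # how long it lasts, in clock ticks
        loudness: int  # how loud it is, e.g. a velocity value 0-127

    # Three events forming the start of a melody.
    melody = [
        Event(pitch=60, timbre=3, duration=240, loudness=90),
        Event(pitch=62, timbre=3, duration=240, loudness=80),
        Event(pitch=64, timbre=3, duration=480, loudness=100),
    ]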
A revolutionary part of the new technology useful to composers and producers is the technique of digital sampling. Although in its infancy, sampling already can give musicians the kind of control over sounds that has to be heard to be believed.
Sequencer software makes a digital record of a composition or a performance. Sampling makes a digital recording of the sound itself.
Once the sound is recorded, it can be manipulated in a number of ways, the most useful currently being to alter the pitch and duration.
This means that we can play, throughout seven octaves of the keyboard, with our hands or with our sequencer, anything from a dog barking to the actual sound of someone from the Boston Symphony playing an orchestral instrument with the exact tone produced by his or her technique and sensibility.
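For the curious, the arithmetic behind that keyboard trick can be sketched in a few lines. This toy version simply plays the stored samples back at a new rate, so, as on early samplers, the pitch and the duration change together; the nearest-neighbor resampling and the toy waveform are illustrations, not a description of any particular instrument.

    # Transpose a recorded sound by n semitones by reading it back at a new rate.
    def transpose(samples, semitones):
        ratio = 2 ** (semitones / 12.0)    # equal-tempered frequency ratio
        length = int(len(samples) / ratio)
        # Nearest-neighbor resampling: step through the original at the new speed.
        return [samples[int(i * ratio)] for i in range(length)]

    original = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5] * 100  # a toy waveform
    up_a_fifth = transpose(original, 7)        # seven semitones higher, and shorter
    down_an_octave = transpose(original, -12)  # twelve semitones lower, and longer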
Is this moral?
Is it legal?
Can a person license his ``sound'' as intellectual property? When we use a musician's sound or a vocalist's voice in this way, are we stealing a person's soul?
The law has some catching up to do, but it seems that in music as in any other endeavor the history is clear: When something new comes along, people use it and it becomes part of the form. The implications for nuclear weapons notwithstanding, this has usually meant hardship, followed by great good.
It was disconcerting when the fiberglass pole took over pole vaulting, but the result is a more exciting and beautiful sport. Cars are a more productive part of the economy than horses were. There was great resistance to the pianoforte at first, but it is now a universal standard for musical expression.
Will computers render musicians obsolete like wheelwrights? Will pianists have to be retrained like factory workers whose jobs are taken by robots?
The answer emerges with diamond-like clarity: ``Yes and no.'' Drummers have lost thousands of hours of studio work to computers in the last couple of years. Every advance in microtechnology increases the computer's ability to mimic human players.
But even though it has already become a versatile and powerful musical instrument, the computer isn't ready to thrill us in live performance yet.
We've come a long way since the early electronic music pioneers of ``musique concrète'' would come on stage dressed in white lab coats and, as the last human act of the performance, turn on the tape recorder. Or have we?
Peter Bell and his partner, Peter Johnson, make use of the new technology in their Cambridge, Mass., studio, Musitech, writing and producing original music for television commercials and shows.