Conversation with Jim Aikin, 2001

An Interview with Robert Rich

One of the benefits of working at Keyboard was that I used to get lots of free CDs in the mail. (All things considered, that may be the aspect of the gig that I miss the most.) It was astonishing to see the diversity of musicians’ styles and obsessions. Not all of what landed on my desk was worth hearing, but there was always some great stuff hiding in the pile. I got to cherry-pick the most interesting discs and take them home.

That’s how I ended up with a dozen Robert Rich CDs in my collection.

If you like categorizing things, you could call Rich an ambient or new age artist. His music tends to be slow-moving and spacious. Some of his CDs — Below Zero and Inner Landscapes, for instance — are devoted entirely to almost motionless washes of pure sound, but the sound isn’t necessarily pretty. Parts of Inner Landscapes are almost frightening in their intensity.

The other side of Rich’s oeuvre (CDs like Propagation and Seven Veils) is more rhythmic. I wouldn’t call it “world music,” because the sonorities aren’t derived from any specific non-European culture, but both Elvis and Beethoven have definitely left the building. Nontraditional instruments like reed flute, hand percussion, and sitar play prominent roles, but synthesizers and assorted digital effects are essential to his sound. Stately rhythms percolate along underneath gorgeous clouds of sound. Unusual tunings, both Asian and homegrown, give many of his pieces an exotic air.

When I read the press release on his recent (March 2001) DVD-only release, called Somnium, I figured I could probably talk the boss into letting me do a short feature on it. I was right; the feature was scheduled for our June ’01 issue … and then, due to a bit of inadvertence in the production department, it didn’t run.

As embarrassed as I felt when Robert sent me a tactful “by the way, when did the article run?” email, the snafu actually turned out to be a good thing. His next CD, Bestiary, had just been released, so I got to interview him again. The interview (edited down a bit for publication) eventually appeared in the November 2001 issue. The Q/A session below includes both interviews in a more complete form.

The “news hook” for the original piece was to be that Rich had to overcome some nontrivial technological obstacles to put seven hours of music on a DVD. Pushing the boundaries of technology is what Keyboard is all about, but the real reason I wanted to do the feature was that I wanted an excuse to visit Rich’s Mountain View studio and sit down and talk with him about music. We talked about Somnium and his occasional Sleep Concerts, but the conversation ranged too widely and went on too long to fit into my allotted page space, which is why the bulk of it ended up on MusicWords.

Like a lot of artists who work outside the mainstream, Rich has to find ways to supplement his music income. He mentioned both doing sound design (he has developed samples and preset libraries for E-mu and Sonic Foundry) and mastering other people’s CDs. If you have to have a day job, mastering is probably close to ideal — his studio, though modest on the outside (it’s built into a converted two-car garage), is equipped with a Mackie Digital 8 Bus mixer, and sports three computer screens side by side, two running Cubase and the third the Mackie control software. I didn’t ask what kind of monitors we were listening to, but they sounded awesome.

Stacked neatly in the corner behind the baby grand piano were several open-holed reed flutes and an assortment of hand drums. But the tracks for Bestiary that he played for me used rhythms generated on a large analog modular synthesizer. His keyboards — a Prophet-5, a Korg Wavestation, an Ensoniq ASR-10, and a DX7II — are all charmingly obsolete, acquired over the years because they had decent support for alternate tunings.

When did you start doing sleep concerts, and what inspired you?

The first sleep concert was in February of 1982. I was a freshman at Stanford. I was interested in the way that music could interact with states of consciousness, to be part of, like, a trance ritual, shamanism, that type of thing. I was not yet a psychology student, I hadn’t yet decided that was going to be my focus in school, but there was this common interest — sleep research, altered states of consciousness, experimental music.

And there was some work of other people that I was very interested in. I was looking a lot at Indonesian music — not formally studying it, but I was interested. And I discovered the Wayang puppet plays in Java, where typically the performance, the enactment of the Ramayana, would go on all night long. These would be eight-, ten-hour-long concerts. I wondered what sort of state of mind the villagers must be in, hanging out in this musical environment going on all night long — children running around, falling asleep, families just part of this whole experience.

Also, I had read about Terry Riley’s all-night concerts in the ’60s. He used to play organ improvisations for six or eight hours. And there was a Fluxus artist who used to fall asleep himself in a hammock with EEGs hooked up to his scalp and whistles in his mouth, generating music from his brain waves while he slept. I thought, that’s kind of interesting but sort of boring at the same time. It would be much more interesting if the audience was asleep and the musician was somehow playing to their brain waves rather than this kind of empty artistic statement. I had also heard a performance by John Cage and Maryanne Amacher sometime around 1981, called “Empty Words,” where Amacher was playing these very neutral drones of extremely low-volume ambience while Cage read randomizations of Thoreau’s Walden.

All of those things kind of clicked in my head, and I realized what I could do was use some of my growing interest in ambient sound — I was making a lot of nature recordings, and I was making these patches with a modular synth that would just kind of run all night long. I would just live in this environment of strange noises. I was discovering that there was a new way of listening with that extremely long duration. In order to introduce that way of listening to people, here was a way to get them to stay in the same place for hours on end without actually expecting to be entertained. Because it would change their expectations if I said, “Bring a sleeping bag. Bring a pillow. Fall asleep. It’s an environment of sound.” That way they’re not expecting something active. By completely removing expectations, I could create a new kind of experience — a sort of ritual.

Sleep Concerts are not the only thing you do, though.

At the time I was playing in bands, doing weird improvised industrial noise. The group I was in at the time sounded like a cross between Throbbing Gristle and Fripp and Eno, I guess. It was pretty strange. And I was beginning to work on my tuning experiments. A lot of my other music is very melodic, rhythmic, active stuff. This was just an ongoing interest in more of a sound environment installation kind of approach. So it’s been an ongoing thing. Ironically, although I was playing in bands at the time, the first album I put out was inspired by the sleep concert stuff. It became the one thing I did that got the most notice, I guess because of the extremity, its unusual qualities.

How many sleep concerts do you tend to do per year?

It starts and stops. From about 1982 to ’86 I did fifteen or twenty. Then I stopped doing them because I came down with mono, actually, and I just couldn’t pull all-nighters anymore. These things were exhausting. I would be awake for 40 hours straight: setting up the equipment, being alert and performing well for eight or nine hours, and then tearing down the equipment and going home. That can wipe a person out. So I just ran out of energy. I didn’t do it again until around 1995, when a music director at a radio station in Irvine kept pestering me, as many other people had, asking if I was going to do them again, because it was always unusual and interesting to people. For some reason — I guess he was persistent enough — I said, well, I don’t think I want to do them live with an audience anymore, but what about on the radio? He said, yeah, we can arrange that.

So he set aside a chunk of time on KUCI and I did an all-night sleep concert on the radio. There were also about 15 students who camped out in the lobby. And that felt good, for some reason. It worked, in a different way. So in 1996 I went on the road and did about 30 concerts in three months. So overall I’ve probably done maybe 40 or 50.

In a radio concert I imagine you’d be able to put on a pre-recorded section and take a break. How much of a live performance is actually performed, and how much is pre-recorded?

Because the nature of the sound was often ambient sound…. It was often recordings of places on tape. Typically, underneath these ambient recordings would be loops set up on long delays, at the time using the Electro-Harmonix 16-second delay; I would play into it and loop it, or do a drone on the Prophet-5 or a modular synth. There were a lot of things that were sort of created in slow motion. In the ’90s when I was doing them it would be bringing in loops on a sampler, holding keys down. It’s not active music, it’s texture. Extremely, extremely slow. So yeah, if I have to go to the bathroom I can set some of these processes going and take a break.

But you are actively involved in the process. Do you know in advance what the structure of the night is going to be, or is the whole thing improvised?

There’s a map, but the map can always be shaped to the moment. There’s an interesting thing that happens when a whole bunch of people sleep together. They tend to cue in to each other’s activity levels. There will be moments in the night where it’s just deep sleep. The energy that’s in the room, I’ll have to dive down with it and be very quiet. Other times there’s a sense of agitation or motion, and I can bring the level up a little bit.

Are you planning to do any more Sleep Concerts in the near future, or was the DVD more or less a one-off project?

The DVD might be an attempt to cut the whole thing off at the pass and say, “Okay, there! It’s done.” [Laughs.] “If you want a sleep concert, just put this on.” No, it does come back to haunt me from time to time. People are still interested in the group experience, because it is very different from the private experience. I probably will do some more. I’ve been talking with SFMOMA [the San Francisco Museum of Modern Art], actually, about possibly setting up an installation.

Let’s talk about the DVD. On your website you discuss a number of technical challenges that you had to overcome in the course of creating it. What was the most frustrating thing for you about the project?

I think the fact that none of the software designers had ever thought that somebody would want a continuous soundfile that was four and a half gigabytes long. Even the newer operating systems, both Windows and the Mac, have limitations [that require] a single file [to be] less than two gigabytes. Even though the Mac’s HFS Plus supports up to, I don’t know, a terabyte or two of volume, the file size limitations are still built into the software.

I found a bug in Pro Tools: it would not create a bounce longer than two gigabytes. So I’m thinking, what if I want to do a film that’s longer than two hours? Hasn’t anybody done that? Sometimes it’s flabbergasting to me that people haven’t tried these things. It reminded me of, back in the early ’90s…. I tend to find bugs in software, when I’m trying to push a limit. I was helping a friend, Carter Scholz, do a piece using Deck, and we were trying to do a five-minute fade. It crashed the program. I called up the company and talked to a guy who became a friend of mine later, Tom Dimuzio. He said, “Oh, I guess nobody’s tried that before.” Well, last year when I was trying to bounce the final mix files in Pro Tools, who’s working at Digidesign but Tom Dimuzio? He’s going, “You know, you’re always doing this kind of stuff! What is it about your stuff that’s always breaking the software?” [Laughs.]

So I couldn’t actually send the authoring house a continuous file. Somnium is one piece of music; it doesn’t actually stop. There’s no silence. And so I had to create crossfade points and assemble the final master as three different files. And then … I had set the duration so that it would fit on a DVD. I decided on the maximum length I could fit, 4.7GB at 48kHz. 48 is the sampling rate of a DVD; the entire project was done at 48. Well, it turns out there’s all this file overhead stuff. Not only is there navigation software that’s required on the DVD, but DVD video also requires a video frame to correlate with every audio frame.

So it was looking like even with the most compressed video that they could come up with, which was like 8 x 8 pixels with a compression routine that was so great that basically the pixels filled out the screen, the video still took up about half a gigabyte. That cut the audio down to about 4.3GB. So I actually had to AC3-compress a small section of the audio. I didn’t like the sound of the AC3 compression, so I had to pick a region to compress that wouldn’t be so damaged by it. The beginning and the end really didn’t sound good.
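For readers checking the arithmetic: uncompressed PCM at DVD’s 48kHz rate consumes a fixed number of bytes per second, so the roughly 4.3GB left after video and navigation overhead caps the uncompressed audio at a bit over six hours. The sketch below assumes 16-bit stereo, which the interview doesn’t actually specify:

```python
# Rough capacity check for uncompressed PCM audio on a DVD.
# Assumes 16-bit stereo at 48 kHz; the actual disc encoding may differ.
SAMPLE_RATE = 48_000      # Hz, the DVD sampling rate mentioned above
CHANNELS = 2              # stereo
BYTES_PER_SAMPLE = 2      # 16-bit

bytes_per_second = SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE  # 192,000 B/s

audio_budget = 4.3e9      # bytes remaining after video/navigation overhead
hours = audio_budget / bytes_per_second / 3600
print(f"{hours:.2f} hours of uncompressed audio")  # about 6.22 hours
```

Under those assumptions, seven hours of music cannot all fit uncompressed, which is consistent with Rich having to AC3-compress one section.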

What is it that you didn’t like about AC3 compression?

Two things in particular. One is that it mushes up the ambient soundstage. This music is almost entirely space. There’s almost no foreground; it’s almost all background. The soundstage, the inter-channel phase difference, the openness of the reverbs, the sense of depth, gets congested. It gets clotted. What digital compression does is, it looks for what it thinks are the most important parts of the audio signal, and it subtracts the rest. Well, in this case, the audio consists almost entirely of what it would typically be subtracting. So it’s pretty much the worst-case scenario for AC3 compression, as it is for MPEG compression.

The other thing I didn’t like was there was something getting weird about the high frequencies. Certain very stable high-frequency information was turning into swishy sounds. What was “ffff” became kind of “shwk-shwk.” Swirling sub-noises.

Are you planning to release the CD version containing the excerpts?

The CD is just so it can be played on radio, and so reviewers who don’t have DVDs can hear it.

The DVD is the commercially released disc.

The music doesn’t really suit another concept. It’s not interesting as short pieces. What’s interesting about it is the duration and the space it creates.

What equipment did you use to create the music?

A lot of it comes not even from electronics — electronics at one level, but not synthesizers. Different feedback networks processing acoustic sources. There are sections where I created a network — I had a different mixer at the time, a Soundcraft, and there were a lot of reverbs and delays going into the mixer and matrixing with each other. And then you take a bit of a sound, it doesn’t really matter what sound, just a seed sound, it could be a drone off of a synth, it could be a bit of a loop on a delay or something like that, and essentially all of the devices start processing each other. After a while it just becomes this entropic decay of self-processed noise. And because some of the algorithms in the feedback network are spatial algorithms, like reverbs, it becomes a spatially synthesized cloud of unpitched information. Using EQ, you can inject pitch by adding resonant peaks to the cloud. So with the long loops, you can start creating overtones of resonances.

Is this something you do live at the mixer?

Right. Actually what I’ll be doing is creating these networks, running DAT for two hours and shaping these clouds of self-processed noise, and then taking excerpts that might work; maybe a 20-minute excerpt might be part of the [final] recording. The seven hours of material are assembled from these kinds of raw materials. Other sources — one of the more interesting ones that I mention on the website is some electric fish.

Did you have to pay the fish union scale?

I’m not sure they’re union. I could probably get in trouble for that. There’s a species of fish that lives in the Amazon. It’s a fresh-water fish, a drab gray fish about two inches long. But it has sensors on its skin for electric fields. And it also has emitters. Part of its communications system is with oscillating frequencies. It sets up pitches of oscillating electric fields around it. They only propagate a few feet, because it’s in fresh water. The electric signal doesn’t spread the way it would in salt water.

Each fish has its own characteristic frequency, which will range from about 300Hz to 600Hz or something. It’s kind of a sawtooth wave, it’s a grainy sound. They move around in the river, and when two get close to each other, they sense each other’s frequency, and they use that as a territorial marker. If they have the same frequency, one of them will eventually shift to a different frequency. It’s almost like a little game they play.

It’s like birdsong, although probably not as interesting to listen to.

Right. Or … there’s a Cornelius Cardew piece called “The Great Learning,” where singers are walking around a room doing almost the same thing, except the opposite. They’re actually trying to repeat pitches that they hear. I’ve been in a performance of that. It’s a fantastic piece, because it uses the fact that untrained singers will be out of tune. Or when they’re trying to match the pitch of somebody next to them, they’ll accidentally sing a fifth. It uses that to create beautiful harmonies, or dissonances.

The fish is kind of like that, except that it’s trying to send a different pitch. So I was down at Scripps [Institute of Oceanography] visiting my cousin, who’s getting a Ph.D. down there, and my cousin introduced me to a friend of hers who is doing research on this fish, trying to understand the electrical sensing mechanism. I thought, “Hmm. This might sound interesting.” So we took about a dozen or 16 of these fish and put them in a big ten-gallon bucket with differential electrodes, plus, minus, and ground. We put the electrodes down into the bucket and connected them to a polygraph amplifier, and plugged it into my portable DAT machine, so it was direct electrical current. It never even saw the air: We just recorded directly from the fish to the DAT.

The sound was very buzzy and midrangey, so I processed it. I did some speed manipulations to it, and some filtering to take off some of the high frequencies. But that creates about a 40-minute section of the piece.

I noticed that in the first piece on the review CD there were some unusual tunings.

A nice 7 over 4 interval.

I thought I was hearing an 11 at some point.

You were. Good ears! This flute. [Grabs an open-holed wood flute from the corner of the studio.] Seven over four is actually this interval. [Plays.] There are a bunch of sub-harmonic intervals based on 11 and 13. It’s an aliquot undertone scale. And there’s a 7 over 4 in there as well.
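A note on the ratio talk: just intervals like 7/4 and 11/8 are usually compared in cents, where an octave is 1200 cents and an interval’s size is 1200 × log₂(ratio). A minimal sketch, using the two intervals mentioned here rather than Rich’s actual scale:

```python
import math

def ratio_to_cents(num, den):
    """Size of a just interval in cents (1200 cents = one octave)."""
    return 1200 * math.log2(num / den)

print(f"7/4  = {ratio_to_cents(7, 4):.1f} cents")   # ~968.8, a flat minor seventh
print(f"11/8 = {ratio_to_cents(11, 8):.1f} cents")  # ~551.3, between a fourth and a tritone
```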

Do you record a flute track dry and then use it as source material, or do you play into the effects in real time and record the output?

The sections with the tuned material, usually I’ll be either creating long delays with the flutes and then half-speeding them, processing them that way, in non-realtime, or actually tuning samplers and synthesizers to the flute. I’ve got the software I wrote a long time ago called JI Calc, which is on the Mac. It’s a HyperCard stack that I wrote back in 1987.

And you still have a Macintosh SE to run it on?

No, actually it’ll run on the newer systems, still. It’s astonishing. It still works. Carter Scholz helped me rewrite the program in the early ’90s to put a whole ton of X-commands in it, so it’s basically all running in C now. It’s pretty fast. It can download MIDI files to synths. So I can sit there and work on the ratios of the tunings and listen to them. If it’s a flute or something, I can actually reverse-engineer the tuning of the flute, and tune the synthesizers to it.

Do you use the tuning table in the Korg Wavestation?

All of them. You’ll notice all of the synths I have here are tunable.

Do they all have just 12-note tuning tables, or are any of them full-range?

The DX7 and the ASR-10 and the Proteus modules all have full-keyboard tunings.

Which Proteus modules do you have?

There’s a prototype World module there, because I did a lot of presets for the World module. A lot of the Proteus 3 samples are mine. In payment I got the prototype.

The Proteus 2000 has 12 full-range tuning tables.

Do they still support the MIDI tuning dump? Because when I was doing sound design for them with the Morpheus, I talked them into supporting the new tuning dump.

I don’t know offhand, but why would they throw it out? They’re still supporting 19-tone equal temperament, which nobody uses. It’s a wonderful tuning if you like equal temperament.

I don’t. [Laughs.] I prefer JI. I just love overtone series tunings. I love 7th harmonic intervals, 11’s sometimes. In the context of a harmonic ladder, 11’s are beautiful. 11’s aren’t so great in difference-ratio tuning, with more complex intervals. But if you’re in all-harmonic intervals, where every denominator is a power of 2, then generally the 11’s are very nice.

Does it ever bother you that the resolution of the tuning tables on some of these instruments won’t let you get quite to the tuning that you want?

I’ll be honest. I’m really a pragmatist when it comes to this stuff. Yeah, as a purist it does bother me. Especially when I’m trying — there are certain effects that I like to get with overtone series tunings, where I have a harmonic scale of zero through 60. The 61 notes of the keyboard will just be the entire harmonic series up through 60. And on those, if you’re trying to get certain effects with difference tones and clusters up in the upper harmonics, then the lack of tuning resolution really makes it fall apart.

* * * * *

Where did the inspiration for Bestiary come from?

It came from several different places. One is a new piece of gear in the studio, the MOTM [Synthesis Technology] modular. I grew up with modulars, basically. That was my first instrument. PAiA home-built things, a bunch of Curtis chips and stuff like that. So it was really a reawakening to the stuff that got me into electronic music in the first place, which I had sort of forgotten about, really, over the intervening ten or 15 years of MIDI.

What that reawakened in me was a love for pure sound and for very surrealist kinds of soundscapes. That connects in with a part of my imagination that generally aligns pretty strongly with the paintings of Yves Tanguy or Joan Miro, that sort of surrealistic landscape of bizarre but sharply delineated characters, like lifeforms that aren’t quite nameable. I really wanted to pursue a sound … kind of find an equivalent for that imaginary realm, but in sound. In part it was kind of an answer to a quote of André Breton’s, back in the ’20s. He basically didn’t like music, and I forget the exact quote, in fact I’ve been trying to track down this quote for about a year now, ever since I decided to answer it, but it’s essentially along the lines that music could never express a surrealist idea because music was by its nature abstract but based on rules, and that surrealism needed to be concrete, but based on no rules. That’s a paraphrase of what he said.

And I thought, “Well, that’s ridiculous. He obviously didn’t — he must not have even been aware of the work of [futurist composer Luigi] Russolo, who was a contemporary of his. Or people who came along just five or ten years later.” So it was kind of a joking answer to that, even though obviously nobody needed to answer that quote anymore, because it was many years ago and irrelevant anyway. But we all need something to start with.

It sounds as if the visual component is a really strong part of the music.

Yeah. In part, each sound on the album is something that I wanted to make me laugh. Some of the work I’ve done in the last few years is very abstracted and blurry. Kind of large, big strokes of a brush. I wanted to do something that had real shape to it.

There’s more detail, in some sense.

Yeah. Even though it’s unnameable detail. You can’t quite … none of the sounds are recognizable, but they’re all definitely independent.

Let’s talk about how you produced the album. When did you first become aware of the MOTM system? Is this something you heard somebody else using?

The head of the company introduced himself to me around the middle of last summer, and wanted to get me acquainted with his synth, because I guess he’s a fan of my music. It turned out I ended up doing a review of it for a competing magazine [laughs]. During the process of doing that review, I became enamored of it.

I think you said you have some custom modules.

I’ve become friends with the guy who builds the synths, a guy named Paul Schreiber. So I’ve been sharing with him some of my opinions. Not on Bestiary, but right now I’ve got in there a prototype of an MOTM-compatible Hellfire Modulator from Metasonix. And I’ve got a modified Wiard Waveform City in there, from the Wiard Modular.

Could you describe the process of how you recorded the sounds on Bestiary?

Actually, the sounds started out with the intention of being on an Acid Loops library. I did a loop library for Sonic Foundry two years ago called Liquid Planet. That’s mostly my more organic sounds. Some of the more ethno-music stuff that I used to do in the ’90s.

They had expressed interest in doing a follow-up loop library, and I had suggested maybe some abstract electronic sounds. They thought that was a great idea, so I started working on these MOTM-based, really bizarre noises. And going back and forth with them with these new sounds, they really didn’t like them very much. They kept trying to make me do something that was more like techno/dub, and I’m like, “Everybody’s doing that! Who needs that?” They just didn’t think anybody would be able to use the noises I was making, which were just so weird. And I thought, “You know, I could go around and around with this, and try to give them something they want.” But I was starting to get inspired by all this stuff, and I thought, “Why give this stuff away? This is really my voice here.” So that became the basis of the album.

So the modular was used as the source on a lot of the tracks. Was there a typical setup that you used? Starting things and letting them run and recording them into Cubase? How did you go through that whole process?

Different ways. Sometimes I would be patching away, and something inspiring would happen, and … I was trying to do the album entirely in 24-bit resolution, and I have a Tascam DA-78, so I would just roll tape, and then take excerpts from those tapes and edit them into Cubase, using a Mackie D8B for conversion, bouncing back and forth between the tape and Cubase VST 5.0.

Is there any particular reason why you recorded to tape rather than going directly into Cubase?

The convenience of being able to run, like, an hour’s worth of noodling around and then rewind it and keep a little bit of it. But have it all backed up. It’s a little bit more reliable. And it’s less maintenance than keeping track of all the files on a hard disk. It backs itself up; it’s its own backup medium, really. And the eight tracks of DA-78 tape could hold four hours worth of stereo experiments. That’s more than I could hold on all the hard disks, once I came up with an album’s worth of stuff.

And also, it allowed me to record very quickly, without having to think about it, multiple tracks and then do a submix down to the computer. And still have the dry tracks all backed up. It’s just an archiving convenience, really.

So did you mostly record the modular dry, rather than running it through effects?

A lot of the time I was recording it wet. Just getting the sound I wanted right away. It’s a bit like a guitarist working with effects pedals or something. You’re getting your sound right there, and you’re working with the sound, which is partially effects.

What kind of effects were you using?

Mostly Eventide H3000, and Sony R7 reverb and D7 echo, TC Electronic M3000, and various old retro DDLs and flangers and stuff. I still have my old Delta Labs 1024, which is still one of the best flangers.

Typically did you transfer long segments into Cubase? The album is distinct from a lot of electronic music in that it doesn’t sound looped. It sounds as if something is going on for a longer period of time.

No loops were damaged during the making of this album [laughs]. There are no loops, it’s all performances. And often, like, eight or ten minutes long, which then get woven in and out. I really wanted the feeling of a living, breathing continuum. Something expressive, in the moment. I really get sick of the loop-based, sample-based, static qualities of a lot of the stuff these days.

When you were recording one track of synthesizer that ended up on the CD and then you were recording something else, were you more or less aware that the two might go together, that they were going to be in the same key or something? Or was this all just a random process of recording stuff and then seeing what worked?

A bit of both. Some of the sounds were just complete experiments — set up a lot of chaotic weirdness, submodulating feedback patches and stuff like that — and just recording the noises and then finding where they fit. Then other parts were definitely frameworks based on microtunings and rhythmic motifs.

Most of the tunings on the album are harmonic-based, either harmonics 12 through 24 or harmonics 1 through 60, microtonal. Those things are definitely built into a harmonic framework. Primarily that means the second track on the album and the final track, which are the only two that are really tuned; a lot of the others use very weird, random tunings. Those two tracks are based on harmonics 12 through 24, a 12-note-per-octave tuning.

The lead lines in the first and second tracks on the album are actually clouds of oscillators, like eight sine-wave oscillators within about a semitone of each other making this blurry cloud that creates a choral-like effect. And some of the melody lines I’m playing with that are actually in the upper registers of the 1 through 60 harmonic tuning, so they’re extremely microtonal. It creates these nice blurry regions.
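To make the “harmonics 12 through 24” idea concrete: mapping those thirteen harmonics onto one octave gives twelve unequal steps, with harmonic 24 landing exactly an octave above harmonic 12. The sketch below is my reconstruction of that general scheme, not Rich’s actual tuning table:

```python
import math

# Harmonics 13..24 measured in cents above harmonic 12:
# a 12-note-per-octave scale with progressively narrower steps.
scale = [1200 * math.log2(h / 12) for h in range(13, 25)]

for h, cents in zip(range(13, 25), scale):
    print(f"harmonic {h:2d}: {cents:7.2f} cents")
# The final entry (harmonic 24) is exactly 1200 cents: a pure octave.
```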

How many tracks did you typically end up with in Cubase, after you cut and pasted all the stuff you were going to use?

About 20 tracks. But a lot of that is used in overlapping sections, because the entire album is assembled as one long piece.

There were no breaks between the pieces. Were you able to do it all in one long Cubase song file?

Yeah, it’s a single 53-minute-long session.

Did you think of them as distinct pieces that overlap?

Not really. The piece boundaries are pretty arbitrary. There are definite movements. I really think of the whole album as one piece with many different movements. It’s quite arbitrary at points [where one movement ends and another begins], but other places there’s a definite rhythm that comes in, and it’s obvious when it happens.

I like a certain seamlessness in my music. I like to hide the corners a little bit. If I were the sculptor Rodin, I would be in the early phase rather than the later phase, preferring the skin texture to be more glossy and smooth, rather than the chisel point still visible. So there is a kind of continuum about everything, and I’m always trying to make it seem like everything is coming organically out of the previous thing that happened. That takes a long time; it’s just trial and error sometimes.

Sometimes sliding the new track forward or backward so that it seems to emerge at the right moment rhythmically?

Exactly. And a lot of times, when I have this library of sounds that I had made while experimenting with the modular, I would look for certain similarities in tone that could melt one into the other. There’s definitely a search for tone colors and patterns that would allow me to crossfade between sounds where you couldn’t hear the crossfade.

Did you do a lot of processing once you got the sounds into Cubase?

Yeah, and there’s a lot of non-modular stuff going on too. You can hear that there’s a fair number of organic sounds. There are a lot of mangled things that are done with DSP — time-based processing.

There’s a vocal in the last track that’s extremely slowed down.

Yes, that’s been time-stretched in SoundHack. And then some phrases flipped backwards, and then a lot of pitch quantizing through the Eventide.

Was that your own singing?

Yeah. [Laughs.] Things like that. I also made a lot of use of a shareware program called Argeiphontes Lyre, written by a fellow who goes by the nom de plume Akira Rabelais; I think he was at CalArts when he wrote it. I’ve met the guy; he’s an interesting fellow. And it has one of the most beautiful interfaces of any piece of software I’ve ever seen. Very cool. I made a lot of use of a shuffling granular synthesis program in there that goes alphabetically through every audio file inside a folder you point it to, takes a grain serially from each file, and loops around, so it just mangles things. I’ve made a lot of use of that in other pieces. There’s a spoken word piece that I did on a compilation called Winged Ants. That should be on my mp3 page, I think.
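[The shuffling process Rich describes, taking one grain from each file in turn and looping around, can be sketched as follows. This is our own reconstruction from his description, not Argeiphontes Lyre’s actual code; audio files are represented here as plain sample lists:

```python
def shuffle_grains(sources, grain_len):
    """Take one grain of grain_len samples from each source in turn
    (sources assumed sorted alphabetically, like files in a folder),
    looping around until every source is exhausted."""
    out = []
    pos = [0] * len(sources)
    while any(p < len(s) for p, s in zip(pos, sources)):
        for i, s in enumerate(sources):
            if pos[i] < len(s):
                out.extend(s[pos[i]:pos[i] + grain_len])
                pos[i] += grain_len
    return out

# Interleaving two "files" grain by grain:
# shuffle_grains([[1, 1, 1, 1], [2, 2]], 2) -> [1, 1, 2, 2, 1, 1]
```

With grains only a few milliseconds long and sources that differ in timbre, the result is the chattering, spliced-together texture he mentions. — Ed.]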

It makes wonderful chattering effects. For example, on the title track there are some very strange blurry, choppy noises that sound like they could have come from a Bebe Barron soundtrack from the 1950s. What that is is feedback on a piece of sheet metal. I did several different performances on this piece of sheet metal using a Sustainiac Model B, which is a guitar device, a mechanical feedback generator. Then I took these recordings of feedback on sheet metal, ran them through the mangling of Argeiphontes Lyre, and created that wonderful random choppy splattering sound. So there’s a lot of non-electronic, basically acoustic-source DSP going on.

What’s this Sustainiac Model B?

It’s the early version, or maybe it’s the later version; I’m not sure. It’s made by a guy in Indiana whose company is called Maniac Music. If you’re familiar with the Fernandes sustainer guitar, they licensed that design from him and then proceeded to steal it and not pay him. He was the first person to commercially design a kind of six-channel EBow built into a pickup. That’s the magnetic version, but I had been searching for years for what I had heard was a mechanical version. What it actually does is take the output from the guitar, put it into a 50-watt power amp that’s in a footpedal, and drive a magnetic coil with that, which you can then attach to a fixed magnet, a very strong magnet that you screw onto the surface of your guitar neck. You have to mangle your guitar. You have to cut into the headstock, and people don’t like that. But it doesn’t bother me, because I mostly use crummy old lap steel junker guitars. And it’s really interesting on a lap steel. It makes big, ringing feedback noises; it sounds like you’re standing in front of a giant Marshall stack.

But the cool thing about it is that it works on non-guitars as well. I have yet to mount it on my piano; I really don’t want to drill into that. I got this piece of very thin spring steel sheet metal, about four feet by two feet, and suspended it from a wire with padding on it so it didn’t vibrate. I mounted the magnet onto a peak instead of a node of the vibrational pattern of the steel, and then basically created a feedback loop with a piezo transducer: attached the piezo to the steel, ran it through the Sustainiac, and mounted the coil directly onto the steel. It becomes like a musical saw without the bow. You can bend it, and you get these weird “whoo-arr-hiy-awwugh” noises, but a little bit like a plate reverb at the same time. It’s kind of tunable, but not really.

What’s fascinating to me is that you’re using advanced technology and yet coming out with a very organic sound. Do you have more plans along these lines? Is this something you want to explore further?

Oh, yeah. I try not to repeat myself much, so I might go in a few other directions. But right now I’m starting work on a collaboration with a synth player in England named Ian Boddy. We’re experimenting a lot with new kinds of distortion. He’s been doing a lot of [U&I Software] MetaSynth work lately. And I’m playing around a bit with the Hellfire modulator, trying to see if I can come up with some interesting difference-tone kinds of distortion. Mostly, there’s a certain Holy Grail of a sound that I’m always seeking, which I’m not even sure how to describe, and I’m not sure if it can be done as a process applied to sounds; it might have to happen within a model of certain sounds. I want to call it “gruzz,” but it’s a modulation distortion rather than a clipping distortion. It’s a little bit hard to put into print, but imagine the sound of a blues singer when they let their throat go into modulation, that kind of “ghaaah” sound. Feedback on sheet metal, for example, can get into these interesting complex standing waves where it breaks into different levels of chaos, and the waveforms start to cross-modulate against each other as they propagate across the sheet. It creates all sorts of interesting modulation clouds.
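[The distinction Rich draws can be illustrated with two toy waveshapers. The first is ordinary clipping, a fixed reshaping of the waveform; the second is one possible guess at a “modulation distortion,” where the signal cross-modulates a time-varying carrier. The formula and constants are entirely our own assumptions, not Rich’s process:

```python
import math

def clip_distortion(x):
    """Clipping distortion: a static waveshaper that flattens peaks."""
    return math.tanh(3.0 * x)

def modulation_distortion(x, t, depth=0.5, mod_freq=37.0):
    """One guess at "gruzz" (an illustrative assumption, not Rich's
    actual process): the signal amplitude-modulates against a carrier
    whose phase the signal itself perturbs, so the grit changes over
    time instead of being a fixed reshaping of the waveform."""
    return x * (1.0 + depth * math.sin(2.0 * math.pi * mod_freq * t + 4.0 * x))
```

The key difference: clip_distortion(x) gives the same output for a given input forever, while modulation_distortion depends on time and on the signal’s own value, so identical input samples come out differently, the unstable, chaotic quality he’s describing. — Ed.]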

Other physical models are like that: for example, when a flute or a tube gets overblown. Or a saxophone player; the guy from the Art Ensemble of Chicago [Joseph Jarman] is brilliant at creating these multiphonics, that kind of sound in the saxophone where I guess you’re humming through it. Those types of sounds are sort of a Holy Grail for me. I want to find ways of applying that sort of distortion to a sample, to a found sound.

Is Release a new label that you’re working with for the first time?

No, actually they released my two Amoeba albums. Release is a subsidiary of Relapse, which is primarily a death metal grindcore label. I’m label-mates with Neurosis and Amorphis. It’s pretty funny.

I guess compared to them, you’re like new age.

Totally. Let’s not use that term. [Laughs.] I’ve always had this problem of falling between categories. Whatever the dominant paradigm is, somebody’s going to want to lump me into that if it seems to fit closely enough. I don’t know. I just do what I do.

Thanks to Keyboard for permission to repurpose this interview. For more on Robert Rich, please visit his web site.

Except where noted, all contents are © 2004 Jim Aikin.
All rights reserved, including reprint and electronic distribution rights.
