The following is a brief interview conducted by the publicity department at Hearts of Space to help describe the process of creating Seven Veils.
– What are your musical influences for Seven Veils?
Well, usually my work is an amalgam of so many digested influences that I can’t even name them anymore, but on Seven Veils, a few did come to the top: Egyptian oud music, Gnawa ritual trance music (and other Moroccan styles), North Indian classical (especially Hariprasad Chaurasia), dub (especially Adrian Sherwood), Arab and Persian vocal music, and lots more.
– Why did you choose a Middle Eastern musical theme?
Because I listen so much to music from that part of the world, it just gets in my blood. It’s not always conscious. Many of the underlying conceptual themes on the album relate distantly to Alchemy, especially its roots in medieval mystical Islam. As I focused in on some of these themes, the Arabic/North African feel began to emerge naturally within the music.
– How does “just intonation” play into the overall concept of Seven Veils?
Only insofar as my compositional interests involve interrelationships between multiple levels in the music. I use tuning systems based on the harmonic overtone series because I like the sound. The process of using these tunings leads me to think in terms of ratios, and those ratios enter into the rhythmic and harmonic/melodic structures as well as the tuning. When I set out to make an album that is dominated by rhythm, as is Seven Veils, these relationships become more concrete and overt.
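To make the idea concrete, here is a small illustrative sketch (not drawn from the album’s actual material): the same whole-number ratio can shape both pitch and rhythm. A 3:2 ratio gives a perfect fifth in the tuning domain and a three-against-two cross-rhythm in the time domain. The 220 Hz reference pitch is an arbitrary assumption for the example.

```python
# Illustrative sketch: one ratio, two musical domains.
from fractions import Fraction

ratio = Fraction(3, 2)

# Pitch: a just perfect fifth above an assumed 220 Hz fundamental.
fundamental = 220.0
fifth = fundamental * float(ratio)

# Rhythm: onset times for 3 beats against 2 over one cycle.
cycle = 1.0  # one bar, in arbitrary time units
onsets_upper = [i * cycle / ratio.numerator for i in range(ratio.numerator)]
onsets_lower = [i * cycle / ratio.denominator for i in range(ratio.denominator)]

print("fifth:", fifth, "Hz")
print("3-beat onsets:", onsets_upper)
print("2-beat onsets:", onsets_lower)
```

The point of the sketch is only that a single ratio like 3:2 can act as a unifying structural element across tuning and rhythm, which is the kind of interrelationship described above.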
– Could you give a concise definition and explanation of “just intonation”? (as brief as possible)
Just Intonation is a system of tuning that uses whole-number ratios between the frequencies in a scale, creating tunings that align with pure harmonic relationships. This system fits the way we naturally hear harmony. Normal Western tuning (equal temperament) only approximates these natural harmonies, for the sake of convenience. I use just intonation because it gives me a much deeper and more expressive tonal vocabulary, even though it’s harder to work with.
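As a numerical illustration (the ratios here are a textbook just-intonation major scale, not the specific tunings used on the album), the following sketch computes just-intonation frequencies from whole-number ratios and compares them with the nearest equal-tempered pitches, measuring the difference in cents (1200 cents = one octave). The 220 Hz base pitch is an assumption for the example.

```python
# Sketch: just-intonation ratios vs. 12-tone equal temperament.
import math

BASE = 220.0  # assumed reference pitch in Hz

# Whole-number ratios for a common just-intonation major scale.
just_ratios = [(1, 1), (9, 8), (5, 4), (4, 3), (3, 2), (5, 3), (15, 8), (2, 1)]

def cents(ratio: float) -> float:
    """Interval size in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

for num, den in just_ratios:
    ratio = num / den
    just_hz = BASE * ratio
    # Nearest equal-tempered pitch: round the interval to a whole semitone.
    semitones = round(cents(ratio) / 100)
    et_hz = BASE * 2 ** (semitones / 12)
    deviation = cents(ratio) - 100 * semitones
    print(f"{num}/{den}: just {just_hz:7.2f} Hz, "
          f"12-TET {et_hz:7.2f} Hz, deviation {deviation:+7.2f} cents")
```

The deviations show how equal temperament only approximates the pure ratios: the fifth (3/2) is off by about 2 cents, while the major third (5/4) is off by nearly 14 cents, which is well within audible range.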
– Could you discuss any unique recording processes you went through during the making of this record?
One of the challenges in this project was to incorporate the contributions of the other musicians in a way that allowed them to stay spontaneous, but fit into the unusual musical context and tuning systems. David Torn’s contributions involved the added difficulty of distance (he’s in New York, I’m in California). Luckily I was able to make a stop in New York while I was on tour in 1996. Unfortunately, I had only just begun working on the album before I had to start rehearsing for the tour. I wasn’t even sure what songs I wanted David to play on. I carried the rhythm submixes for two songs with me on the tour on a DAT tape, and I had a notion of the tuning and modes that I wanted for each. On the first day with David we transferred these rhythms to his DA88, and I showed him the tuning systems that I wanted to use. David is one of the few guitarists who could handle a microtonal part on a standard-fretted guitar, primarily because he plays so much with pitch bends and the whammy bar, and his intonation is amazingly good. He improvised his way through two takes of each song, and burned a CD-R for me with ProTools dubs of the four takes. We did all of this in free-sync, without timecode, relying on the fact that I would edit his parts later. Six months passed before I was ready to use these parts, and after about a week of editing I built several “virtual guitar solos” from his raw improvisations, fitting his gestures into a totally rewritten framework. David had also recorded several “Torn Loops” for me, but some of them didn’t quite fit the timing of the music, so I layered some of the unused sections of his solos using the JamMan, creating two new loops to augment the ones that he had created in New York. I hope he wasn’t too upset with me for completely rearranging his contributions. This type of non-linear editing is a fairly typical process for me, and I’m usually even more ruthless when hacking up my own playing.
Another kind of hacking involves the modification of sounds themselves. I strive for a seamless merging of the electronics with acoustic instruments, and often the two sources cross into each other’s territory. Some of the most ‘electronic’ sounding noises might be acoustic in origin. (In any case, the differences between hard-disk based recording and sampling have vanished.) The first song, “Coils,” provides several examples of non-synthetic sound design. During the opening of the song, a long evolving shimmery noise builds up to a crescendo before the faster rhythm. This resulted from processing a binaural recording of shell chimes with an open-loop feedback network created between several effects units. The feedback loop hangs at the edge of chaos, building up layers of spatially synthesized washes of noise. Changing the effects parameters changes the direction of motion in the sound. Another example of acoustic/electronic hybrid sound occurs at the point in the song where the gliss-guitar comes in (a sound like a human voice, created by bowing the guitar with a piece of metal). A one-note pulsating bassline enters at that point, sounding like an analog synthesizer. This is actually distorted electric guitar, gated and triggered by the MIDI clock, then processed through a filter in the Eventide H3000. Why not just use an analog synth? I think the processed guitar has more complexity and life to it.