Wednesday, March 13, 2013

Noise and meaning

In my mind, sound mass music and noise music are similar. I'm not sure if that simply goes without saying or if scholars have already teased them apart in some way, but in both cases there is a kind of semiotic disruption; what we thought we knew about musical meaning (I mean "we" in the most general sense) is distorted to the point that it doesn't make sense as music. I'm not arguing that it isn't music, but Ligeti's Atmosphères doesn't make sense in the same way as a Schubert song. I'm also not arguing that music carries intrinsic meaning, and I accept that musical meaning is, or could be, the result of cultural conditioning. So maybe one day someone will hear the Ligeti the way we hear the Schubert. I think that will be the case, anyway.

I like music with some noise. I think I like it because the meaning is ambiguous; I have greater freedom to interpret it. I also like composing with noise, partly because it removes the burden of dealing with universally understood meaning. A film score composer, for example, must be able to convey fairly specific senses or moods with music. I don't have to worry about that in a more abstract setting (i.e. the concert hall) because the listener has more freedom to interpret. I don't think that means noise music, or noise-in-music, is a cop-out for the composer. Anyway, it makes sense for me given my formative musical experiences with rock music, with its distorted guitars, scream-singing, and drums. It's a legitimate musical impulse, and it can be treated with skill in the same way Schubert, for example, treated melodies and harmonies.

I tend to think of sound mass music in two big styles: Penderecki's static blocks of microtonal clusters and Ligeti's (and Xenakis's) hyper-active surface counterpoint. I recognize there are more than two ways to skin a triad, but when I compose I think in terms of these two polarities. It occurred to me today, however, that I am beginning to develop my own approach. Basically, I layer semiotic music so densely that it can't be heard as semiotic. When I say semiotic I mean music that conveys some universally understood characteristic. (Is that vague enough?) For example if a person hears "I Wonder as I Wander" he or she will have some perceptual response based on previous experiences. Even if the person doesn't know the song, or lives in a non-Western musical culture, it will at least make sense as a melody. If a person hears Schubert's Der Wanderer, there will be a similar response.

In my piece The Wanderer for wind ensemble, I used both of these melodies to create a sound mass near the end of the piece. I layered "I Wonder as I Wander" five or six times in close imitation and transpositions. On top of that (and a lot of other stuff) I added motives from the Schubert. The result was music so dense that it prevented the perception of melodic and harmonic sense. This may not be noise in the strictly acoustic domain, but it is very much noise in the semiotic domain. (For me, it's really a combination of the two.) I think it's important for me to embed more comprehensible music in my noise music (or sound masses), even if they won't be heard as such, because that's what the modern world seems like to me. The metaphorical noise that we deal with on a daily basis (i.e. stress or anxiety) is not abstract or meaningless. Every fragment, every insignificant component part of the stress of modernity is a perfectly comprehensible thing. It is the sheer density of these component parts that makes it incomprehensible.

This is the backdrop against which I need to think about this dissertation. No one will describe this piece with the word "clarity." I've really been struggling as I write the piece and it becomes more and more real because I've been grasping, unsuccessfully, at clarity in the traditional semiotic sense (i.e. "Will this be perceived/easily understood as music?"). However, I don't want to let that struggle for clarity undermine the noise element.

Sunday, March 3, 2013

Shepard tones

I've been struggling for a couple of weeks to decide how to end this piece. I finally took a "just compose" approach and started writing out notes. Soon I realized I was doing something like Shepard tones, so I thought, "Maybe I'll do this for a while and then end it." Then I went through everything I had written up to this point and timed it. The passage shown below corresponds approximately to the jagged, angular section in the sopranos, beginning around 13:30. It ends about 15:30, but is just beginning to pick up some momentum at that point (I haven't filled in everything yet in this picture). I began to realize that this could be the way I build to the end. It's a simple way to gradually layer repeated figures to thicken the texture. The timing is almost perfect. But the best thing about the Shepard tone* is its representation of circularity. Considering the selection of the text, with its circular form, and that it will become intelligible as language during this passage, I think it's a bit of serendipity to have happened on the Shepard tone.


*) Of course this isn't a real Shepard tone, but it is a stylized approximation similar to the one found in Ligeti's etude "The Devil's Staircase."
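Though my passage is only a stylized approximation, the acoustics of the real illusion are easy to sketch: octave-spaced components under a loudness envelope that fades the extremes of the register, so the ear never hears a top or bottom. Here's a minimal Python sketch (the function name `shepard_components` and the raised-cosine envelope are my own illustrative choices, not anything from the piece):

```python
import math

def shepard_components(pitch_class, n_octaves=7, base_midi=24):
    """One step of a Shepard tone: components at octave intervals under a
    raised-cosine loudness envelope, so the highest and lowest octaves are
    inaudible and the register has no perceivable top or bottom."""
    span = 12 * n_octaves
    components = []
    for octave in range(n_octaves):
        midi = base_midi + 12 * octave + pitch_class
        position = (midi - base_midi) / span              # 0..1 across the register
        amplitude = 0.5 * (1 - math.cos(2 * math.pi * position))
        components.append((midi, amplitude))
    return components

# Cycling the pitch class 0, 1, ..., 11, 0, ... yields endless apparent
# ascent (or descent, run backwards): the circularity the passage plays on.
for pc in (0, 1, 2):
    print(shepard_components(pc)[:3])
```

Because the loudest components sit mid-register while the extremes fade to silence, each chromatic step sounds like motion in one direction even though the whole pattern repeats every twelve steps.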

Wednesday, February 27, 2013

Counterpoint and The Void

"...[A] void exists between musical acoustics and music properly speaking, that it is necessary to fill this void with a science describing sounds, joined to an art of hearing them, and that this hybrid discipline clearly grounds our musical efforts." 
-Pierre Schaeffer

Phenomenology seeks to be this hybrid discipline, filling the void between objectivity (musical acoustics, for example) and subjectivity (music properly speaking). A good musical example of this notion is the difference between frequency and pitch. Frequency is the number of sound waves in a given time; pitch is the way we perceive that frequency. 440 hertz is a frequency; A is the pitch. Schaeffer's void is the almost ungraspable space between the objective known and the perceived.
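The frequency/pitch distinction can be made concrete in a few lines. This sketch (the function name `freq_to_pitch` is illustrative) maps an objective frequency to its nearest equal-tempered pitch name--the categorical label that perception supplies:

```python
import math

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz, a4=440.0):
    """Map a frequency (the objective measure) to the nearest
    equal-tempered pitch name (a stand-in for the perceived category)."""
    midi = round(69 + 12 * math.log2(freq_hz / a4))  # MIDI 69 = A4 = 440 Hz
    octave = midi // 12 - 1
    return f"{NAMES[midi % 12]}{octave}"

print(freq_to_pitch(440.0))   # the frequency 440 Hz...
print(freq_to_pitch(441.5))   # ...and a slightly sharp 441.5 Hz both land on A4
```

The many-to-one mapping is the point: infinitely many frequencies collapse onto one perceived pitch, and the space between the two descriptions is exactly Schaeffer's void.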

It's the space-between that reminds me of counterpoint. Maybe counterpoint is a reflection of the Void, or negative space more generally, and it's the ungraspability of counterpoint that makes it so interesting. Some would argue that it's completely graspable. Indeed, it can be described in great detail. Schenkerian graphs are one example of this description. Species counterpoint captures the essence of a style in order to teach students to emulate that style. But those examples are akin to the "musical acoustics" of Schaeffer's quotation above. What is less clear is the listener's perception of counterpoint. It's nebulous.




Monday, February 25, 2013

Dramatic Counterpoint, Part 1

"Dramatic Counterpoint" is a term used by Paul Lawley in discussing the texture of Beckett's Play. Play is a play with three characters delivering overlapping monologues on the same story. Not really overlapping--maybe interwoven monologues. Anyway, they don't converse with one another; they just take turns telling bits of their individual perspectives. If one were to parse out the three monologues, three cohesive narratives would emerge, but it's this counterpoint among the three that makes it interesting. You can't look at three sides of a statue at the same time, but with Play you get close to simultaneous differing viewpoints--a kind of theatrical cubism. (Another time I might be inclined to think more about the difference between simultaneity and juxtaposition--maybe they're analogous to harmony and melody. Counterpoint being the space between, of course.)

Anyway, here's a film version of Play from YouTube:






Wednesday, February 20, 2013

A bit of score


Some of the text is beginning to take shape here. If you read IPA you can see the words dog, come, then, the, tomb, and stole. It's still nonsensical--or rather, the sense is indeterminate.

Thursday, February 14, 2013

Introductions, part 2: Other Fibonacci applications and accent canons

In the previous post I mentioned that the form of the introduction was based on two Fibonacci series: one a series of thirteen gestures with durations of (in terms of sixteenth notes) 377, 233, 144, 89, ... 2, 1, and the other a series of twelve gestures with durations of 1, 1, 2, 3, ... 144. The first series ends with only one one-sixteenth-note gesture, and what would be the thirteenth gesture of the second series (233) gradually morphs into a new formal section.
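As a sketch of the arithmetic behind the two series (the function name `fibonacci` is just illustrative):

```python
def fibonacci(n):
    """First n terms of the series 1, 1, 2, 3, 5, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

# Thirteen gesture durations in sixteenth notes, 377 down to a single 1:
gestures = list(reversed(fibonacci(14)))[:13]   # [377, 233, 144, ..., 2, 1]
# Twelve rest durations growing the opposite way:
rests = fibonacci(12)                           # [1, 1, 2, 3, ..., 89, 144]

total = sum(gestures) + sum(rests)
print(gestures[0], gestures[-1], rests[-1], total)
```

The gestures sum to 985 sixteenths and the rests to 376, so the interleaved passage occupies 1,361 sixteenth notes in all.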

I used the Fibonacci series in a few other ways in the piano/percussion/electric guitar texture. As I described in the previous post, each of the thirteen gestures in this passage is a descent to the bottom of the keyboard. As the gestures get shorter, it seemed unreasonable to cover the same interval of 21 semitones from C3 to E-flat1 in each gesture, so I decided to begin each gesture gradually lower. Here's a table that shows the initial central pitch of each of the thirteen gestures.



Except for the first and last gestures, there is a change in transposition level every two gestures. These changes follow the Fibonacci series (technically, the negafibonacci series). I chose to use the Fibonacci series here because it gives a nice curve to the entire section that mirrors somewhat the descending curve found in each gesture. I did compose the piano's final A0 of this passage outside of the system detailed in the previous post in order to make sure the goal of the lowest possible note was achieved.
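For reference, the negafibonacci series extends the Fibonacci recurrence to negative indices via F(-k) = (-1)^(k+1) F(k), so the magnitudes are ordinary Fibonacci numbers with alternating signs. A quick sketch of the sequence itself (the function name is mine; the table of actual transposition levels is in the figure above):

```python
def negafibonacci(n):
    """F(-1) through F(-n), using F(-k) = (-1)**(k + 1) * F(k):
    1, -1, 2, -3, 5, -8, 13, -21, ..."""
    fib = [1, 1]
    while len(fib) < n:
        fib.append(fib[-1] + fib[-2])
    return [(-1) ** (k + 1) * fib[k - 1] for k in range(1, n + 1)]

print(negafibonacci(6))  # [1, -1, 2, -3, 5, -8]
```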

I honestly can't remember if this was intentional (it has been a few days), but I find it interesting that the total number of gestures in this passage, 13, is a Fibonacci number, as is the number of semitones between C3 and E-flat1, 21. Not sure if it means anything...

Finally, I used Fibonacci numbers in a way that undermines the recursive nature of the series. I created a row of 81 (unless I miscounted..) Fibonacci numbers based on various simple additive patterns. I used this row to determine where to place accents in the piano, percussion, and electric guitar parts. I broke up the series using additive patterns in order to ensure asymmetry.

The row:

13, 8, 5, 3, 2, 1, 1, 8, 5, 3, 2, 1, 1, 5, 3, 2, 1, 1, 1, 2, 1, 3, 2, 1, 5, 3, 2, 1, 8, 5, 3, 2, 1, 13, 8, 5, 3, 2, 1, 2, 3, 2, 5, 3, 2, 8, 5, 3, 2, 13, 8, 5, 3, 2, 3, 5, 3, 8, 5, 3, 13, 8, 5, 3, 5, 8, 5, 13, 8, 5, 8, 13, 8, 13

Here is the row broken up in sections to show the various patterns.


Each part follows this row, either forward or backward depending on the part, and when it gets to the end it changes direction and traverses it again in the opposite direction. The piano begins at the beginning, the marimba at the end (retrograde), the xylophone begins on the "2" (12th line, lone 2) and moves forward, and the guitar begins on the "1" just before (end of the 11th line) and moves in retrograde. These numbers are time points on a grid of sixteenth notes. Accents fall on the first note, then again after x sixteenth notes according to the row. In the end this is just a quick and effective way to ensure some variety and generate patterns in the monotonous sixteenth-note grid. I realize there is some goal-oriented motion embedded here, but this is a canon, of sorts, played by four different instruments (playing in unison or octaves, remember). The interplay among the accents on these instruments will create brief, passing motives, not the sense of direction toward a goal.
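The ping-pong traversal can be sketched like this. It's only a sketch: the function name `accent_times` is mine, the exact bounce behavior at the row's ends is an assumption, and for brevity the example uses just the opening of the full row.

```python
def accent_times(row, start, direction, n_accents):
    """Accumulate accent time points on a sixteenth-note grid by walking
    the interval row from `start`, reversing direction at either end.
    The first accent falls on time point 0."""
    times = [0]
    i, d = start, direction
    while len(times) < n_accents:
        times.append(times[-1] + row[i])
        i += d
        if i < 0 or i >= len(row):   # bounce back without repeating the end
            d = -d
            i += 2 * d
    return times

# The opening of the full row, for illustration:
row = [13, 8, 5, 3, 2, 1, 1, 8, 5, 3, 2, 1]
print(accent_times(row, start=0, direction=1, n_accents=6))              # forward
print(accent_times(row, start=len(row) - 1, direction=-1, n_accents=4))  # retrograde
```

Running one part forward and another backward from different starting indices, as described above, is what produces the shifting interplay of accents among the four instruments.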



Introductions, part 1

Well, there's just one introduction. It's the most unified section of the piece. It will actually have a standard score with a time signature, bar lines, and synchronized parts. I knew the kind of texture I wanted; it came out of an improvisation with Impulse back before the holidays. The primary gesture is based on the physicality of playing the piano in the low register: thumbs together, alternating hands playing "random" notes within a generally fixed range in a fast, regular pattern. The pitches aren't important except that they should not overly emphasize any particular pitch. Of course, as we know from the history of serial music, it takes some kind of non-intuitive system to make an even distribution of pitches sound just right. In my improvisation a couple of months ago I felt I was getting the right texture intuitively, but when it came to deciding which pitches to put down on paper, I felt I needed to go to the computer to generate a texture closer to my improvisation. Besides, while my improvisation felt right, without a recording I can't be objective enough about it (not to mention I can't transcribe what I played). Intuition is a dangerous place to spend too much time :)

I went to Max because it's very flexible. I built a very rudimentary patch that outputs MIDI information directly to Sibelius. Here's a picture:


The toggle in the upper left turns on the patch. The metro object bangs the toggle below resulting in alternating 0s and 1s. The 0s go to the "left hand" side of the patch, which generates the left hand notes, and the 1s go to the "right hand" side. Both sides are essentially parallel, generating numbers between 48 (the MIDI number for C3) and 48+6 (or 54, F-sharp3). The left hand side is then lowered 7 semitones, producing the range of F2 to B2. The result is alternating left hand and right hand notes, each hand covering the range of a tritone, which fits very comfortably under the hand.
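In Python terms, the patch's note logic amounts to something like this (a sketch, not the patch itself; the function name is mine, and which hand takes the even-numbered notes is arbitrary here):

```python
import random

def alternating_hands(n_notes, center=48, span=6, lh_offset=-7):
    """Alternate hands, each picking a 'random' pitch within a tritone.
    The right hand covers C3-F#3 (MIDI 48-54); the left hand uses the
    same span lowered 7 semitones, F2-B2 (MIDI 41-47)."""
    notes = []
    for i in range(n_notes):
        pitch = center + random.randint(0, span)  # randint is inclusive: 48..54
        if i % 2 == 0:                            # even steps: left hand
            pitch += lh_offset                    # lowered to 41..47
        notes.append(pitch)
    return notes

print(alternating_hands(8))
```

Keeping each hand inside a tritone is what makes the figure physically comfortable: neither hand ever has to move, only the fingers.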

Once I decided the length of the gestures, I added the objects on the right side of the patch to add a curve to the gesture (more on the length of the gestures below). The center pitch above is MIDI note 48 (C3), or 21 semitones above E-flat1, a tritone above the lowest note on the keyboard (i.e. as low as possible without the left hand running off the keyboard). During my original improvisation I moved gradually to the bottom of the keyboard, and I wanted to recreate that gesture here. I tried to descend by semitone every measure for 21 measures, but found the descent was too regular for my liking. By connecting the itable object to the transposition factor (see figure above), I could control the rate of descent. I simply drew the curve that I wanted with my mouse (of course, I had to set the parameter of the itable first--in the example above I knew I needed 377 notes, so the x-axis was set to 377). The transposition factor adjusts the center pitch, which is 48 by default, thereby lowering all the pitches proportionally. When the curve reaches the bottom of the itable, the transposition is 21 semitones down, for the bottom of the keyboard.
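The itable's role can be sketched as a transposition curve sampled once per note: 0 semitones of transposition at the start, 21 semitones down by the final note. Since the original curve was drawn freehand with the mouse, a smooth cosine ease stands in for its shape here (an assumption, as is the function name):

```python
import math

def transposition_curve(n_notes=377, max_drop=21):
    """Semitones to lower the center pitch for each of the n_notes events:
    0 at the start, max_drop (C3 down to E-flat1) at the end. A cosine
    ease stands in for the hand-drawn itable shape."""
    return [max_drop * 0.5 * (1 - math.cos(math.pi * i / (n_notes - 1)))
            for i in range(n_notes)]

curve = transposition_curve()
print(round(curve[0]), round(curve[188], 2), round(curve[-1]))
```

Subtracting each value from the center pitch before generating the hand's random note reproduces the gradual, uneven slide to the bottom of the keyboard.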

The form of the introduction

The introduction is around 2:20 in length, but it gradually dissipates into the main body of the piece making the ending of this section ambiguous. It represents no more than 10% of the entire work, and probably a little less. I first thought of it as a stand-alone, unrelated section, but now I think of it as crucial to the development of the three component pieces: In the beginning the three are integrated into one gesture, but during the course of this introduction, they begin to foreshadow their distinctive behaviors and come apart from one another. If the idea for the entire work is three separate pieces, the introduction tells the story of how they became separate.

The first 30 seconds or so are an extended reproduction of the improvisatory piano gesture I described above. Percussion and the electric guitar join in unison or octaves, dynamically coloring the piano's timbre. This is notated as 377 sixteenth notes. After one sixteenth rest, the same gesture is played again, but shorter this time--233 sixteenth notes. Then another sixteenth rest precedes a third gesture taking 144 sixteenth notes. There are thirteen gestures like this in all, each shorter than the last according to the Fibonacci series, down to a final one-sixteenth-note gesture. The rests between the gestures (#thevoid) get progressively longer according to the same series.

These rests between each piano/percussion/guitar gesture are filled in by harmonic series chords in the winds, strings, and sopranos. Conceptually, I just wanted static surface texture to contrast with the active sixteenth-note surface. However, as the piano's active texture is colored by the percussion and electric guitar, the static-texture interruption is also elaborated somewhat. The primary static material is found initially in the bassoon and clarinet (though these may change later in the introduction--it's not finished yet). The first static gesture is only one sixteenth note, so in order to avoid it blending too much into the piano/percussion/guitar texture, I orchestrated the event with some higher-frequency resonance. This resonance is found in the flute and string harmonics, and it is sustained somewhat longer than the single sixteenth note played by the bassoon and clarinet. The singers, too, project this idea of resonance with even longer (approximately two measures) passages of unisons and close-voiced harmonies that slowly change.

As the static gestures get longer they come to dominate the surface of the music. From a position of practicality the resonances must either get shorter (because the time between gestures is getting shorter) or begin to wash over the beginning of the next gesture. I will play with this, probably alternating between abrupt changes with no resonance and resonances that become asynchronous with the static event rhythm (think of waves crashing irregularly on a beach). The nature of the soprano parts as harmonically dynamic resonances will begin to change to more static material that will eventually lose prominence to the strings and winds, which will gradually become more active. The sopranos' movement toward stasis will foreshadow the beginning of sopranos' large-scale gesture, which begins quite statically. The winds/strings' growing prominence will signal, by the end of the introduction, the beginning of isorhythmic texture that will dominate those instruments' large-scale gesture. The piano/percussion/guitar part, with its curves in pitch space, foreshadows the tempo curves that will dominate the behavior of those instruments later.