Performing an electronic composition

Bendik Hovik Kjeldsberg is studying composition at the same academy where I am doing my master's degree.

He asked me to perform one of his compositions with him.

Since the composition is very electronic in nature, it presented some challenges when we were to perform it live.

Listen to the composition here. Keep in mind that this is intended to be a score, and that the actual composition is what we perform based on it:

After some attempts to solve this with Ableton Live, I decided the best approach would be to just use Max/MSP for this project. I'll elaborate below.

The first "blip" sounds that appear I wanted to play by triggering them one at a time in a sequence. In Ableton I could slice the track to MIDI on its transients, extract each section, put the sections one after another in an audio track as clips, then map the MIDI signal generated from the amplitude of the input signal to "next scene" and set global quantization to "none". But it should be obvious that this workflow is not particularly favorable.

As an alternative I tried to figure out how to do transient detection on an audio file in Max and then play the transients one after another, but I ended up manually entering all the transient locations, in milliseconds, into a [coll] object, then triggering them one at a time through a [play~] object.
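The [coll]-based approach can be sketched in ordinary code. This is a hypothetical Python illustration, not the actual patch: transient start times (in milliseconds) are stored by index, like entries in a [coll], and each trigger plays the next slice, where a slice runs from one transient to the next. The class name and timing values are made up for illustration.

```python
class TransientSequencer:
    def __init__(self, transient_times_ms, total_length_ms):
        # Transient positions within the audio file, in milliseconds,
        # entered manually -- like filling the [coll] object by hand.
        self.times = sorted(transient_times_ms)
        self.total = total_length_ms
        self.index = 0

    def trigger(self):
        """Return the (start, end) window for the next slice,
        analogous to sending start/end times to [play~]."""
        start = self.times[self.index]
        # A slice ends where the next transient begins,
        # or at the end of the file for the last slice.
        if self.index + 1 < len(self.times):
            end = self.times[self.index + 1]
        else:
            end = self.total
        self.index = (self.index + 1) % len(self.times)  # wrap around
        return start, end

seq = TransientSequencer([0, 480, 960, 1500], total_length_ms=2000)
print(seq.trigger())  # (0, 480)
print(seq.trigger())  # (480, 960)
```

Each incoming trigger (for example from a drum pad) would call `trigger()` and play the returned window of the audio file.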

I’ve been looking into the [slice~] object, but I haven’t quite figured it out yet.

I'm very open to suggestions on this subject.

Our main intention with the performance of the piece was that the sounds Kjeldsberg had created in advance should be our instrument, and that what we performed should be based only on the composition mentioned above, without any backing tracks. We achieved this using the following methods:

– All the sounds Kjeldsberg wanted to use were broken into pieces as small as we found convenient.

– All drum sounds that I could manage to play at one time were removed from Kjeldsberg's setup.

– Sounds that were layered with these drum sounds were triggered by the corresponding drum, using this Max patch I made.

– Since the layered sounds mentioned above changed during the performance, I added the ability to change, in real time, which sound would be triggered.
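The last two points can be sketched as follows. This is an illustrative Python sketch of the patch's logic, not the Max patch itself: an incoming drum hit fires whichever layered sample is currently assigned, and that assignment can be swapped mid-performance. The sample names are invented for the example.

```python
class LayerTrigger:
    def __init__(self, sample_name):
        self.sample = sample_name  # sample currently layered with the drum
        self.log = []              # record of triggered samples, for illustration

    def set_sample(self, sample_name):
        # Swap the layered sound in real time during the performance.
        self.sample = sample_name

    def on_drum_hit(self):
        # A detected drum hit (e.g. from an amplitude follower on the
        # drum's microphone) triggers the currently assigned sample.
        self.log.append(self.sample)
        return self.sample

t = LayerTrigger("pad_chord_A")
t.on_drum_hit()              # plays "pad_chord_A"
t.set_sample("noise_sweep")  # the layered sound changes mid-piece
t.on_drum_hit()              # plays "noise_sweep"
```

The point of the design is that the drummer only ever plays the drum; the patch decides which electronic layer rides along with each hit.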

Kjeldsberg wanted to play both a melody line and a bass line at the same time with one hand while triggering samples with the other.

Since the melody and bass line were not unison, that presented a challenge.

Bendik wanted to play the top melody line shown in the picture below while triggering the corresponding bass notes. We could of course have triggered a clip containing the bass notes at a certain tempo he would have to follow, but then we would lose his unique phrasing.

So I made a Max for Live patch that triggers each note's corresponding bass notes (some of the top notes are harmonized with different bass notes) in the right sequence. The only problem is that the performer has to press a "reset sequence" button each time he or she plays the line. I could have had the sequence restart when the last note is played, but we decided this was the easiest and most convenient method in this context.
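The idea behind the patch can be sketched like this. This is a hedged Python illustration, not the Max for Live device itself: each played melody note steps through a fixed sequence of bass notes (some steps hold more than one note for the re-harmonized spots), and a reset call corresponds to the "reset sequence" button. The MIDI note numbers are made up for the example.

```python
class BassFollower:
    def __init__(self, bass_sequence):
        # Each step is a tuple of MIDI note numbers to sound
        # together with the corresponding melody note.
        self.sequence = bass_sequence
        self.step = 0

    def reset(self):
        # The "reset sequence" button the performer presses
        # before each run through the line.
        self.step = 0

    def on_melody_note(self):
        # Called whenever a melody note is played; returns the
        # bass note(s) to trigger alongside it. The performer keeps
        # full control of the timing, so the phrasing is preserved.
        notes = self.sequence[self.step]
        self.step = min(self.step + 1, len(self.sequence) - 1)
        return notes

follower = BassFollower([(36,), (38,), (36, 43), (41,)])
follower.on_melody_note()  # (36,)
follower.on_melody_note()  # (38,)
follower.reset()
follower.on_melody_note()  # (36,) again
```

Because only the order of the bass notes is fixed, not their timing, the bass line automatically follows whatever rubato the performer plays the melody with.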

I enjoy this way of interpreting a composition, since it is a lot like how I have worked earlier when extracting sounds from pieces I like. But then again, this IS the actual composition as the composer intended it; in earlier examples I only extracted sounds to make them part of a new, improvised piece.

In a way this is like going back to the original way of composing, where the composer didn't create the final auditory experience but rather a score for the performers or conductor to interpret. And since there are not yet instruments capable of making most of these sounds, we have to modify our own instruments to realize the composer's intentions. Again, this is a fairly strange thought, since it is somehow comparable to someone writing a piano piece before the piano was even invented. It also incorporates the role of the instrument maker into what's expected from the performer and/or the composer.

Here is a video from a concert where we perform the piece (the picture gets more descriptive at 2:00):
