work with mark: old system designs 2

another thing mark d'inverno and i did was to try to list all the things our musical method agents could possibly do. this was of course an impossible task, but it still gave us an overview and was a pretty fun and crazy project. after each list below i've added a small supercollider sketch of a few of the items.

version 040511 /fredrik

CHORD:
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
* transpose a note tone up/down
* transpose some notes tone up/down
* transpose all notes tone up/down
* transpose a note octave up/down
* transpose some notes octave up/down
* transpose all notes octave up/down
* making more/less dissonant by transposing notes up/down
* shift by inverting
* inverting parts of the chord
* removing highest/lowest note
* removing middle note
* removing every other note
* removing dissonances
* adding the whole chord octave up/down
* adding higher/lower note
* adding note in the middle
* adding notes in between
* adding dissonant notes
* detune a note up/down from current tuning (eg +10 cents)
* detune some notes up/down from current tuning (eg +10 cents)
* detune all notes up/down from current tuning (eg +10 cents) (ie pitchbend whole chord)

* transpose a note tone up/down in current modus
* transpose some notes tone up/down in current modus
* transpose all notes tone up/down in current modus
* making more/less dissonant by transposing notes up/down in current modus
* shift by inverting in current modus
* inverting parts of the chord in current modus
* removing root of current modus
* removing middle notes in current modus (eg 3rd, 5th)
* removing extension notes in current modus (ie E13#11 -> E9#11 -> E9 -> E7 -> E)
* adding higher/lower note in current modus
* adding note in the middle in current modus
* adding notes in between in current modus
* adding extension notes in current modus (ie E -> E7 -> E9 -> E9#11 -> E13#11)
* adding root from another modus (eg E/A)
* adding extension chord from another modus (eg F#/E7)
* replace with parallel chord (eg C -> Am)
* detune a note up/down from current tuning to another tuning (eg from just to 14 tone equal tuning)
* detune some notes up/down from current tuning to another tuning (eg from just to 14 tone equal tuning)
* detune all notes up/down from current tuning to another tuning (eg from just to 14 tone equal tuning)

* replace with chord sequence in current modus (eg II-V7-I)
* replace with chord sequence from another modus
* arpeggiate up/down
* rhythmisize some notes in sequence
* rhythmisize all notes in sequence
* rhythmisize some notes in parallel
* rhythmisize all notes in parallel
* change duration of a note
* change duration of some notes
* change duration of all notes

* replace with chord sequence in current modus (eg II-V7-I) in current time
* replace with chord sequence from another modus in current time
* arpeggiate up/down in current time
* rhythmisize some notes in sequence in current time
* rhythmisize some notes in parallel in current time
* rhythmisize all notes in sequence in current time
* rhythmisize all notes in parallel in current time
* change duration of a note in current time
* change duration of some notes in current time
* change duration of all notes in current time

* change volume/attack/decay/sustain/release of a note
* change volume/attack/decay/sustain/release of some notes
* change volume/attack/decay/sustain/release of all notes

* change timbre/instrumentation of a note
* change timbre/instrumentation of some notes
* change timbre/instrumentation of all notes

* change position in space for a note
* change position in space for some notes
* change position in space for all notes
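
a quick supercollider sketch of a few of the chord operations above. here a chord is just an array of midi note numbers - an assumption for the example only, not how the system would store chords.

(
var chord= [60, 64, 67];                        //c major triad
var octaveUp= chord+12;                         //transpose all notes octave up
var addLower= [chord[0]-12]++chord;             //adding lower note
var dropHighest= chord.drop(-1);                //removing highest note
var inverted= chord.drop(1)++[chord[0]+12];     //shift by inverting (root moved an octave up)
[octaveUp, addLower, dropHighest, inverted].do{|x| x.postln};
)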

MELODY:
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
* transpose a note tone up/down in or outside current modus
* transpose some notes tone up/down in or outside current modus
* transpose all notes tone up/down in or outside current modus
* transpose a note octave up/down
* transpose some notes octave up/down
* transpose all notes octave up/down
* invert melody in or outside current modus
* scale interval range in or outside current modus (ie shrink or expand)
* transpose a note to match another modus
* transpose some notes to match another modus
* transpose all notes to match another modus
* replace a note with a few others in or outside current modus
* replace some notes with a few others in or outside current modus
* detune a note up/down from current tuning (eg +10 cents)
* detune some notes up/down from current tuning (eg +10 cents)
* detune all notes up/down from current tuning (eg +10 cents) (ie pitchbend whole melody)

* remove a note (ie pause)
* remove some notes (ie pause)
* remove notes with duration < x
* remove notes with duration > y
* remove notes with duration < x and > y
* change duration of a note in or outside time
* change duration of some notes in or outside time
* change duration of all notes in or outside time
* change duration and onset of a note in or outside time (timescale)
* change duration and onset of some notes in or outside time (timescale)
* change duration and onset of all notes in or outside time (timescale whole melody)
* make duration and onset of all notes shorter and repeat (eg divide time by 2 and play twice)
* make duration and onset of all notes shorter and play a variation instead of repeating

* play melody in retrograde
* play notes in retrograde but keep rhythm/duration
* play rhythm/duration in retrograde but keep notes
* play and repeat only sections of the melody
* shift notes some steps left/right but keep rhythm/duration
* shift rhythm/duration some steps left/right but keep notes
* randomize notes but keep rhythm/duration
* randomize rhythm/duration but keep notes
* replace a note but keep rhythm/duration
* replace some notes but keep rhythm/duration
* replace all notes but keep rhythm/duration
* replace a rhythm/duration but keep notes
* replace some rhythm/duration but keep notes
* replace all rhythm/duration but keep notes

* decrease or increase the number of notes in the current scale (quantify notes ie minimal effect in istreet)
* decrease or increase the number of possible rhythms (quantify rhythms)

* change rhythm/duration continuously (eg ritardando)
* change rhythm/duration discrete (eg ritardando in time)

* repeat a note and rhythm/duration x times in or outside time (ie delay effect)
* repeat some notes and rhythm/duration x times in or outside time (ie delay effect)
* repeat all notes and rhythm/duration x times in or outside time (ie delay effect)

* rearrange notes in inc/dec order but keep rhythm/duration
* rearrange rhythm/duration in inc/dec order but keep notes

* add another voice in parallel to the melody
* add many other voices in parallel to the melody
* add another voice mirroring the melody
* add many other voices mirroring the melody in different ways
* add another standalone voice to the melody
* add many other standalone voices to the melody
* add another standalone voice contrasting the melody
* add many other standalone voices contrasting the melody

* change volume/attack/decay/sustain/release of a note
* change volume/attack/decay/sustain/release of some notes
* change volume/attack/decay/sustain/release of all notes

* change timbre/instrumentation of a note
* change timbre/instrumentation of some notes
* change timbre/instrumentation of all notes

* change position in space for a note
* change position in space for some notes
* change position in space for all notes

* reharmonize melody with 'good sounding' chords
* reharmonize melody with weird chords
* play melody in a different context
* play melody in another mood (eg sad, energetic or irritated)
* incorporate elements from other melodies
* blend two or more melodies (eg average note for note or play sections of each one)
* improvise freely over the melody
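
and a similar sketch for some of the melody operations, assuming a melody is kept as two parallel arrays - midi notes and durations in beats.

(
var notes= [60, 62, 64, 65, 67];
var durs= [0.5, 0.25, 0.25, 0.5, 1];
notes.reverse.postln;                   //play notes in retrograde but keep rhythm/duration
notes.scramble.postln;                  //randomize notes but keep rhythm/duration
notes.rotate(1).postln;                 //shift notes some steps but keep rhythm/duration
(durs*0.5).postln;                      //make duration and onset of all notes shorter (divide time by 2)
)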

RHYTHM PATTERN:
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
* scale pattern with a constant factor in or outside time
* scale pattern with a changing factor in or outside time (eg random walk lfo for fluctuation)
* make duration and onsets in pattern shorter and repeat (eg divide time by 2 and play twice)
* make duration and onsets in pattern shorter and play a variation instead of repeating

* remove an element (ie pause)
* remove some elements (ie pause)
* remove elements with duration < x
* remove elements with duration > y
* remove elements with duration < x and > y
* change duration of an element in or outside time
* change duration of some elements in or outside time
* change duration of all elements in or outside time
* replace an element with a few others in or outside time
* replace some elements with a few others in or outside time
* replace all elements with a few others in or outside time
* add an element at random position in or outside time
* add some elements at random position in or outside time
* add an element in the middle in or outside time

* change position of an element to random in or outside time
* change position of some elements to random in or outside time
* change position of all elements to random in or outside time (scramble pattern)

* repeat an element x times in or outside time (ie delay effect)
* repeat some elements x times in or outside time (ie delay effect)
* repeat all elements x times in or outside time (ie delay effect)

* play pattern backwards
* play and repeat only sections of the pattern
* rearrange elements in inc/dec duration order

* quantise an element to current time
* quantise some elements to current time
* quantise all elements to current time

* add another voice with different timbre/instrumentation in parallel to the pattern
* add many other voices with different timbre/instrumentation in parallel to the pattern
* add another voice mirroring the pattern rhythmically
* add many other voices mirroring the pattern in different ways
* add another standalone voice to the pattern
* add many other standalone voices to the pattern
* add another standalone voice contrasting the pattern
* add many other standalone voices contrasting the pattern

* change volume/attack/decay/sustain/release of an element
* change volume/attack/decay/sustain/release of some elements
* change volume/attack/decay/sustain/release of all elements

* change timbre/instrumentation of an element
* change timbre/instrumentation of some elements
* change timbre/instrumentation of all elements

* change position in space for an element
* change position in space for some elements
* change position in space for all elements

* vary the pattern based on some scheme (eg nick's bbcut)
* play pattern in another mood (eg sad, energetic or irritated)
* incorporate elements from other patterns
* blend two or more patterns (eg average elements and threshold or play sections of each one)
* improvise freely over the pattern
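
a corresponding sketch for rhythm patterns, here assumed to be plain arrays of durations in beats.

(
var pattern= [0.5, 0.5, 0.25, 0.25, 1, 0.5];
(pattern*1.5).postln;                   //scale pattern with a constant factor
pattern.scramble.postln;                //change position of all elements to random (scramble pattern)
pattern.round(0.25).postln;             //quantise all elements to a time grid
pattern.select{|dur| dur>=0.5}.postln;  //remove elements with duration < 0.5
)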

EFFECTS: (very much in progress)
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
* delay: dubdelay, looping with infinite delay
* filter: high/band/low pass, ringing filters
* panning: surround
* timestretch
* pitchshift
* segmenting/cutting: warp, scratch
* phase modulation
* amplitude modulation: tremolo, lfo clipping/gate, ringmodulation
* mixing with another sound
* frequency modulation: vibrato
* distortion: overdrive, bitcrunch
* fft manipulations: convolution, vocoder
* limiter, expander, compressor, gate
* feedback: modulate local amp, phase, freq etc.
* grain: segment with different envelopes, panning, amplitude, frequency etc.
* amplitude follower and map to another sound
* pitch tracker and map to another sound
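
as a sounding example of one of the effects above, here is a minimal dub-style delay in supercollider. the test sound, delay time and feedback amount are all just assumptions for the sketch.

s.boot; //start the supercollider sound server
(
{
        var dry= Decay.ar(Impulse.ar(0.5), 0.2)*SinOsc.ar(440, 0, 0.2); //a short test blip
        var fb= LocalIn.ar(1);                                          //read back the feedback signal
        var delayed= DelayC.ar(dry+fb, 1, 0.375);                       //delay by 375 ms
        LocalOut.ar(LPF.ar(delayed, 2000)*0.6);                         //darker and quieter repeats
        (dry+delayed)!2;
}.play(s);
)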

work with mark: old system designs

so mark d'inverno and i worked on quite a few different systems. they all differed but the main goal remained fixed: we wanted a responsive music system built from a multi-agent system approach.

the first ideas circled around a multi-agent band.

We originally considered the idea of a multi-agent band, but this was soon dismissed because of the complexity involved with feedback. How does one agent perceive the output of another agent; in effect, how do we give that agent ears? The only possibility is to allow one agent to look at some portion of the code that is being generated, but it is not clear how you could do this with any semblance of replicating a real-life group of improvising musicians.

Some questions arose in considering how to build such a band. How aware should each agent be of its fellow musicians? What interplay do we allow between musicians and how do we facilitate this? Should there be one agent per style/piece/genre, or is the agent an all-round virtuoso that can play in different styles/genres? Does an agent know what notes to play by itself, or is it handed a score and told what and when to play? Is the agent itself responsible for different manipulations and effects to the sound it generates, or are there other agents deciding this? Perhaps there is a project for someone else out there?

we abandoned that and tried to simplify a bit. the design we came up with next is sketched out here...

and our ideas around its design were as follows...

A basic design

In order to address some of the issues and motivations outlined in this document we propose a multi-agent architecture with the following design elements.

1. It would be responsive to the external environment: time, date, day, season, temperature, humidity, sunlight, individual and collective human actions, rainfall, wind, ambient sound level of what is happening (should be able to record this sound and play it back).

2. It would not be beat-based per se, but there might be beat elements. Rhythm more seen as an important parameter along with others - not being the predominant one.

3. We are interested in exploring the notion of harmony and melody and how this relates to the emotional state of a user. Naturally, we also want to build something aesthetically pleasant.

4. We will employ a multi-agent system architecture to manage the different elements that can take place in a music system. Agents will negotiate and compromise for control over modifying parameters and using various hard-coded methods, and hence over the system's overall output.

Interface agents will monitor some aspects of the environment and may communicate with each other. We will build one agent for each different environmental parameter we wish to measure. It may be that these agents are requested to look for certain things in the environment. For example the human activity agent might be asked to look out for a certain pattern of behaviour or to notify when the number of humans goes above a certain threshold. In other words these agents are more than sensors and can change what they are trying to perceive after receiving information from other agents.

Abstract task agents will be able to collect information from some or all of the interface agents. But they will have different features and goals and will therefore need to negotiate to resolve conflicts. For example, they might agree to take it in turns to choose when there is a conflict over what should happen to the output next.

We have identified several possible abstract task agents:

1. A responsive agent that wishes to respond to external stimulus

2. A control agent who wants to try and provide a user with a desired or intended response

3. A generative agent who will try to negate and provide counter-intuitive or meaningless responses, and also try to kick-start or trigger events even when there is no external stimulus currently taking place.

4. A melody agent who tries to create an aesthetic piece given its understanding of traditional harmony and melody. It may work with the generative agent in many cases, asking for input and denying or accepting the ideas based on its own rules about what is appropriate.

5. We could also have a harmonising agent that attempts to provide a harmonisation of a particular piece.

6. A mood agent who wants to resonate with the mood of the environment - both 'get' and 'set' it.

7. A historical agent who wants to repeat things that happened before, and maybe start recording sounds when there are drastic changes in the interface agents, and so on.

These agents all have a notion of the current state. They negotiate to call the various methods at their disposal. The method agents have very specific abilities, such as applying a low pass filter, changing harmonic density, playing sound samples of a specific category or adding overtones. As these methods have an effect on the suitability of each other they should negotiate first. These agents do not have a notion of the current state, only some idea of their possible effects on each other. We believe there is a relationship between mood and method and we will try to harness this to build a sound device which has the basic attributes we described at the beginning of this document.

Towards a Prototype System

The Interface Agents
One restriction to put upon our system is to quantise time and let the interface agents work at three distinct time scales: short, medium and long. This restriction would be general for all agents interfacing the external environment.

For example, the agent aware of light could keep abrupt changes in its short-term memory (e.g. someone flicking the light switch or passing by a window), store the amount of daylight in its medium-term memory, and lastly use its long-term memory to keep track of seasonal changes, full moon etc.

The agent responsible for monitoring human activity should work in a similar way. Short term would here be gestures, medium term the amount of people and their activity in the environment, and long term the general use and popularity of the system, people's habits, changes of office hours etc.

The temporal agent might in its slot for short-term memory keep hours, minutes and seconds. Medium-scale memory could contain time of day and weekday (sunday-morning/monday-lunch/friday-night), and long term the time of year (winter/late-spring/early-autumn...). Wind, heat, humidity and the rest of the interface agents would all work in a similar way.

Internally these different scales would poll each other for information, averaging and calculating their respective short/medium/long summaries. For medium- and long-term memory, logfiles will be written to disk for backup. Depending on which mappings we want to do and what results we want from the interface part of the system, the agents here need to communicate with each other in different ways. E.g. if we need some musical parameter to be changed (or action to be taken) when the room is crowded and hot, we could implement that in the human interface agent. It would utilise the motion tracking sensor to see if there are - presently - many people about, look in its medium-term memory to figure out if there's a tendency/crowd gathering, and also communicate with the heat agent to see if it has similar input.
There can also be a direct one-to-one mapping between the agents' discrete points in time and some musical parameters. How many and which parameters we choose to control here will decide how directly responsive the system output will be. Possibly the degree of direct mapping (mostly concerning short time) can be varied over time. For a first-time user it might be a good thing if the direct feedback is slightly exaggerated. He/she would like to get instant gratification to become comfortable that the system really is working, alive and reacting. But after a while - to keep interest up - other things could become more important, like combinations of sensors or a more 'musical' progression of the music. These direct mappings could also be present all the time but scaled up/down and mixed with other controllers.
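
As a rough illustration of such a direct mapping (the synth, parameter names and ranges below are assumptions for the example only, and the sound server is assumed to be running), a normalised sensor value could be mapped to a synth parameter and mixed with a default value like this:

~synth= {arg cutoff= 700; LPF.ar(Saw.ar(55, 0.2), cutoff)!2}.play;
~mapSensor= {arg value, amount= 1;                              //amount scales how direct the response is
        var cutoff= value.linexp(0, 1, 200, 8000);              //map 0.0-1.0 exponentially to 200-8000 Hz
        ~synth.set(\cutoff, (cutoff*amount)+(700*(1-amount)));  //mix with the default cutoff
};
~mapSensor.value(0.8);                                          //pretend a sensor just read 0.8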

[...]

The actual data from the sensors can be handled in two different ways, and hopefully we can try out a combination of both. The first way would be to automatically normalise incoming values to a range of, say, 0.0-1.0. So if the program detects a peak greater than the previous maximum peak, it will replace the old with the new value and start using that as the scaling factor. This will make the sensors adapt to any environment and its extreme conditions.
The other way of handling the input would be to assume a probable range for each sensor and just clip extreme values. The advantage here is that the system won't become less responsive over time (e.g. some vandal screams into the microphone and sets the peak level to an unreasonable value, making the microphone insensitive to subtle background noise amplitude later on). The drawback is that the system needs to be tuned for each new location, and over a longer period of time. The ideal is a combination of the two: one that does adapt directly but also falls back to some more reasonable default or average if someone messes with it or something breaks (i.e. disregards totally unlikely extreme peaks).
After normalisation we will scale the inputs from the different sensors. This will allow us to tune our system and change the weight of importance for each interface agent. But to begin with we'll just assume equal power for all sensors.
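
A small sketch of this combined approach (the ranges and limits below are only assumed values): adapt to new maximum peaks, but clip away totally unlikely readings before scaling to 0.0-1.0.

(
~maxPeak= 0.1;                                          //adaptive maximum, starts low
~probableLimit= 5.0;                                    //assumed upper bound for sane sensor readings
~normalise= {arg raw;
        var clipped= raw.clip(0, ~probableLimit);       //disregard totally unlikely extreme peaks
        ~maxPeak= max(~maxPeak, clipped);               //adapt to any new maximum peak
        clipped/~maxPeak;                               //scale to the range 0.0-1.0
};
[0.05, 0.2, 3.0, 99, 0.2].collect{|x| ~normalise.value(x)}.postln;
)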

It is our aim to build a flexible and modular system that can be installed in many different environments/locations and with changing budgets. So if a sensor isn't present, breaks or has to be exchanged for some other type, we only instantiate a new interface agent reading from that particular sensor or device. The system should run with any number of interface agents and any number of sensors.

We also see the possibility of adding, to the list of interface agents, a few 'proxy' interface agents. These would work for any device or stream of data and would look for abrupt changes, tendencies and overall activity (at the three discrete time scales). The users would decide what to read from. Examples of input for these proxies could be home-built sensors that the users bring with them and plug into a slot, some device or installation already present near the location where our system is set up, or maybe stock market data downloaded from the net. Having these proxies would make each installation unique and site-specific also on the hardware input side.

Implementation of the interface agents will be done by having an abstract superclass (pseudo code below):

InterfaceAgent {        //abstract class
        short {
                //method looking for quick changes like gestures and transients
        }
        medium {
                //method using this.short to calculate tendencies
        }
        long {
                //method using this.medium to find out about long term use and overall activity
        }
}

Then the actual interface agent classes will all inherit behaviour from this superclass.

Wind : InterfaceAgent
Light : InterfaceAgent
Humans: InterfaceAgent
Proxy : InterfaceAgent

etc.
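
As a rough sketch (the buffer of readings and the contents of the methods are assumptions, not the actual implementation), one such concrete subclass could look something like this:

Light : InterfaceAgent {                //concrete interface agent for a light sensor
        var readings;                   //assumed buffer of recent normalised sensor values
        short {
                ^readings.differentiate.last.abs;       //abrupt changes, eg a light switch being flicked
        }
        medium {
                ^readings.keep(-100).mean;              //tendency: average amount of daylight lately
        }
        long {
                //seasonal changes etc would be averaged from logfiles written to disk
        }
}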

If time permits we'd also like to try to implement an agent that listens to the output of the system itself in an aesthetic way. It should evaluate the resulting music as groovy, soft, good, bad or less interesting. Machine listening is of course a huge project in itself, but some rough presumptions could be made with the help of the other interface agents. A change in the music that, for instance, instantly empties the room of people should be considered inappropriate. So there are already the microphone(s) listening to the sound output in different ways (amplitude, pitched sounds), but a more intelligent analysis of the resulting music would be a good thing that would boost the complexity of the whole system by introducing yet more feedback.

The Abstract Task Agents and Method Agents
How do we implement a task agent like mood? Where will the rules defining an emotion like happiness reside - in each of the method agents or within the task agent itself - or both? Below are two suggested implementations with corresponding bare-bones sounding examples.

1. Method agents are responsible for how to best be happy/sad.

In this example the method agents themselves know how to best reflect an emotion. E.g. the lowpass agent understands the message 'happy' (coming from the task agent) and reacts to that by increasing its cutoff frequency by 300 Hz. Likewise a 'sad' message would decrease the cutoff frequency by 300 Hz. Another rule could be a melody agent that, when receiving a 'happy' message, changes its currently playing melody to a major key and raises its tempo a little.

Simplest possible sounding example written in SuperCollider:

Starting out with three class definitions:

MethodAgent {                                           //abstract superclass
        var >synth;
        *new { arg synth;
                ^super.new.synth_(synth);
        }
        update { arg param, value;
                synth.set(param, value);                //send parameters to the synth
        }
}
Mlowpass : MethodAgent {                        //lowpass agent subclassing MethodAgent
        var freq= 700;
        happy {
                freq= (freq+300).clip(100, 9000);       //rule 1
                this.update(\freq, freq);
        }
        sad {
                freq= (freq-300).clip(100, 9000);       //rule 2
                this.update(\freq, freq);
        }
}
Mmelody : MethodAgent {                         //melody agent subclassing MethodAgent
        var third= 64, rate= 2;
        happy {
                third= 64;                                              //rule 3
                rate= (rate/0.9).clip(0.1, 10);         //rule 4
                this.update(\third, third.midicps);
                this.update(\rate, rate);
        }
        sad {
                third= 63;                                              //rule 5
                rate= (rate*0.9).clip(0.1, 10);         //rule 6
                this.update(\third, third.midicps);
                this.update(\rate, rate);
        }
}

In the above code rule 1 says: when happy - increase lowpass cutoff frequency by 300 but restrain values to between 100 and 9000. Rule 3 would be: when happy - set the third scale position of the melody to be a major third. Rule 6: when sad - decrease melody tempo by 10% but restrain values to between 0.1 and 10 (beats-per-second).

To try it out we first need to define two synths - one playing a lowpass filter and another one playing a simple melody.

s.boot; //start the supercollider sound server
a= SynthDef(\lowpass, {arg freq= 700; ReplaceOut.ar(0, LPF.ar(In.ar(0), freq))}).play(s);
b= SynthDef(\melody, {arg third= 329.63, rate= 2; Out.ar(0, Saw.ar(Select.kr(LFNoise0.kr(rate, 1.5, 1.5), [60.midicps, third, 67.midicps]), 0.1))}).play(s);

Then we create our two method agents.

x= Mlowpass(a);         //create a method agent and pass in the synth that plays a lowpass filter
y= Mmelody(b);          //create a method agent and pass in the synth that plays a simple melody

The actual mood messages are then sent in the following manner (imagine this done from the mood agent):

x.happy; y.happy;               //send message 'happy' to both method agents
x.sad; y.sad;                   //send message 'sad' to both method agents

This design will make the task agents less bloated and it will be easy to change, add or remove method agents. The mood agent will just tell all available method agents to become happy and it can then focus on negotiation with other task agents.

2. Task agent is responsible for how to best be happy/sad.

This is exactly the same code example as above but rewritten to gather all our rules defining happy and sad inside the mood agent. First, four new class definitions:

AbstractTaskAgent {
        var >lpass, >melody;
        *new { arg lpass, melody;
                ^super.new.lpass_(lpass).melody_(melody);
        }
}
Mood : AbstractTaskAgent {                      //mood agent subclassing AbstractTaskAgent
        happy {
                lpass.freq= (lpass.freq+300).clip(100, 9000);   //rule 1
                melody.third= 64;                                               //rule 3
                melody.rate= (melody.rate/0.9).clip(0.1, 10);   //rule 4
        }
        sad {
                lpass.freq= (lpass.freq-300).clip(100, 9000);   //rule 2
                melody.third= 63;                                               //rule 5
                melody.rate= (melody.rate*0.9).clip(0.1, 10);   //rule 6
        }
}
Mlowpass2 : MethodAgent {               //different lowpass agent subclassing MethodAgent
        var <freq= 700;
        freq_ { arg val;
                freq= val;                              //store the value so the getter stays in sync
                this.update(\freq, val);
        }
}
Mmelody2 : MethodAgent {                //different melody agent subclassing MethodAgent
        var <third= 64, <rate= 2;
        third_ { arg val;
                third= val;                             //store the value so the getter stays in sync
                this.update(\third, val.midicps);
        }
        rate_ { arg val;
                rate= val;                              //store the value so the getter stays in sync
                this.update(\rate, val);
        }
}

Here is the same code as above for defining two synths.

s.boot; //start the supercollider sound server
a= SynthDef(\lowpass, {arg freq= 700; ReplaceOut.ar(0, LPF.ar(In.ar(0), freq))}).play(s);
b= SynthDef(\melody, {arg third= 329.63, rate= 2; Out.ar(0, Saw.ar(Select.kr(LFNoise0.kr(rate, 1.5, 1.5), [60.midicps, third, 67.midicps]), 0.1))}).play(s);

And this is how the mood agent is set up.

z= Mood(Mlowpass2(a), Mmelody2(b));

Last we send the messages to the mood agent like this:

z.happy;
z.sad;

Here the method agents are very simple and only do what they are told. They can return their state and change the sound they are in control of, and that is it. It is the mood agent that knows what happiness means and it will direct the method agents to do certain things.
A good thing with this design is that the rules defining happiness are all in one place, and it would be possible to write different versions of the mood agent that implement happiness in different ways. One drawback would be that for any change to the method agents, we would have to update and rewrite parts of the mood class.

Presently it seems like suggestion number two would be the design most suitable for our needs. This would mean that crosstalk between method agents isn't needed anymore as suggested in version one (see sketch#1). Conflicting or overlapping methods are rather dealt with by the task agents, as they know which actions to take to come up with the desired result. On the other hand the method agents need to be able to report their state back to the task agents and also be intelligent enough to take their own actions, e.g. telling when certain tasks are finished.

august practice sessions

nick and i made a pact to do 1 hour of live coding practice for each day of august. julian joined in too on a couple of days.

here are rendered mp3s of my practising sessions. music quality varies...

they're all coded from scratch (synthesis, patterns etc) in ~1 hour using plain jitlib. the sourcecode is available here... code
(you might need to copy and paste or select all to see it as i accidentally colorised all the documents)
so rather than listening to the static mp3s above, i'd recommend downloading supercollider and playing [with] the code.

i think i've identified two problems. first the getting-started and then the getting-on-when-you-finally-got-started.
in the beginning there's always the empty document... uhu. so setting up a little task to solve or having an idea of some soundscape i'd like to create seemed to help. it was usually ideas like use fm-synthesis today or make soft drone sounds tonight. still, many times it started out sounding just awful and i got more and more stressed trying to fix it. any livecoders recognise this feeling? ;-) but pulling some tricks (usually distortion or delay effects :-) and focusing in on a few details - muting other sounds - i think i managed to shape all the code into something somewhat okay sounding in the end. i only reset the clock and started all over from scratch twice.
and then when i got nice processes going, it was hard to let go of them and continue with something different. so the resulting music above is most often just a single 'theme' or process. i feel i'd have to rehearse a lot more to be able to do abrupt form changes or to have multiple elements to build up bigger structures over time with. i sort of got stuck in the a of the aba form. it would've been nice to go abac etc. perhaps the 1h time limit made me reluctant to start something new 45 minutes in, as that'd probably have been left hanging.

work with mark: job as a research fellow

between march 2004 and march 2006 i had the fantastic opportunity to work as a research fellow for prof. mark d'inverno (http://www2.wmin.ac.uk/~dinverm/). it was a part-time engagement at the math department at University of Westminster: cavendish school of computer science in london.
mark is a math professor specialising in intelligent agents and also a great pianist. our goal was to combine his (and my) two big interests by creating musical agents that could improvise, jam and play together.
i learned a lot in the process and had to read up on math, specification languages, agent theories, genetic algorithms etc. i found so much interesting stuff out there to get totally absorbed in. this blog is partly started to document our research.
