I gave a short demo/poster presentation at the LAM conference on 19 December 2006 in London. See livealgorithms.org (archive.org).
Below is the handout describing the toolkit.
This toolkit is now distributed via SuperCollider's package system, Quarks. All open source.
How to install:
Quarks.install("redUniverse");
RedUniverse - a simple toolkit
Mark d'Inverno & Fredrik Olofsson
This is basically a set of tools for sonification and visualisation of dynamic systems. It lets us build and experiment with systems as they are running. With the help of these tools, we can quickly try out ideas around simple audiovisual mappings, as well as code very complex agents with strange behaviours.
The toolkit consists of three basic things: Objects, Worlds and a Universe. Supporting these are additional classes for things like particle systems, genetic algorithms, plotting, audio analysis etc., though as a user you will preferably want to code many of these functions yourself.
We have chosen to work in the programming language SuperCollider (www.audiosynth.com) as it provides tight integration between real-time sound synthesis and graphics. It also allows for minimal classes that are easy to customise and extend. SuperCollider is also open for communication with other programs and it runs cross-platform.
So to take full advantage of our toolkit, good knowledge of this programming language is required. We do provide help files and examples as templates for exploration, but the more interesting features, like the ability to live-code agents, are hard to fully utilise without knowing this language.
Detailed overview
In SuperCollider we have the three base classes: RedObject, RedWorld and RedUniverse.
RedObject - things like particles, boids, agents, rocks, food etc.
RedWorld - provides an environment for objects.
RedUniverse - a global collection of all available worlds.
Objects all live in a world of some sort. There they obey a simplified set of physical laws. They have a location, velocity, acceleration, size and mass. They know a little about forces and can collide nicely with other objects.
Pendulums are objects that oscillate. They have an internal oscillation or resonance of some sort.
Particles are objects that age with time. They keep track of how long they have existed.
Boids are slightly more advanced particles. They have a desire and they can wander around independently seeking it.
Agents are boids that can sense and act. They also carry a state 'dictionary' where basically anything can be stored (sensory data, urges, genome, phenome, likes, dislikes, etc). Both the sense and act functions, as well as the state dictionary, can be manipulated on the fly - either by the system itself or by the user at runtime.
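To give a feel for this, here is a generic sketch of the idea - an agent as an event with a state dictionary and replaceable sense/act functions. Note that this is not the actual RedAgent interface (see its help file); it only shows the kind of live manipulation meant above.

(
~world = (food: [0.2, 0.8]);
~agent = (
	state: (energy: 1.0, urge: \seekFood),
	sense: {|self, world| self[\state][\lastSense] = world[\food]},
	act: {|self| ("acting on" + self[\state][\urge]).postln}
);
~agent.sense(~world);  //self is passed implicitly
~agent.act;
//live-code a new act function while the system keeps running
~agent[\act] = {|self| ("fleeing - energy:" + self[\state][\energy]).postln};
~agent.act;
)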
Worlds provide an environment for the objects. They have properties like size, dimensions, gravity etc and they also keep a list of all objects currently in that world.
For now, there are three world classes:
RedWorld - endless in the sense that objects wrap around its borders.
RedWorld2 - a world with soft walls. Objects can go through but at a cost. How soft the walls are and how great the cost is depend on gravity and world damping.
RedWorld3 - a world with hard walls. Objects bounce off the borders - how hard depends on gravity and world damping.
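In one dimension, the difference between wrapping and hard walls boils down to something like the following rough sketch (not the actual RedWorld code; RedWorld2's soft walls would instead apply a damped counterforce near the border).

(
~wrap = {|pos, size| pos % size};  //RedWorld: endless, positions wrap around
~bounce = {|pos, vel, size, damp = 0.75|  //RedWorld3: hard walls
	if((pos < 0) or: {pos > size}) {vel = vel.neg * damp};  //reverse and dampen velocity
	[pos.clip(0, size), vel]
};
~wrap.value(10.5, 10).postln;  //-> 0.5
~bounce.value(10.5, 1, 10).postln;  //-> [10, -0.75]
)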
The Universe is there to keep track of worlds. It can interpolate between different worlds. It can sequence worlds, swap and replace, and also migrate objects between worlds. All this while the system is running.
The RedUniverse class can also store and recall the complete system - all objects and worlds - to and from disk.
So the above are the basic tools. They should be flexible enough to work with - for example, objects can live in worlds of any number of dimensions. And as noted, one can easily extend the functionality of these classes by subclassing.
Conclusion
How the objects and worlds behave, sound and look is open for experimentation. That is, it is left for the user to code. So while there is great potential for customisation, it also requires more work from the user.
The RedUniverse as a whole tries not to enforce a particular type of system. For example, one can use it without any visual output at all, or vice versa.
We see it both as a playground for agent experiments and as a serious tool for music composition and performance. We hope it is simple and straightforward, and while there is nothing particularly novel about it, we have certainly had fun with it so far. Foremost, it makes it easy to come up with interesting mappings between sound and graphics. In a way, we just joyride these simple dynamic systems to create interesting sounds.
The software and examples will be available online on the LAM site. Of course as open source.
I recently implemented something Nick Collins and I discussed a long time ago (SC2 era - custom event class). It is a 'hack' of the default synth in SuperCollider - the one that many of the help and example files use. So when you install my class, the default file will be overwritten and all the slightly daft pattern examples will from then on spring into new life.
Install it and then run some examples. Most of the ones in Streams-Patterns-Events5 and Streams-Patterns-Events6 work very well. See the RedDefault help file for more info.
(And yes, it is easy to uninstall and get back to the boring default synth)
Just to compare - here's first an example taken from a help file playing on the default synth...
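For instance, a plain pattern on the default instrument, something along these lines:

(
Pbind(
	\instrument, \default,
	\degree, Pseq([0, 2, 4, 7, 4, 2], inf),
	\dur, Pseq([0.25, 0.25, 0.5], inf),
	\amp, 0.2
).play;
)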
Not only does it create a new synthesiser, it also changes duration, attack/release times, amplitude etc. The pitches are mapped to a diminished chord in a somewhat strange way: the slower the duration, the greater the leap between the notes to quantise to. For example, with half or whole notes, only octaves will be heard.
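The duration-to-leap mapping is roughly in this spirit (a sketch of the idea only, not the actual RedDefault code):

(
~leapForDur = {|dur|
	//short notes quantise to minor thirds, medium to tritones, half/whole notes to octaves
	if(dur >= 2) {12} {if(dur >= 0.5) {6} {3}}
};
~quantise = {|midinote, dur| midinote.round(~leapForDur.value(dur))};
~quantise.value(64, 0.25).postln;  //-> 63
~quantise.value(64, 2).postln;  //-> 60 - only octaves for long notes
)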
Updates:
111116: redDefault is no longer a quark. It's available on the page: /code/sc/#classes
I recently made my first short video for Skare. We like to make things a little bit complicated for ourselves, and we also have a thing for ice, snow and all other variations on cold water.
First - to get some cheap audiovisual correlation - I put an old CD in the freezer for two weeks. Then one night I took it out and placed it over the bass element of a speaker. As the piece of plastic slowly adapted to room temperature, I let it vibrate to the deep fat bass found in the track 'To the Other Shore' (released on Glacial Movements). This was all filmed twice, close up and in night shot mode.
I then wrote a little Max/MSP/Jitter patch that mixed the two takes, matched them with the audio file and saved the whole thing to disk. The resulting video is on http://www.inhospitable.se/skare/.
Today I wrote a help file for an old SuperCollider class I had lying around. It simulates old telephone DTMF signals. Pretty silly, I must say, and I have forgotten why I created it in the first place. But I bet someone can find some strange use for it. Jim?
RedDTMF is part of the redSys quark. Install it by typing...
Quarks.install("redSys");
//and then recompile
s.boot;
{RedDTMF.dial("12AA34", 4)}.play;
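For reference, DTMF encodes each key as the sum of one low 'row' tone and one high 'column' tone. Here is a minimal sketch of a single key without the quark (RedDTMF handles the timing and sequencing of whole dial strings for you):

(
var rows = [697, 770, 852, 941], cols = [1209, 1336, 1477, 1633];  //Hz
var keys = ["123A", "456B", "789C", "*0#D"];
var freqsFor = {|char|
	var pair;
	keys.do{|row, i| var j = row.indexOf(char); if(j.notNil) {pair = [rows[i], cols[j]]}};
	pair
};
{SinOsc.ar(freqsFor.value($5), 0, 0.2).sum.dup * EnvGen.kr(Env.perc(0.005, 0.3), doneAction: 2)}.play;
)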
I also spent time at UoW learning about genetic algorithms and genetic programming. Mainly from John H Holland's books and Karl Sims' papers. I found it all very interesting and inspiring and again I got great help and input from Rob Saunders.
One of our ideas was to construct synthesis networks from parts of our agents' genomes i.e. to have the phenomes be actual synths that would synthesise sound in realtime. The first problem to tackle was a really hard one. How to translate the genome - in the form of an array of floats - into a valid SuperCollider synth definition?
Of course, there are millions of ways to do this translation. I came up with the RedGAPhenome class which works with only binary operators, control and audio unit generators. Unfortunately, there can be no effects or modifier units. On the other hand, the class is fairly flexible and it can deal with genomes of any length (>=4). One can customise which operators and generators to use and specify ranges for their arguments. One can also choose the topology of the synthesis network (more nested or more flat).
There is no randomness involved in the translation, so each genome should produce the exact same SynthDef. Of course, generators involving noise, chaos and such might make the output sound slightly different each time, but the synthesis network should be the same.
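To illustrate the principle, here is a toy sketch of such a deterministic float-array-to-SynthDef translation. It is not the actual RedGAPhenome algorithm - the generator palette, gene layout and operators are all made up for the example.

(
~genomeToSynthDef = {|genome, name = \toy|
	SynthDef(name, {
		var gens = [SinOsc, Saw, Pulse, LFTri];  //assumed generator palette
		var sig;
		genome.clump(3).do{|gene|  //gene: [generator select, freq, operator select]
			var osc = gens.wrapAt((gene[0] * gens.size).floor.asInteger);
			var snd = osc.ar(gene[1].linexp(0, 1, 50, 5000), mul: 0.2);
			sig = if(sig.isNil) {snd} {
				if(gene[2] < 0.5) {sig + snd} {sig * snd}  //binary operator: '+' or '*'
			};
		};
		Out.ar(0, Pan2.ar(sig.tanh * 0.2));
	})
};
//the same genome always gives the same synth (boot the server first)
x = ~genomeToSynthDef.value([0.1, 0.5, 0.2, 0.9, 0.3, 0.8], \toy1).play;
//x.free;
)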
This class produces a fantastic range of weird synths with odd synthesis techniques, and it is useful as a synth creation machine on its own. Here are some generated synths... n_noises, n_fmsynths, and corresponding 5-second audio excerpts are attached below.
Then, after the struggle with the phenome translation, the code for the actual genetic algorithms was easy to write. The genome and its fitness are kept in instances of a class called RedGAGenome, and the cross-breeding and mutation are performed by the class RedGA. There are a couple of different breeding methods, but I found the multi-point crossover one to give generally the best results. All the above classes and their respective help files and examples are available on the page: /code/sc/#classes. And there are many more automatically generated synths in the attached krazysynths+gui.scd example below.
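The crossover idea itself is simple. Here is a generic sketch of multi-point crossover and mutation on float-array genomes - not the actual RedGA interface, just the technique:

(
~crossover = {|a, b, points = 3|
	var cuts = ({a.size.rand}!points).sort;  //crossover points
	var flip = false;
	a.collect{|gene, i|
		if(cuts.includes(i)) {flip = flip.not};  //switch parent at each cut
		if(flip) {b[i]} {gene}
	}
};
~mutate = {|genome, prob = 0.05|
	genome.collect{|gene| if(prob.coin) {1.0.rand} {gene}}
};
~parentA = {1.0.rand}!8;
~parentB = {1.0.rand}!8;
~child = ~mutate.value(~crossover.value(~parentA, ~parentB));
~child.postln;
)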
I also made a couple of fun example applications stemming from this. One is a six-voice sequencer where you can breed synths, patterns and envelopes. It is attached as 'growing soundsBreedPatternEnv.scd' below. (Note that the timing is a bit shaky. I really should rewrite it to run on the TempoClock instead of the AppClock.)
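AppClock is the low-priority clock meant for deferred tasks like GUI updates, so the fix would simply be to play the sequencing routine on a TempoClock, something like this:

(
~seq = Routine{
	loop{"tick".postln; 0.25.wait}
}.play(TempoClock.default);  //rather than AppClock
//~seq.stop;
)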
Taking the Intelligent Street project further, Mark d'Inverno wanted me to try to get it online. The idea was to let people surf to a webpage, send commands to a running iStreet system and hopefully collaborate with other online users to compose music. Just like in the original SMS-version of the piece, everybody could make changes to the same music and the result would be streamed back to all the online users.
The first prototype was easy to get up and running. We decided to use Processing and write a Java applet for the web user interface and then stream the audio back with Shoutcast. The users would listen to the music through iTunes, Winamp or some similar program. This, of course, introduced quite a delay before they could actually hear their changes to the music. But that was not too bad, as we had designed the original iStreet in a way that latency was part of the user experience :-)
(The commands sent there via SMS took approx 30 seconds to reach us from the Vodafone server and that could not be sped up. The users were given no instant control over the music - rather they could nudge it in some direction with their commands.)
So our internet radio/Shoutcast solution worked just fine and we had it up and running for a short while from Mark's house in London.
That was a total homebrew solution of course. We wanted it to handle a lot of visitors and hopefully high traffic, be more stable and permanent, and not run on an ADSL connection.
So at UoW we got access to an OSX server cluster and I started to plan how to install SuperCollider, a webserver and iStreet on that. Little did I know about servers, networks and security, and I had to learn SSH and Emacs to get somewhere. Rob Saunders helped me a lot here.
Then there were some major obstacles. First of all, the cluster didn't have any window manager installed - not even X11. I spent many days getting SuperCollider and Stefan Kersten's Emacs interface for sclang, scel, to compile.
We also had some minor issues with starting the webserver and punching a hole in the university firewall etc., but the major problem turned out to be getting the audio streaming going. I didn't have root access and wasn't allowed to install Jack on the cluster. To stream, I needed a Shoutcast client and some way to get audio to it from SuperCollider. I did find OSX programs that could have worked, but none would run windowless on the console. So I was stuck.
The only solution was to write my own streaming mechanism. The resulting SuperCollider class for segmenting audio into MP3s is available here: /f0blog/work-with-mark-istreet-recording-mp3s-for-streaming/. A Java gateway handled the communication between SuperCollider and the Java applet that would stitch these files back together. (The Java gateway program also distributed all the other network data like the chat, checking online users, pending/playing commands etc. It used NetUtil by Sciss).
Unfortunately, I never got the streaming thing to run smoothly. Nasty hiccups in the sound made it impossible to listen to. The hiccups were probably partly due to my crappy coding, but I think the main error was in the ESS library for Processing. Either ESS (releases 1 and 2) can't do asynchronous loading or Java is just too slow to load MP3s without dropping audio playback. Very annoying.
After that defeat, I also spent time with Flash and made a little player that could load and play back MP3s smoothly. With help from my Flash expert friend Abe, we could also talk to the Flash player from my Java applet via JavaScript. But time ran out, and this would have been too complicated a system anyway.
So the iStreet never made it online. But again, I learned a lot about networks, Unix and Java, and some tools were developed in the process: RedGUI, a set of user-interface classes for Processing; ISRecord and ISGateway for SuperCollider; and ISgateway.java.
Screenshot of the Java applet running iStreet online...
Mark d'Inverno wanted to see the Intelligent Street installation gain new life online. So for streaming the sound from iStreet over the internet, I wrote a class for SuperCollider called ISRecord. It basically records sound to disk in small MP3 segments. So any sound SuperCollider produces will be spliced into many short MP3 files that can later be sent as small packages over the internet.
The technique is to continuously save the sound into one of two buffers. When one buffer is filled, the recording swaps and continues in the other. The buffer that just got filled is written to disk and conversion to MP3 is started. This swap-and-write-to-disk cycle should have no problem keeping up with realtime recording. But as the MP3 conversion takes a little bit of extra time - depending on the quality, segmentation size etc - there is a callback message from the MP3 converter that evaluates a user-defined segAction function when the conversion is finished. Thereby one can notify other programs when the MP3 file is ready to be used.
There is also a cycle parameter that controls how many MP3 segments to save before starting to overwrite earlier ones. This is needed to avoid totally flooding the hard drive with MP3s.
The actual MP3 conversion is done with LAME, and CNMAT's sendOSC is also needed for LAME-to-SuperCollider communication.
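A minimal sketch of the double-buffer idea - not the ISRecord class itself, and glossing over the exact swap timing and the LAME callback mechanism:

(
s.waitForBoot{
	var dur = 4, count = 0, current = 0;
	var bufs = {Buffer.alloc(s, (s.sampleRate * dur).asInteger, 2)}!2;
	SynthDef(\segRec, {|buf|
		RecordBuf.ar(In.ar(0, 2), buf, loop: 0, doneAction: 2);  //frees itself when the buffer is full
	}).add;
	Routine{
		s.sync;
		loop{
			var path = "~/seg%.aiff".format(count).standardizePath;
			Synth.tail(s, \segRec, [\buf, bufs[current]]);  //record after the sound-producing synths
			dur.wait;  //this buffer is now (roughly) full
			bufs[current].write(path);  //write the filled buffer to disk
			//next the file would be converted with LAME ("lame infile outfile.mp3".unixCmd)
			//and the converter's reply would trigger a user-defined segAction
			current = 1 - current;  //swap and record into the other buffer next
			count = count + 1;
		}
	}.play;
};
)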