
A Tiny Little White One

2006-10-13 15:19 supercollider

This chunk of SuperCollider code will create a tiny but not-so-well-behaved audiovisual creature.

(I must admit I stole the title from a.berthling's album on mitek)

a tiny little white one screenshot

/*a tiny little white one  /redFrik 061009*/
 
/*
GUI.cocoa;
GUI.swing;
*/
 
(
s.waitForBoot{
    n= 25;  /*number of arms*/
    b= {Buffer.alloc(s, 32, 1)}.dup(n);  /*length must be power of 2*/
    SynthDef(\wormsnd, {|out= 0, bufnum, freq= 60, amp= 0.01, pan= 0|
        Out.ar(out, Pan2.ar(OscN.ar(bufnum, freq, 0, amp), pan));
    }).send(s);
})
 
(
    var width= 300, height= 300, freqSpread= 100.rrand(1000).postln, muckProb= 0.0008,
        muck= 0, i= 0, j= 0, shapes, synths, pnt, w, u, freq,
        centerX= width/2, centerY= height/2, o= 0.1, frict= 1, lfo= 1, lfoSpeed= 0;
    w= Window("a tiny little white one", Rect(128, 64, width, height), false);
    u= UserView(w, Rect(0, 0, width, height));
    u.background= Color.black;
    w.onClose_({synths.do{|x| x.free}});
    CmdPeriod.doOnce({w.close});
    w.front;
    shapes= {|x| {1.0.rand}.dup(b[x].numFrames)}.dup(n);  /*init shapes*/
    synths= {|x| Synth(\wormsnd, [\bufnum, b[x].bufnum, \pan, x/(n-1)*2-1])}.dup(n);
    u.drawFunc= {
        shapes.do{|shape, x|  /*iterate shapes, x is index*/
            var dist;
            if((muckProb*0.1).coin, {muck= 4.rand});  /*now and then start mucking about*/
            if(muck>0, {
                ([
                    {pnt= Point(x/n*10, x/n*10)},
                    {pnt= Point(x/n* -10, x/n*10); if(muckProb.coin, {muck= 0})},
                    {pnt= Point(x.rand2, x.rand2); if(muckProb.coin, {muck= 0})}
                ][muck-1]).value;  /*offset the drawing in one of three ways*/
                if(i%2000==0, {muck= 0});
            }, {
                pnt= Point(0, 0)
            });
            lfo= (lfo+lfoSpeed).fold(0.05, 1);
            i= i+1;
            j= (j+10.rand2).fold(0, shape.size-1);
            shape.put(j, (shape[j]+o).fold(0.01, 1));
            if(muckProb.coin, {
                o= [0.15.rand2, -1, 1].wchoose(#[0.95, 0.025, 0.025]);
                frict= [0.997.rrand(1), 0.95.rrand(1.5)].wchoose(#[0.95, 0.05]);
                lfoSpeed= 0.0001.rand2;
                [
                    #[\o, \frict, \lfo, \lfoSpeed],
                    [o, frict, lfo, lfoSpeed].round(0.0001)
                ].lace(8).postln;
            });
            o= o*frict;
            b[x].sine1(shape.clip(0.01, 1));  /*update this arm's wavetable with new partial amplitudes*/
            Pen.strokeColor= Color.grey(x+1/n);
            Pen.moveTo(Point(centerX, centerY));
            shape.clump(2).do{|ll, k|
                var distance, angle, temp;
                #distance, angle= ll;
                pnt= Point(distance, distance).rotate(angle*2pi*lfo)+pnt;
                Pen.lineTo(
                    Point(
                        (pnt.x*10+centerX).clip(0, width),
                        (pnt.y*10+centerY).clip(0, height)
                    )
                );
            };
            Pen.stroke;
            dist= pnt.dist(0).clip(0.1, 20);  /*distance from 0, 0*/
            freq= dist/20+lfoSpeed+muck+(lfo*0.01.rand)*freqSpread+60;  /*nb: binary operators evaluate strictly left to right*/
            synths[x].set(\freq, freq, \amp, (1/n)*dist/20);
        }
    };
    {while{w.isClosed.not} {u.refresh; (1/30).wait}}.fork(AppClock);
)
 
b.do{|x| x.free};

Skare - New Video Online

2006-09-27 16:02 visuals

I recently made my first short video for Skare. We like to make things a little bit complicated for ourselves, and we also have a thing for ice, snow and all other variations on cold water.

First - to get some cheap audiovisual correlation - I put an old CD in the freezer for two weeks. Then one night I took it out and placed it over the bass element of a speaker. As the piece of plastic slowly adapted to room temperature, I let it vibrate to the deep, fat bass found in the track 'To the Other Shore' (released on Glacial Movements). This was all filmed twice, close up and in night-shot mode.

I then wrote a little Max/MSP/Jitter patch that mixed the two takes, matched the result with the audio file and saved the whole thing to disk. The resulting video is at http://www.inhospitable.se/skare/.

Screenshot of one of my uglier patches...

skare totheothershore screenshot


Useless SuperCollider Class no.1

2006-09-26 14:02 supercollider

Today I wrote a help file for an old SuperCollider class I had lying around. It simulates old telephone DTMF signals. Pretty silly, I must say, and I have forgotten why I created it in the first place. But I bet someone can find some strange use for it. Jim?

RedDTMF is part of the redSys quark. Install it by typing...

Quarks.install("redSys");
//and then recompile
s.boot;
{RedDTMF.dial("12AA34", 4)}.play;
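
For the curious: DTMF encodes each key as the sum of two sine tones, one from a low 'row' group and one from a high 'column' group. Here is a minimal sketch of that principle (just an illustration, not how RedDTMF is implemented - the frequencies are the standard DTMF matrix):

(
var low= #[697, 770, 852, 941];  /*row frequencies in Hz*/
var high= #[1209, 1336, 1477, 1633];  /*column frequencies in Hz*/
var keys= ["123A", "456B", "789C", "*0#D"];  /*the telephone keypad*/
~dtmf= {|char|
    var row, col;
    keys.do{|str, r| var c= str.indexOf(char); if(c.notNil, {row= r; col= c})};
    {SinOsc.ar([low[row], high[col]], 0, 0.1).sum.dup*XLine.kr(1, 0.001, 0.25, doneAction: 2)}.play;
};
)
~dtmf.($5);  /*key '5' = 770Hz + 1336Hz*/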

Work with Mark: Genetics

2006-09-25 15:23 research, supercollider

I also spent time at UoW learning about genetic algorithms and genetic programming. Mainly from John H Holland's books and Karl Sims' papers. I found it all very interesting and inspiring and again I got great help and input from Rob Saunders.

One of our ideas was to construct synthesis networks from parts of our agents' genomes, i.e. to have the phenomes be actual synths that would synthesise sound in realtime. The first problem to tackle was a really hard one: how to translate the genome - in the form of an array of floats - into a valid SuperCollider synth definition?

Of course, there are millions of ways to do this translation. I came up with the RedGAPhenome class, which works with only binary operators and control and audio rate unit generators. Unfortunately, there can be no effects or modifier units. On the other hand, the class is fairly flexible and it can deal with genomes of any length (>=4). One can customise which operators and generators to use and specify ranges for their arguments. One can also choose the topology of the synthesis network (more nested or more flat).

There is no randomness involved in the translation, so each genome should produce the exact same SynthDef. Of course, generators involving noise, chaos and the like might make the output sound slightly different each time, but the synthesis network will be the same.
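
To illustrate the basic idea, here is a toy version of such a deterministic mapping (not the actual RedGAPhenome code - the names and the gene encoding are made up): pairs of genes pick a generator and set its frequency, and the same genome always yields the same SynthDef.

(
~genomeToSynthDef= {|genome, name= \geno|
    SynthDef(name, {
        var gens= [SinOsc, LFSaw, LFTri, LFPulse];
        var sig= genome.clump(2).collect{|pair|
            var gen= gens[(pair[0]*gens.size).asInteger.min(gens.size-1)];  /*first gene picks a generator*/
            gen.ar(pair[1].linexp(0, 1, 50, 2000));  /*second gene maps to frequency*/
        }.sum/(genome.size/2);
        Out.ar(0, (sig*0.1).dup);
    });
};
)
s.boot;
~genomeToSynthDef.([0.1, 0.5, 0.7, 0.9]).add;  /*the same genome always gives the same synth*/
x= Synth(\geno);
x.free;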

This class produces a fantastic range of weird synths with odd synthesis techniques, and it is useful as a synth creation machine on its own. Here are some generated synths... n_noises, n_fmsynths, and corresponding 5 sec audio excerpts are attached below.

Then, after the struggle with the phenome translation, the code for the actual genetic algorithms was easy to write. The genome and its fitness are kept in instances of a class called RedGAGenome, and the cross-breeding and mutation are performed by the class RedGA. There are a couple of different breeding methods, but I found the multi-point crossover one to generally give the best results. All the above classes and their respective help files and examples are available on the page /code/sc/#classes. And there are many more automatically generated synths in the attached krazysynths+gui.scd example below.
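
Multi-point crossover itself is simple enough to sketch in a few lines (again a toy version, not the actual RedGA code): pick a few random cut points and alternate which parent supplies each segment.

(
var crossover= {|a, b, numPoints= 3|
    var cuts= ({1.rrand(a.size-1)}.dup(numPoints)).asSet.asArray.sort;  /*random unique cut points*/
    var parents= [a, b], which= 0, last= 0, child= [];
    (cuts++[a.size]).do{|cut|
        child= child++parents[which].copyRange(last, cut-1);
        which= 1-which;  /*switch parent at every cut point*/
        last= cut;
    };
    child;
};
crossover.((0..7)/10, (10..17)/10).postln;  /*two parent genomes of equal length*/
)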

I also made a couple of fun example applications stemming from this. One is a six-voice sequencer where you can breed synths, patterns and envelopes. It is attached as 'growing_soundsBreedPatternEnv.scd' below. (Note that the timing is a bit shaky. I really should rewrite it to run on the TempoClock instead of the AppClock.)

growing_soundsBreedPatternEnv screenshot

Ref articles:

Ref books:

  • John H. Holland - Hidden Order: How Adaptation Builds Complexity
  • Melanie Mitchell - An introduction to Genetic Algorithms
  • Richard Dawkins - The Blind Watchmaker

n_noises

n_fmsynths

Updates:

  • 101128: growing_soundsBreedPatternEnv.scd file updated, also see the post /f0blog/growing-sounds/.
  • 171229: converted some rtf files to scd and made the GUI run on latest SuperCollider (Qt)
Attachments:
krazysynths+gui.scd
growing_soundsBreedPatternEnv.scd

Work with Mark: iStreet - Online

2006-09-20 19:24 research

Taking the Intelligent Street project further, Mark d'Inverno wanted me to try to get it online. The idea was to let people surf to a webpage, send commands to a running iStreet system and hopefully collaborate with other online users to compose music. Just like in the original SMS-version of the piece, everybody could make changes to the same music and the result would be streamed back to all the online users.

The first prototype was easy to get up and running. We decided to use Processing and write a Java applet for the web user-interface and then stream the audio back with Shoutcast. The users would listen to the music through iTunes, Winamp or some similar program. This, of course, introduced quite a delay before they could actually hear their changes to the music. But that was not too bad, as we had designed the original iStreet in a way that made latency part of the user experience :-)

(The commands sent via SMS took approx 30 seconds to reach us from the Vodafone server, and that could not be sped up. The users were given no instant control over the music - rather, they could nudge it in some direction with their commands.)

So our internet radio/Shoutcast solution worked just fine and we had it up and running for a short while from Mark's house in London.

That was a total homebrew solution of course. We wanted it to handle a lot of visitors, deal with hopefully high traffic, be more stable and permanent, and not run on an ADSL connection.

So at UoW we got access to an OSX server cluster and I started to plan how to install SuperCollider, a webserver and iStreet on that. Little did I know about servers, networks and security, and I had to learn SSH and Emacs to get somewhere. Rob Saunders helped me a lot here.

Then there were some major obstacles. First of all, the cluster didn't have any window manager installed - not even X11. I spent many days getting SuperCollider and scel - Stefan Kersten's Emacs interface for sclang - to compile.

We also had some minor issues with starting the webserver and punching a hole in the university firewall etc., but the major problem turned out to be getting the audio streaming going. I didn't have root access and wasn't allowed to install Jack on the cluster. To stream, I needed a Shoutcast client and some way to get audio to it from SuperCollider. I did find OSX programs that could have worked, but none would run windowless on the console. So I was stuck.

The only solution was to write my own streaming mechanism. The resulting SuperCollider class for segmenting audio into MP3s is available here: /f0blog/work-with-mark-istreet-recording-mp3s-for-streaming/. A Java gateway handled the communication between SuperCollider and the Java applet that would stitch these files back together. (The Java gateway program also distributed all the other network data like the chat, checking online users, pending/playing commands etc. It used NetUtil by Sciss).

Unfortunately, I never got the streaming thing to run smoothly. Nasty hiccups in the sound made it impossible to listen to. The hiccups were probably partly due to my crappy coding, but I think the main error was in the ESS library for Processing. Either ESS (releases 1 and 2) can't do asynchronous loading or Java is just too slow to load MP3s without dropping audio playback. Very annoying.

After that defeat, I also spent time with Flash and made a little player that could load and play back MP3s smoothly. With help from my Flash expert friend Abe, we could also talk to the Flash thing from my Java applet via JavaScript. But time ran out, and this would have been a too complicated system anyway.

So the iStreet never made it online. But again I learned a lot about networks, Unix and Java, and some tools got developed in the process: RedGUI (a set of user-interface classes for Processing), ISRecord and ISGateway for SuperCollider, and ISgateway.java.

Screenshot of the Java applet running iStreet online...

istreet online screenshot


Work with Mark: iStreet - Recording MP3s for Streaming

2006-09-20 14:52 research, supercollider

Mark d'Inverno wanted to see the Intelligent Street installation gain new life online. So for streaming the sound from iStreet over the internet, I wrote a class for SuperCollider called ISRecord. It basically records sound to disk in small MP3 segments. Any sound SuperCollider produces will be sliced into many short MP3 files that can later be sent as small packages over the internet.

The technique is to continuously record the sound into one of two buffers. When one buffer is filled, the recording swaps over and continues in the other. The buffer that just got filled is written to disk, and conversion to MP3 is started. This swap-and-write-to-disk cycle should have no problem keeping up with realtime recording. But as the MP3 conversion takes a little extra time - depending on the quality, segment size etc. - there is a callback message from the MP3 converter that evaluates a user-defined segAction function when the conversion is finished. Thereby one can notify other programs when an MP3 file is ready to be used.

There is also a cycle parameter that controls how many MP3 segments to save before starting to overwrite earlier ones. This is needed to avoid flooding the harddrive with MP3s.

The actual MP3 conversion is done with LAME, and CNMAT's sendOSC is needed for LAME-to-SC communication.
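
A minimal sketch of the record-swap-convert cycle could look like the following (an illustration of the idea only, not the actual ISRecord code - segment length, file paths and the LAME invocation are assumptions, and a real implementation would wait for the disk write to finish before converting):

(
var segDur= 2, cycle= 10, bufs;
s.waitForBoot{
    bufs= {Buffer.alloc(s, (s.sampleRate*segDur).asInteger, 2)}.dup(2);
    SynthDef(\segrec, {|bufnum|
        /*record the main output bus once through the buffer, then free the synth*/
        RecordBuf.ar(In.ar(0, 2), bufnum, loop: 0, doneAction: 2);
    }).add;
    s.sync;
    Routine{
        var i= 0;
        loop{
            var buf= bufs[i%2], base= "/tmp/seg"++(i%cycle);
            Synth.tail(s, \segrec, [\bufnum, buf.bufnum]);
            segDur.wait;  /*this buffer is now full - write it while the other one records*/
            buf.write(base++".aiff");
            ("lame --silent"+(base++".aiff")+(base++".mp3")).unixCmd;  /*LAME assumed installed*/
            i= i+1;
        };
    }.play;
};
)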

Attached is the recorder class plus a help file.

Attachments:
ISrecord060920.zip

Work with Mark: iStreet - OS X

2006-09-19 01:04 research

At UoW I also rewrote an old installation called The Intelligent Street. Mark d'Inverno and I had worked together on this one earlier and we now wanted it to run on a modern computer (OSX). We also wanted to redesign it a bit and make it into a standalone application.

The original Intelligent Street was a transnational sound installation where users could compose music using mobile phones and SMS. That, in turn, was an extended version of a yet older installation called 'The Street' by John Eacott. This totally reworked 'intelligent' version premiered in November 2003 and was realised as a joint effort between the Ambigence group (j.eacott, m.d'inverno, h.lörstad, f.rougier, f.olofsson) and the Sonic studio at the Interactive Institute in Piteå (way up in northern Sweden).

For the new OSX version, we dropped SMS as the only user interface and also removed the direct video+sound links between UK-SE that were part of the old setup. Except for that, the plan was to move it straight over to SuperCollider server (SC3) and rather spend time on polishing the overall sound and music.

I roughly estimated it would take just a few days to do the actual port. Most of the old code was written in SuperCollider version 2 (Mac OS9) and the generative music parts were done using SC2's patterns and the Crucial library. So that code, I thought, would be pretty much forward compatible with our targeted SuperCollider 3. But sigh - it turned out that I had to rewrite it completely from scratch. The 'smart' tweaks and optimisations I had done in the old SC2 version, in combination with the complexity of the engine, made it necessary to redesign the thing bottom up. Even the generative pattern parts. Lastly, I also dropped the Crucial library for the synthesised instruments and did it all in bare-bones SC3.

But I guess it was worth the extra weeks of work. In the end, the system as a whole became more robust and better sounding. And not to mention it is now a standalone application, so hopefully it will survive a few years longer.

But I can also think of more creative work than rewriting old code. I've been doing that quite a lot recently. It feels like the old installations I've worked on come back to haunt me at regular intervals. And there are more and more of them each year :-)

Proof: Intelligent Street running happily under OSX...

iStreet OSX


Work with Mark: Shadowplay

2006-09-18 23:16 research

Idea

Yet another system Mark d'Inverno and I worked on but never finished had the working title 'shadowplay'. We had this idea for an audiovisual installation where people's limbs (or the outlines of their bodies) would represent grid worlds. Agents would live in these worlds and evolve differently depending on things like limb size, limb movement over time, limb shape and limb position. The agents would make different music/sounds depending on the world they lived in. A limb world could be thought of as a musical part in a score. The worlds would sound simultaneously but panned to different speakers to aid interaction.

The visitors would see the outlines of their bodies projected on a big screen, together with the agents represented visually as tiny dots. Hopefully, people could then hear the agents that got caught or bred inside their own limbs. We hoped to achieve a very direct feeling of caressing and breeding your own sounding agents.

There were plans for multi-user interaction: if different limbs/outlines touched (e.g. users shaking hands), agents could migrate from one world to another. There they would inject new genes into the population, influencing the sound, and maybe die or take over totally. To keep agents within their worlds, they were made to bounce off the outlines. But one could shake off agents by moving quickly or by just leaving the area. These 'lost' agents would then starve to death if not adopted by other users.

Tech

The whole thing was written in Processing and SuperCollider. Processing did the video and graphics: getting the DV input stream, doing blob tracking (using the third-party BlobDetection library) and drawing the agents and the lines for the limbs. SuperCollider handled the rest: the sound synthesis, the genetics, agent states and behaviours, keeping track of the worlds etc. We used a slightly modified version of our A4 agent framework, which I wrote about in the following post: /f0blog/work-with-mark-bottom-up-approach/.

The two programs communicated via a network (OSC) and would ideally run on different machines.
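
The SuperCollider side of such a link can be as simple as an OSC responder (a sketch using today's OSCdef - the '/blob' address and message format here are made-up assumptions, not the actual protocol we used):

(
OSCdef(\blobs, {|msg|
    var id= msg[1], x= msg[2], y= msg[3];  /*blob index and normalised position*/
    ("blob"+id+"at"+x+"@"+y).postln;
    /*...here update the matching agent world...*/
}, '/blob');  /*Processing would send e.g. [/blob, 0, 0.42, 0.13] via the oscP5 library*/
)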

I had major problems with the programming. The math was hairy and all the features were very taxing on the CPU. We never got further than a rough implementation.

shadowplay screenshot 1 shadowplay screenshot 2 shadowplay screenshot 3

