why share?

We all know how fantastic the open-source movement is. How wonderful it is with all these people that distribute their code, schematics, data, ideas etc. for free and in such a spirit of openness. We gain so much from it and it is all really great.

But seen from the contributor's point of view, one could ask: why share your hard-earned knowledge? What are the benefits, and why spend a lot of time helping unknown people - often without even a thanks in return? Why give away code that you care about and that has taken lots of effort and hours to write - for free? Is it for personal fame? Is it the communal spirit, or some political beliefs? Or the lack of interested commercial companies?

My personal and perhaps slightly provocative answer is that I share out of egoism / self-interest. I have found that by making something public, I force myself to write more reusable and understandable code. Publicising means I will proof-read, add comments and help-files and perhaps cross-platform support. Sharing also makes me reluctant to make drastic changes, and helps fixate things like interface, protocol and functionality. So after uploading code, I feel responsible for maintaining it into the future throughout system and program upgrades - whether other people depend on it or not. The knowledge that someone _might_ be using it is enough to put in that little extra effort and spend a few additional hours.
So for me as an independent software developer / artist, open-source is mainly a vehicle for caring about my own work. And it is the simple act of making it open and public that is so extremely helpful for me.

Of course this is not the only reason. It is a great pleasure to see code I have written being helpful in other people's work, to get feedback from users and to see my ideas being developed a lot further by other artists. I also enjoy helping out wherever I can, passing on knowledge from people I in turn learned from. And being a frequent contributor in various communities does generate paid work in the form of workshops, concerts, programming jobs and technical support.
But again - the main reason I share is a selfish, practical and simple one: I write better code because I distribute it.

sc multiple apps

during a recent gig i suddenly felt a need to crossfade between two songs/patches (strange as it seems). as i'd coded my music using RedMst, which in turn makes heavy use of class methods, starting up a new song would disrupt the music currently playing (tempo, current index, etc). i also had some master effects running on bus 0 and 1 (ReplaceOut) that belonged to a particular song, and i didn't want those for the new one.

so how to isolate patches? i got the idea of running two copies of the supercollider application at the same time. they'd be identical, use the same soundcard and share the same class library. having two apps running would also be good for safety reasons: if one crashed i could keep on playing with the other. it turned out to be quite easy. the trick is to use the internal server and find a way to visually tell the two applications apart.
dan stowell helped me by implementing thisProcess.pid, and i got some great tips from cylob, who used a similar setup for his live performances. here's my take on it... (you'll need a recent version of supercollider (>30oct '08))

1. put the code below into your startup.rtf file. it'll make the secondary app green and shift its windows to the right - this is only so one can tell them apart.
2. go to supercollider application folder and duplicate the program file (cmd+d). rename the copy "SuperCollider (green)". remember to duplicate again when you've updated your main sc app.
3. important: from now on use internal as the default server in your code. localhost will crosstalk when running multiple apps.
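for reference, the switch in step 3 takes just a couple of lines at the top of a patch (a minimal sketch - nothing here is specific to my setup):

```supercollider
//use the internal server so each running application copy
//gets its own synthesis engine - localhost would crosstalk
Server.default= Server.internal;
s= Server.default;
s.boot;
```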

so with a simple duplicate command and minimal change to sc itself (just for cosmetic difference), i can now crossfade my songs.

//put this in startup.rtf...

//--detect secondary application and colour it green
var p, l;
~green= false;
p= Pipe("ps -xw | grep 'SuperCollider (green)' | grep -v grep | awk '{print $1}'", "r");
l= p.getLine;
p.close;                //always close a pipe when done reading
if(l.notNil and:{l.asInteger==thisProcess.pid}, {~green= true});

if(~green.not, {        //for main app (red)
        GUI.skins.put(\redFrik, (
                background: Color.red.alpha_(0.8),
                foreground: Color.black,
                selection: Color.grey,
                unfocus: 0.9,
                fontSpecs: ["Monaco", 9],
                offset: Point(0, 0)
        ));
}, {                    //for secondary app (green)
        GUI.skins.put(\redFrik, (
                background: Color.green.alpha_(0.8),
                foreground: Color.black,
                selection: Color.grey,
                unfocus: 0.9,
                fontSpecs: ["Monaco", 9],
                offset: Point(455, 0)
        ));
});

//colour the post window and any new documents to match
Document.listener.background_(GUI.skins.redFrik.background.blend(Color.grey(0.9), 0.9));
Document.initAction_{|doc| doc.background_(GUI.skins.redFrik.background.blend(Color.grey(0.9), 0.9))};

//these two lines referenced an 'x' window object not defined above
//(probably the internal server window) - left commented out:
//x.window.bounds= x.window.bounds.moveBy(GUI.skins.redFrik.offset.x, 0);
//x.window.view.background_(GUI.skins.redFrik.background.blend(Color.grey(0.9), 0.9));

//shift the post window sideways for the secondary app
Document.listener.bounds_(Rect(GUI.skins.redFrik.offset.x, GUI.window.screenBounds.height-580, 450, 580));


RedMst, RedTrk, RedTrk2 and RedSeq are a set of supercollider classes that i have now finally cleaned up and released. i wrote them about a year ago and have been using them for live performances since then. they function as a sort of timeline, and the basic idea is to sequence code in a very simple way.
the master keeps track of when some tracks should play and then switches them on/off with .play and .stop at the right time (next quant beat, default= 4). a track can be any code that responds to these two messages (pbind, bbcut2, pdef, routines, etc).
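as an illustration of that contract (this is not the actual RedMst api - just showing that a plain routine already responds to .play and .stop):

```supercollider
(
var track= Routine{
	inf.do{|i|
		("beat:"+i).postln;
		1.wait;
	}
};
track.play(TempoClock.default, quant: 4);	//start on the next 4-beat boundary
//...and later the master would send...
//track.stop;
)
```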

redMst is distributed as a quark. see the Quarks helpfile for more info on how to install.


redUniform - wireless hardware

i have constructed a wireless sensor system for my performance costume. it has 2 accelerometers (3-axis adxl330 from sparkfun - the same as in the wii remote, i believe) and 4 digital switches. both the master and the slave use a microcontroller (atmega8, atmel) and a transceiver (mirf v2, sparkfun).
below are a short video, some pictures, supercollider classes and also the schematics and the complete firmware if you want to build the thing yourself.

redUniform-hardwareWireless from redFrik on Vimeo.

the avr programmer (usbtinyisp kit v2.0) and the electronic parts needed...

the master and slave...

sensors sewn in...

081008 update: slave firmware and slave schematics updated - added 2 more switches (pd6 and pd7)
081010 update: also attached the supercollider class RedUniform.sc with helpfile

supercollider study group (scsg) in japan

tn8 and i have started weekly meetings for people interested in interactive sound programming using supercollider. here is our wiki with example code and a discussion forum. much of it is in japanese, but the example files should have comments in english.
we meet every friday at 17.00 for a ~2h session. 2nd floor, new building, iamas, ogaki. open to everyone.

MaxPat - max patch parser, converter, manipulator, generator

i wrote a max5 to max4 patch converter in supercollider. the reason was to get more people to install sc. so this is software with a hidden agenda.
download it here. see the readme for installation instructions and info on how the conversion works.
it can also be used to generate, examine, manipulate and process max5 patches in different ways. below are some screenshots of patches from the max-examples folder that i manipulated using this class. they might look strange, but they are fully functional. i just did things like adding curly segments to all the patchlines.

red-framework published on googlecode

in june i cleaned up and released my red-framework for managing max/jitter patches. it is hosted here, and you can get it via an anonymous svn checkout.

the framework is for stacking, chaining and mixing max/jitter patches and shows my way of organising patches. i've been working on/with it since 2006 and it now contains >100 modules. it can handle jitter, control data, midi and also softvns video under max4.5.

you are welcome to join the project if you are interested. it is easy to write your own modules.


(for osx 10.4 and earlier you'll first need to install svn separately)
in the terminal type:

svn checkout http://red-framework.googlecode.com/svn/trunk/ red-framework-read-only

then press 'p' to accept the certificate permanently.
last, add the red-framework folder to max's file preferences.

it is licensed under gnu gpl v2 and requires max5+jitter for osx. it has not been tested on windows xp yet, but it should run.


modules: generators, modulators, outputs
faders: cross, gain, etc.
slots = module+fader
chain = slots in series
stack = slots in parallel
mixer = go from parallel to serial

a max/jitter patch following a simple standard
it must have 2 inlets: in, ctrl
and 2 outlets: out, info
the module can be generator, modulator or output

a slot is a fader + a module
slots also have 2 inlets: in, ctrl
and 2 outlets: out, info

builds a stack of slots - serial in and parallel out

builds a chain of slots - serial in and serial out

a mixer of slots - parallel in and serial out

pros and cons...

why use red-framework?
same for jitter, midi, controldata, softvns
reusable patches
generalised and efficient

i have made various bigger performance patches using red-framework
special gui/bpatchers for stacks, chains, mixers

only discrete events - no msp
no opengl or shaders
too complicated to perform with
went back to my old os9 patch
eg. learning the effect chain - not re-ordering!

