
Absolut micro_noise2

2009-02-06 17:40 electronics

Today I built yet another micro_noise2 synth. This time inside a slightly strange bottle case. (thank you ap&bi)

absolut micro_noise2 photo1 absolut micro_noise2 photo2 absolut micro_noise2 photo3 absolut micro_noise2 photo4 absolut micro_noise2 photo5 absolut micro_noise2 photo6

Based on micro_noise by SGMK (www.mechatronicart.ch), modified by /f0 090206.

Also see micro_noise2, micro_noise_joy and micro_noise2 Batch.


Audiovisuals with SC

2009-02-05 00:26 supercollider, visuals

Audiovisuals with SC

by Fredrik Olofsson

In this article, we will investigate the built-in graphical features of SuperCollider and how they can be used artistically, in combination with the sound synthesis server. Different techniques for audiovisual mapping are presented along with some more personal reflections on the relationship between sound and graphics. My hope is that this text, together with the code examples, will provide inspiration for composers to work simultaneously in both the aural and the visual domain.

1 Introduction

It is clear that presenting any kind of visual stimuli in relation to music severely affects how we perceive music. Often our experience is altered in more radical ways than we would like to acknowledge: "...one perception influences the other and transforms it. We never see the same thing when we also hear; we don’t hear the same thing when we see, as well" (Chion 1994, xxvi). Our goal as audiovisual composers should be to wisely utilize and direct these transformations. They become our material (Chion 1994; Collins and Olofsson 2006; Alexander and Collins 2007).

Successful audiovisual pieces have the audience believe that sound and graphics truly exist in the same context. This is what Michel Chion calls the audiovisual contract (Chion 1994). How can we establish these contracts in our own work?

As we will be coding both music and visuals, the most obvious way would be to share parameters between the sound and the graphics. So we do not just match sounds with visual events in time, but rather take the parameters that generate the graphics and let them also control aspects of the sound—or vice versa.

Overusing these direct mappings is easily done, and we risk tiring our audience with too obvious a correlation over too long a time. This is what in film music is called Mickey Mousing; it happens when all visual events have sounds or musical phrases connected to them. A ball bounces: boing, boing; a duck climbs a ladder: upward scale, and so on. As this becomes much more of a problem for longer pieces, we will need to find ways to vary our mappings over time. It is often enough to convincingly establish the correlation at the very beginning of the piece. Your audience’s trust in the relation will last for quite some time. So not all visual events need to have sounding representations, but the ones that do should be strong and trustworthy. The same is true for sounding events in relation to visuals.

For multimedia pieces, there is also the problem of balance of attention. When we are dealing with such a perceptually powerful medium as visuals, we run the risk of having them overpower the music. We hear but simply forget to listen. How can we as audiovisual composers merge these two disparate domains instead of having one belittle the other?

In genres like film music, by comparison, a soundtrack that goes by unnoticed is often seen as a positive thing and something to strive for. So can we avoid composing subtle soundtracks for emotional manipulation, or providing sounds just to fill a void in the narrative? That is all art in its own right, but here we will focus on the well-balanced audiovisual work.

Given the normal predominance of the visual element, I find there is often a need to keep the graphics minimal and even slightly boring. Simplicity, regularity and consistency are all strategies that will help keep our minds from being distracted in the act of active listening. And we are very easily distracted. Just as loud unexpected sounds can be frightening, a sudden movement, an unexpected color or shape will most likely grab our attention. These foreign objects will kick-start our minds to seek explanations for why a particular thing pops up there at that time.

But this also means that our audience will actively come up with explanations and build narratives for things that were never meant to relate. They will want to see connections all the time by constantly looking for reasons—causes of the effects. We should strive to utilize this urge, feed it at the right times, play with and deceive it. We want to create the illusion of developments in the music being the logical cause of visual events.

So just as we are careful not to add sudden loud sounds in the mixing of a music track, I believe one should tread as carefully when presenting new shapes, colors, kinds of movements etc. Creating visuals could be seen analogously to composing music, where new sounds and themes are usually hinted at and prepared for in advance. Then these elements are developed, picked apart, recombined and used throughout the piece. Form, harmony, processes, theme and variations should be just as important concepts when composing with graphics.

But let us also not forget that contracts are made to be broken and that the visuals could and should provoke, jump out at you, be fun, annoying, wild and inspiring. Let them at times deliberately counteract the music. The effect of bringing it all back together will be all the stronger.

I hope the text here does not suggest a form of add-on graphics that just sit there and look beautiful. They can be so much more than just candy for the eyes. For me, audiovisuals are about the interplay of graphics and sound, and harmony is not always the most exciting option.

2 Graphics in SuperCollider

With the Pen class, SuperCollider provides simple two-dimensional graphics. Pen can only draw a few primitive shapes like lines, arcs, rectangles and ovals. These basic shapes can be stroked (outlined) or filled in different colors. As an example, the following code shows how to draw a red rectangle with a blue unfilled oval inside of it. The Rect class is required to specify coordinates and size for these objects.

(
Window().front.drawFunc= {
  Pen.fillColor= Color.red;  //set fill color
  Pen.fillRect(Rect(10, 20, 200, 100));  //10 pixels from left, 20 from top
  Pen.strokeColor= Color.blue;  //set stroke color
  Pen.strokeOval(Rect(20, 30, 180, 80));  //180 pixels wide, 80 high
};
)
figure 1

The line and lineTo methods help us draw custom shapes. Here the Point class is needed to specify line segments. This excerpt will draw a fat yellow triangle:

(
Window().front.drawFunc= {
  Pen.width= 8;  //set pencil width in pixels
  Pen.strokeColor= Color.yellow;  //set stroke color
  Pen.moveTo(Point(100, 100));  //go to start position
  Pen.lineTo(Point(150, 50));
  Pen.lineTo(Point(200, 100));
  Pen.lineTo(Point(100, 100));
  Pen.stroke;  //perform all collected drawing commands in one go
};
)
figure 2

Apart from drawing, Pen also lets you scale, translate (offset) and rotate the drawing area—but all in two dimensions only:

(
Window().front.drawFunc= {
  Pen.scale(0.5, 0.5);  //scale to half the size
  Pen.rotate(pi/4, 640/2, 480/2);  //rotate 45 degrees in a 640 by 480 window
  Pen.translate(100, 200);  //offset drawing 100 pixels from left, 200 from top
  Pen.fillRect(Rect(0, 0, 100, 100));
};
)

These transformations affect the drawing commands that follow and will help to position and animate your shapes. With the use method we can define a scope for transformations as well as color settings:

(
Window().front.drawFunc= {
  Pen.strokeColor= Color.red;
  Pen.use{  //remember state (push)
    5.do{|i|
      Pen.strokeColor= Color.grey(i/5);
      Pen.scale(0.75, 0.9);  //scale width and height
      Pen.strokeOval(Rect(20, 30, 180, 80));  //results in smaller ovals
    };
  };  //revert back to state (pop)
  Pen.strokeOval(Rect(20, 30, 180, 80));  //big oval (note same size)
};
)
figure 3

These are the basic features of the Pen class and they, like all drawing commands, must be performed within a certain window’s or user view’s redrawing routine (that is, within a drawFunc function).

A class with so few features can, of course, be quite limiting and frustrating to work with. For instance, you will have to combine several primitives to draw more complex shapes. But on the other hand, in combination with the outstanding flexibility of programming sounds in SuperCollider, Pen provides a unique improvisational way to explore and play with audiovisual mappings. Also, as it is so simple, it will force you to focus on the basic principles of shape, gesture and color.

Moreover, Pen’s constraints can have a positive effect on the outcome. You more or less have to do simple, minimal and straightforward graphics and program everything yourself. Your ideas will be shown in crystal clarity to your audience—for better or for worse, as there are no fancy and fluffy video effects to hide them behind.

3 Structure of the examples

It is recommended that you study the Pen, Color, Window and UserView help files alongside this text. The better knowledge you have of these classes, the easier it will be to modify and adapt the code provided to suit your needs.

All the examples referred to in this article use the same structure. First, we create a window and place a user view inside of it.

s.latency= 0.05;
s.waitForBoot{

  //--window setup
  var width= 500, height= 500;
  var w= Window("Example00 - structure", Rect(99, 99, width, height), false);
  var u= UserView(w, Rect(0, 0, width, height));

The reason we use a user view here instead of, as shown in various help files, drawing directly into the window with a drawFunc function, is that user views provide a few additional features that we will need later. Most importantly, it lets us control when and how to clear the drawing area. A window’s drawFunc will always erase previous drawings when the window is refreshed, and sometimes you would rather keep drawing new things on top of the current graphics, or draw while slowly fading out what was previously there. A user view can do this.
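As a small standalone aside (not part of the example framework), the following sketch shows the difference: with clearOnRefresh set to false, every frame draws on top of the previous ones.

(
var w= Window("clearOnRefresh test", Rect(99, 99, 300, 300));
var u= UserView(w, Rect(0, 0, 300, 300));
u.clearOnRefresh= false;  //keep what was drawn in earlier frames
u.background= Color.white;
u.drawFunc= {
  Pen.fillColor= Color.red;
  Pen.fillOval(Rect.aboutPoint(Point(300.rand, 300.rand), 10, 10));  //one new oval per frame
};
w.front;
Routine({while({w.isClosed.not}, {u.refresh; (1/20).wait})}).play(AppClock);
)

Set clearOnRefresh to true and only the most recent oval will remain visible.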

After creating the window and user view, some more variables are defined. These will vary from example to example depending on what we will need to keep track of in the main loop below. Here we will set up things like counters, synths and responders.

  //--variables
  var theta= 0;  //will be used as a counter. no external access at runtime
  var syn= SynthDef(\av, {|freq= 400, amp= 0, pan= 0|
    var z= SinOsc.ar(0, BPF.ar(Pulse.ar(freq, amp)*2pi), amp);
    Out.ar(0, Pan2.ar(z, pan));
  }, #[0.05, 0.05, 0.05]).play(s);
  s.sync;

We then have some more settings in the form of environment variables. These will define things that we want to be able to change while the program is running. They will be our interface for the program and in most of the examples we will change these settings manually via the interpreter. But we could just as well control them with the help of the mouse, MIDI or OSC responders. Some later examples will show how to do that.

  //--interface
  ~speed= 0.025;  //it is possible to change these at runtime
  ~radius= 20;
  ~spreadx= 20;
  ~spready= 20;
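As an aside, wiring one of these environment variables to an external controller is just a matter of adding a responder. A minimal sketch, assuming a made-up OSC address (this is not part of the example framework):

OSCdef(\speed, {|msg| ~speed= msg[1].clip(0.001, 0.2)}, '/f0/speed');  //first message argument sets ~speed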

Next in this general example structure comes the main loop. This function gets evaluated once each time the window is refreshed, and it is here that all of the actual drawing will take place.

  //--main loop
  u.drawFunc= {
    var x= sin(theta)*~spreadx;  //calculate coordinates
    var y= cos(x)*~spready;
    var a= x.hypot(y)/1.42/~spreadx;  //distance from the centre, roughly normalised to 0..1 (1.42 ≈ sqrt(2))
    syn.set(  //update the synth with mapped parameters
      \freq, y.linexp(height.neg*0.5, height*0.5, 100, 1000),
      \amp, a.min(0.995),
      \pan, x.linlin(width.neg*0.5, width*0.5, -1, 1)
    );
    Pen.translate(width*0.5, height*0.5);  //offset all drawing to the middle
    Pen.fillColor= Color.red;  //set the fill color
    Pen.fillOval(Rect.aboutPoint(Point(x, y), ~radius*a, ~radius*a));
    theta= theta+~speed%2pi;  //our counter counts in radians
  };

In this case, we calculate positions and then use them to control a synth. Vertical window position is mapped to frequency (note the linear to exponential scaling), distance from window centre sets the amplitude and horizontal window position the panning.
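To get a feel for what these scaling methods do, they can be evaluated on their own (the numbers below assume the 500 by 500 pixel window from the code above):

(-250).linexp(-250, 250, 100, 1000).postln;  //one edge -> 100 Hz
0.linexp(-250, 250, 100, 1000).postln;  //centre -> ca 316 Hz (the geometric mean)
250.linexp(-250, 250, 100, 1000).postln;  //other edge -> 1000 Hz
0.linlin(-250, 250, -1, 1).postln;  //centre -> pan position 0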

Finally, there are lines of code that set the user view’s clear behavior, give the window a background color and make the window visible. There is also a routine that drives the animation by redrawing the user view around 60 times a second. Without this animation routine, the drawFunc function would only be evaluated once.

  //--window management
  u.clearOnRefresh= true;  //erase view for each refresh
  u.background= Color.white;  //set background color
  w.onClose= {syn.free};  //stop the sound when window closed
  w.front;  //make the window appear
  CmdPeriod.doOnce({w.close});
  Routine({
    var nextTime;
    while({w.isClosed.not}, {
      nextTime= Main.elapsedTime+(1/60);
      u.refresh;
      (nextTime-Main.elapsedTime).max(0.001).wait;
    });
  }).play(AppClock);
};

So this will be the framework we work within. It should be pretty straightforward to follow, and I believe it is flexible and general enough to serve you, dear reader, as a springboard for your own audiovisual experiments. Later in this article, we will add features that make the examples look more complex, but this basic structure will remain the same.

4 One-to-one mappings

Direct cross-domain mappings are a fun and creative way to generate and control sounds and visuals. Before we look at more specialized examples, let us start with some simple tests and try to investigate, very roughly, which techniques could be used to attain a strong audiovisual correlation. There is some subjectivity involved, and I do not want to make any binding claims or lists of rules to follow.

Example01a - louder is bigger

In this first example, we map sound amplitude to object size. The code follows the structure outlined above and the only special thing is the line var pat= Pn(Pshuf(#[0, 0, 0, 0, 0.1, 0.25, 0.5, 0.75, 1, 1], 8), inf).asStream; which creates an endless stream of amplitudes in the form of a pattern that reshuffles an array after eight repetitions. This is just to get some variation while still keeping to a fairly repetitive rhythm. Repetition will help us see the effect of our mapping technique more clearly.
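To see what this pattern produces, one can step through the stream by hand:

p= Pn(Pshuf(#[0, 0, 0, 0, 0.1, 0.25, 0.5, 0.75, 1, 1], 8), inf).asStream;
20.do{p.next.postln};  //the same shuffled ten values repeat eight times, then reshuffle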

With the program still running, try changing the environment variables at the bottom. Different settings let us explore the effect of mapping in different situations. This one-to-one mapping of loudness and size is a very strong one. It is hard to imagine a more direct example.

Example01b - louder is smaller

In this example, we use exactly the same code as in Example01a, but invert the relation of amplitude and size. So the louder the volume, the smaller the object. The only line added is amp= 1-amp; which inverts the amplitude value just before drawing the oval.

Of course, there is a strong correlation between graphics and sound here as well, but it can feel stranger to watch. With a greater radius and a slower frame rate (~fps), you will probably notice it even more. This way of mapping, although as direct and consistent as the previous version, does not feel nearly as 'natural'. Why is that? We are so accustomed to the bigger-is-louder representation from the real world that it is hard to appreciate this backwards mapping in its own right.

Example02a - higher is bigger

Next, we try a new technique. We map the frequency of the sound to the size of the object in such a way that higher pitches will draw bigger ovals. The code is almost identical to the previous examples except for some minor alterations to the synth definition. Now it lets us set the frequency with the scaleFreq argument. I find this audiovisual mapping is also very strong and direct.

Example02b - higher is smaller

After that, we again invert the mapping of the previous example. One line differs and now lower pitches are drawn bigger. Interestingly enough, one could think that if physical laws were governing how 'natural' a mapping would be to us, then bigger objects would be more likely to sound with a lower pitch. It might be due to the construction of this particular example, but to me, it does not map across domains as well as Example02a.

Example03a - louder is brighter

Example03b - louder is darker

These two examples demonstrate the effect of connecting brightness to amplitude. We use a gray color for the window background to try to be a bit more neutral. The mappings used in these two examples are perhaps not so direct and obvious, but personally, I find that louder-is-brighter feels better than when louder means a darker color. Do not forget to test the different settings with the environment variables at the bottom.

Example04a - higher is higher

Example04b - higher is lower

Example05a - left is left

Example05b - left is right

Here are two pairs of examples where we use position on the screen as a parameter. These are of course very useful parameters to play with, and although we probably agree on them as basic and 'natural' principles we can follow, they can also easily be ruled out or temporarily lose meaning. Projecting on the floor or on the ceiling, for instance, will obviously invalidate the otherwise strong up-equals-higher-frequency assumption. Perhaps it will work just as well with higher-frequency-equals-further-away, but often one has to invent the logic to accompany the display situation.

Example06a - louder is higher

Example06b - louder is lower

Another pair of examples of the same kind as 04 and 05. We are perhaps slightly more forgiving about the direction in which sound amplitude is mapped than we are with frequency and panning. But here, I think louder mapping to a higher position on the screen works well. This could very well be a standard we have just become used to from all the different applications using this metaphor.

Example07a - higher is faster

Example07b - higher is slower

Here is another mapping technique that is often overlooked. The connection between frequency and speed of movement, as demonstrated here, is an important and strong one.

Example08a - faster is faster

Example08b - faster is slower

For these next examples, we change the SynthDef a little to provide shorter pulses that we can play at any rate. It surprises me how well the faster-is-slower mapping works in this case. This is perhaps only due to the specifics of the example, and the reader might like to investigate the synchronization of the phase of the sound onsets versus the left-right visual position.

Example09a - brighter is sharper

Example09b - brighter is smoother

These examples investigate whether sounds with a brighter timbre match objects with sharper corners, and whether smoother, less complex sounds fit better with rounded objects. Maybe we are so used to seeing waveforms plotted that we cannot see the inverted version (Example09b) in its own right.

This code is a little bit more complex as we use a function to draw a star shape, varying the number and the size of the arms. The arguments for the function are position, the number of arms, and outer and inner radius. We also use stroke here instead of fill to draw the line segments.
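The attached archive contains the actual code; the following is just a minimal sketch of how such a star function could look (the name star and the exact geometry are my assumptions):

(
var star= {|pos, arms= 5, outer= 80, inner= 40|
  Pen.moveTo(Point(outer, 0)+pos);  //start at the first outer point
  arms.do{|i|
    var a= (i*2pi/arms)+(pi/arms);  //angle of the inner point after this arm
    var b= (i+1)*2pi/arms;  //angle of the next outer point
    Pen.lineTo((Point(cos(a), sin(a))*inner)+pos);
    Pen.lineTo((Point(cos(b), sin(b))*outer)+pos);
  };
  Pen.stroke;
};
Window().front.drawFunc= {
  Pen.strokeColor= Color.black;
  star.(Point(200, 150), 7, 90, 45);  //position, arms, outer and inner radius
};
)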

Example10a - voices are objects

Example10b - voices not objects

One obvious correlation is to let each visible object have its own unique sound or voice. Then, more objects mean more sound in total. As the second example shows, the opposite mapping is quite hard to appreciate.

In this code, we create an array of fifty synths and play them at the same time. The environment variable ~num decides how many of these will be assigned an amplitude above zero and be heard. CPU cost is thus constant rather than dynamically changing, but this technique allows simpler coding. We also give audible objects a unique frequency and panning position.
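A minimal sketch of this constant-CPU idea, with an assumed SynthDef name (\blip) standing in for the one used in the example:

(
s.waitForBoot{
  SynthDef(\blip, {|freq= 400, amp= 0, pan= 0|
    Out.ar(0, Pan2.ar(SinOsc.ar(freq, 0, amp.lag(0.1)), pan));
  }).add;
  s.sync;
  ~num= 5;  //number of audible voices
  ~synths= 50.collect{|i|
    Synth(\blip, [\freq, 100*(i+1), \pan, i.linlin(0, 49, -1, 1)])
  };
  ~synths.do{|syn, i| syn.set(\amp, if(i<~num, {0.02}, {0}))};  //the rest stay silent
};
)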

Example11a - harmonicity is order

Example11b - harmonicity is disorder

This last pair of examples in this section maps equally spaced graphical objects to the harmonic series. The less equally spaced the objects are, the further from the harmonic series their partial frequencies will be. In the inverted version this relationship is flipped, and less visual order brings us closer to the overtone series.

There should be nothing difficult to understand in the code. The ~dist parameter is in percent and will decide how much each object deviates both graphically and from the harmonic series.
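The deviation idea in numbers, as a rough sketch (a 100 Hz fundamental is assumed here):

(
~dist= 25;  //deviation in percent
8.do{|i|
  var harm= (i+1)*100;  //partials of the harmonic series
  var freq= harm*(1+(~dist/100*1.0.rand2));  //deviate up to ~dist percent up or down
  [i+1, freq.round(0.1)].postln;
};
)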

5 More mappings

In most of the examples above, I believe that the majority of us would agree on which mappings are the most effective. There seem to be some basic rules governing what can be considered good audiovisual mappings after all. Yet, there are many more possible relations to investigate, and for these, I believe fewer and fewer of us will agree on how well they work. Pitch in relation to color is a difficult area that appears more subjective (maybe because colored hearing is a form of synesthesia; see Alexander and Collins 2007).

Example12 tries to combine many of the above mappings in a single program. We map amplitude to size, brightness and screen position. Frequency and panning depend on screen position and the speed of the object, and the more voices you add (~num), the more graphical objects will appear. Example13 is another little program with multiple parameters mapped across the domains.

With as many one-to-one parameter mappings as in these examples, the result will obviously not be proportionally more effective in terms of audiovisual correlation. The effect seems to average out after the first few parameters. In both examples, notice how hard it becomes to follow individual objects when there are lots of things happening at the same time. A great number of objects might even lessen the overall correlation effect of direct mapping.

It is my impression that it is preferable to use a few clear parameter relations that are instead varied over time. A smaller number of visible objects with fewer mappings will result in a stronger impact for those mappings that are present. So rather than trying to relate everything in the music to something showing on the screen, pick the most prominent feature of the music (possibly by ear), find a mapping for it and let that be the only thing visible (Collins and Olofsson 2006).

To keep an audience’s interest with fewer parameters mapped (and avoid ‘Mickey Mousing’), we can let our one-to-one mappings change during the piece. These transitions could be important parameters for defining the form or to play with when designing interactive installations. For example, imagine an installation with a deliberately odd correlation between the audio and the visuals. As people start to engage more with it, when they start to collaborate and try out new things, the mapping could, as a reward, become more ‘natural’ and direct. This is a subtle but effective way to keep people engaged. They will feel that they gain more control as the system changes.

6 Systems

One common method in audiovisual works is to set up some form of system and let that drive both the sound and the graphics. Simple models of physical laws like gravity could be part of such a setup, and Example14 presents a basic particle system with gravity and damping. Systems such as this are often implemented precisely as audiovisual pieces, as seeing the effect of (say) gravity is something quite different from only hearing it. The eyes will guide the ears. Also, these constructions lead us to control our sounds in ways that may otherwise be harder to conceive.

For Example14, click and drag with the mouse to create new balls. The balls will bounce around for a little while and then slowly disappear. The Point class is used as a vector to describe direction and velocity, and there is also a function that returns a dictionary for every ball created. Each dictionary stores the unique settings for a given ball.

Example15 is an implementation of a simple but beautiful system John Whitney describes in his book Digital Harmony (Whitney 1980). Whitney was a pioneer in early computer graphics as well as in experimental film. In his system, the second ball rotates at half the speed of the first, the third at half the speed of the second, and so on. The outermost ball will take many minutes to complete a cycle. Each ball has a unique frequency that is heard only once per lap. Complex patterns arise as the string of balls twists and unfolds.
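The graphical half of the idea fits in a few lines. This is only a rough sketch of the halving-speed principle (sound, and Whitney's actual parameters, are left out):

(
var num= 12, theta= 0;
var w= Window("whitney sketch", Rect(99, 99, 400, 400));
var u= UserView(w, Rect(0, 0, 400, 400));
u.background= Color.white;
u.drawFunc= {
  Pen.translate(200, 200);  //origin in the middle
  num.do{|i|
    var a= theta*(0.5**i);  //each ball at half the speed of the previous one
    Pen.fillOval(Rect.aboutPoint(Point(cos(a), sin(a))*((i+1)*14), 5, 5));
  };
  theta= theta+0.05;
};
w.front;
Routine({while({w.isClosed.not}, {u.refresh; (1/30).wait})}).play(AppClock);
)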

7 Audio analysis

So far we have generated the sounds together with the graphics. Now we look into analysis of the audio signal and how that can be used to drive visuals. Example16 sets up a number of peak filters spread out between 200 and 6000 Hertz. The amplitude of each filter output is tracked and sent back to the program with the help of a SendTrig and OSCFunc pair. This is a very common technique used in dedicated real-time video programs to extract data from the music, though typically it only analyses the musical surface and not component events. There are also always problems with latency and inaccurate frequency matching. If you generate the music yourself and know the synthesis parameters, for instance using patterns, then it is much better to simply map that data directly and not have to deal with analysis at all.
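A minimal sketch of the filter-bank technique described above, not Example16 itself (the number of bands, the reporting rate and the filter bandwidth are assumptions):

(
x= {
  var in= SoundIn.ar(0);
  8.do{|i|
    var freq= 200*((6000/200)**(i/7));  //eight bands spread 200..6000 Hz
    var amp= Amplitude.kr(BPF.ar(in, freq, 0.2));
    SendTrig.kr(Impulse.kr(25), i, amp);  //report each band 25 times per second
  };
}.play;
OSCFunc({|msg| [msg[2], msg[3]].postln}, '/tr', s.addr);  //posts [band index, amplitude]
)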

Another option is to use FFT and draw sonograms. Example17 shows one way that this can be accomplished. The version with rotation uses a little trick to manually clear the drawing area. If we draw a rectangle covering the whole area and fill it with a semi-transparent color, we get a nice trail effect:

Pen.fillColor= Color.grey(1, ~trails);
Pen.fillRect(Rect(0, 0, width, height));

As the ~trails variable approaches zero, the rectangle will fade out the previous frame slower and slower.

Example18 uses yet another technique. It just draws the raw waveform of the source sound. With some rotation and trail effects, these simple graphics can get quite interesting.

figure 4

8 Audiovisual instruments

Example19 shows how to set up MIDI control to a combined audiovisual instrument. I tend to play differently when seeing graphics like this and I know I start to prefer sounds and sequences that also look good. This process could be thought of as a form of forced synesthesia.

You might need to edit the settings for the MIDIIn.control function to match your particular MIDI device. By default, it expects MIDI controller numbers 1 to 7.
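To find out what your device actually sends, a quick sketch like this can help (the posting format is my own):

MIDIIn.connectAll;
MIDIIn.control= {|src, chan, num, val|
  [chan, num, val/127].postln;  //post channel, controller number and a 0..1 value
};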

9 Presenting your work

After finishing your piece, you probably want to present it in some way. For projecting with a video projector, I would recommend that you set a black picture as the desktop picture on your external monitor (if your computer supports it). Then create a borderless, non-zoomable window and place it somewhere in the centre. This technique will let you keep your drawing’s dimensions, and it will look the same with the projector set to different resolutions.
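Such a window could be set up as follows; a minimal sketch in which the screen position is an assumption you would adjust for your setup:

(
var width= 640, height= 480;
var w= Window("", Rect(200, 200, width, height), resizable: false, border: false);
w.background= Color.black;
w.front;
)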

There is a fullscreen feature for SuperCollider windows, but note that it might resize your drawings to match the current screen dimensions, and this is often something you want to control yourself. CPU usage will also go up, as drawing in bigger windows requires more work.

Consider projecting onto other surfaces than the standard rectangular white screen. I myself have had good results with different materials such as transparent mosquito nets, huge machinery wrapped in paper, hanging naked bodies etc. With the clip method you can define a mask for your visuals and have the drawing only happen within those bounds (see Example20):

Pen.moveTo(Point(200, 200));  //move to start position
Pen.lineTo(Point(300, 100));  //define a mask (here triangular)
Pen.lineTo(Point(400, 200));
Pen.lineTo(Point(200, 200));
Pen.clip;  //set the mask for drawing commands that follow
//continue drawing here
//anything outside the triangle will automatically be clipped

Be prepared to also tune your colors if projecting onto a non-white surface as they will become tinted.

In SuperCollider it can be a bit tricky to render your work and save it as a movie file. You could record directly onto a digital video (DV) camera that has analogue video input (s-video or composite). The camera will also record your sound in sync and not tax the CPU of the computer. The drawback is that it may result in far from perfect quality. Another (emergency only) option is to film the computer screen with a camera. With an LCD screen, the result is not as bad as one would expect, but still far from optimal.

A better option is to capture an area of the screen in real-time with screengrab/cast programs. For Mac OS X there is iShowU and Snapz Pro, for Linux Demorecorder and for Windows Taksi and Fraps. These programs are now efficient and can record both high-quality video and sound in realtime without taking too much of the computer’s CPU power.

For very high resolution and CPU demanding visuals, you might consider writing your own rendering engine. Edit your program to not draw anything, but instead collect all the drawing commands in an array without actually performing the drawing. The audio can be separately rendered to an audio file using the NRT mode if necessary. For the visuals, you play back the drawing commands at a very slow frame rate. Use one of the screen recording applications mentioned above to create a high-quality movie (or, if you use Mac OS X, SCImage to write single image files). Finally, you combine the movie and the previously recorded sound file in, for example, QuickTime Pro. A marker in the form of a single white frame and an audio impulse might be needed to get the sync back (or you could timestamp your drawing commands).
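One way to think about the collecting step, sketched with hypothetical helper names (~commands and ~record are made up for this illustration):

~commands= List.new;
~record= {|func| ~commands.add([Main.elapsedTime, func])};  //store a timestamp and a drawing closure
~record.({Pen.fillOval(Rect(10, 10, 50, 50))});  //collect instead of drawing directly
//later, inside a drawFunc, step through ~commands at any frame rate you like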

But the best option of all is, of course, to distribute your work as open-source code. With no loss in quality, your audience can study and learn from your work and, most importantly, you can use generative techniques for variations. The piece can be endless and surprising even for you as its creator. This is something you obviously lose when recording your work into a fixed medium.

For museums and galleries, consider building a dedicated standalone version of SuperCollider that will automatically start your piece at startup. See the help files on using the startup file and creating standalone applications.

10 Other options for graphics

If you want to go beyond two-dimensional graphics and use OpenGL or realtime video, then the Pen class will not be sufficient. Also, note that Pen is not the best choice if you plan to animate hundreds of objects. It does not perform as well as specialized graphical environments. Good options include MaxMSP/Jitter, Pd/Gem, Processing, LuaAV and more. Communication with these programs is easily handled using Open Sound Control.
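Sending data out to such a program is just a couple of lines; the address, port and OSC path below are assumptions that depend on the receiving side:

n= NetAddr("127.0.0.1", 12000);  //host and port of the receiving program
n.sendMsg('/freq', 440);  //send any parameters you like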

There is also ScGraph (a graphical server written by Florian Schmidt), SCQuartzComposerView and SCImage (the last two are built into SuperCollider Mac OS X; see the respective help files). SCQuartzComposerView lets you play and control Quartz Composer compositions within a SuperCollider window. SCImage adds advanced image processing and bitmap operation features to SuperCollider via the CoreImage framework. It works well together with Pen and lets you, among other things, write your Pen graphics to disk as single image files (tiff, bmp, jpeg etc.).

11 Ending

Please consider this text and its humble investigation of audiovisual mappings as a starting point for your own experiments. There is much more to explore in the relation of graphics and sound, and lots of room for personal interpretation. The examples here only present a limited set of techniques, but there are parts that I hope can be reused and built upon further. Take them apart, break them and remix the code! In particular, I think the area of personalized combined audiovisual instruments is very interesting; designing and playing these systems completely changes my concept of what visualization and sonification of processes, systems and music can mean.

12 References

Alexander, A., and Collins, N. 2007. Live Audiovisual Performance. In Collins, N., and d'Escrivan, J., eds. The Cambridge Companion to Electronic Music. Cambridge: Cambridge University Press.

Chion, M. 1994. Audio-Vision: Sound on Screen. New York: Columbia University Press. Originally published 1990; translated by Gorbman, C.

Collins, N., and Olofsson, F. 2006. klipp av: Live Algorithmic Splicing and Audiovisual Event Capture. Computer Music Journal 30(2): 8-18.

Whitney, J. 1980. Digital Harmony: On the Complementarity of Music and Visual Art. Peterborough, N.H.: Byte Books/McGraw-Hill.

Attachments:
audiovisuals_with_sc-examples35.zip
audiovisuals_with_sc-examples39.zip

f0blog Under the Hood Updates

2008-12-11 22:26 other

Now using Drupal 6.8 (previously 5.7). There is no stable audio module for 6.x just yet, so all my MP3s are hidden at the moment. In a few days, I hope.


Why Share?

2008-12-04 23:27 other

We all know how fantastic the open-source movement is. How wonderful it is with all these people that distribute their code, schematics, data, ideas etc. for free and in such a spirit of openness. We gain so much from it and it is all really great.

But seen from the contributor's point of view, one could ask the question: why share your hard-earned knowledge? What are the benefits, and why spend a lot of time helping unknown people - often without even a thanks in return? Why give away code that you care about and that has taken lots of effort and hours to write - for free? Is it for personal fame? Is it the communal spirit or some political belief? Or the lack of interested commercial companies?

My personal and perhaps slightly provocative answer is that I share out of egoism/self-interest. I have found that by making something public, I force myself to write more reusable and understandable code. Publicising means I will proof-read, add comments and help files, and perhaps cross-platform support. Sharing also makes me reluctant to make drastic changes, and helps fix things like interface, protocol and functionality in place. So after uploading code, I feel responsible for maintaining it for the future, throughout system and program upgrades - whether other people depend on it or not. The knowledge that someone might be using it is enough to make me put in that little extra effort and spend a few additional hours.

So for me as an independent software developer/artist, open-source is mainly a vehicle for caring about my own work. And it is the simple act of making it open and public that is so extremely helpful for me.

Of course, this is not the only reason. It is a great pleasure to see code I have written be helpful in other people's work, to get feedback from users, and to see my ideas developed a lot further by other artists. I also enjoy helping out wherever I can, passing on knowledge from the people I, in turn, learned from. And being a frequent contributor in various communities does generate paid work in the form of workshops, concerts, programming jobs and technical support.

But again - the main reason I share is a selfish, practical and simple one: I write better code because I distribute it.


IAMAS Bye bye

2008-12-02 11:04 air-japan

Late autumn and my six months are over. They passed so incredibly fast I cannot believe it. But I guess that's a sign of me having had a brilliant and productive time. Thanks to staff and students. I'll miss Ogaki.

photo of IAMAS in Ogaki

Tokyo 2

2008-11-11 09:37 air-japan

After my two performances at Make: Tokyo Meeting 02, I had some time to retrace my steps in this metropolis, visiting the same places, taking the same pictures.

Now and then...

photo tokyo now photo tokyo then

Now and then...

photo tokyo now photo tokyo then

But I also made some new acquaintances: plastic food town / Kappabashi street...

photo tokyo kappabashi 1 photo tokyo kappabashi 2

the iceman...

photo tokyo ice man

window cleaners...

photo tokyo window cleaners 1 photo tokyo window cleaners 2

SuperCollider Socks

2008-11-11 05:02 supercollider

A fantastic present from tn8. Thank you so much.

Quarks.install("redSys");
//and then recompile
s.boot;
{RedFrik.ar(2008)}.play  //noise socks!
SuperCollider socks photo1 SuperCollider socks photo2 SuperCollider socks photo3 SuperCollider socks photo4

Yamaguchi

2008-11-04 13:55 air-japan

One peaceful place. Especially Sesshu's beautiful zen garden at Joueiji Temple.

photo yamaguchi 1 photo yamaguchi 2 photo yamaguchi 3 photo yamaguchi 4

'a place of sonic beauty'

photo yamaguchi 5

Also, Yamaguchi is a city of hot springs (onsen) and new media art (YCAM). A very good combination.

photo yamaguchi 6
