Here are my best-of twitter tweets so far. See twitter.com/redFrik for the rest. With the hard limitation of 140 characters, it is really challenging to write a piece of SuperCollider code that fits, sounds good and is interesting to listen to for more than a few seconds.
99 sine oscillators are played one after the other at two-second intervals. Each oscillator lasts for 27 seconds, so the total duration is 99 * 2 + 27 = 225 seconds. The oscillators are phase modulated by other sine oscillators whose frequencies cycle through the series 500, 501, 502, 603, 604, 605, 706, 707, 708. The base frequencies of the 99 carrier oscillators slowly rise by one Hertz at a time from 1 to 99. The only random element is the stereo panning position of each oscillator.
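For clarity, the timing and frequency scheme can be worked out in a few lines of Python. This is only an illustration of the arithmetic described above, not the actual SuperCollider tweet; all names here are made up.

```python
# Sketch of the schedule described above (illustration only, not the
# original tweet code; all names are hypothetical).
count, interval, length = 99, 2, 27   # oscillators, onset spacing (s), duration (s)

# total running time as given in the text
total = count * interval + length     # 99 * 2 + 27 = 225 seconds

# phase-modulator frequencies: groups of three consecutive values,
# each group starting 103 Hz above the previous one
series = [500 + 103 * (i // 3) + i % 3 for i in range(9)]

# one event per oscillator: (onset time, carrier Hz, modulator Hz);
# carriers rise by one Hertz from 1 to 99, modulators cycle the series
events = [(n * interval, n + 1, series[n % 9]) for n in range(count)]
```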
A clip noise generator runs through a Moog filter and then a reverb. Every third second a new noise is added, and each noise lasts for 22 seconds, so at most eight noises overlap at any one time. Each Moog filter has two unique parabolic LFOs, one per channel, running at a random rate between 0 and 0.3 Hertz. With so many reverbs playing at once, one needs to allocate more memory to the SC server. Something like this... Server.local.options.memSize= 32768; ...and then reboot the localhost server.
This one is completely deterministic, although that is hard to believe when hearing it. It is built from a lot of nested feedback sine oscillators. Nine parallel oscillators are mixed down to one channel and duplicated in the left and right channels. Each of the nine oscillators is frequency and feedback modulated but has a static amplitude of 1/9. The frequency modulator consists of yet more modulated feedback sine oscillators, while the feedback of the outer oscillator is modulated only by a single feedback sine oscillator running at 0.1 Hertz. In total, 109 unit generators run in this tweet, and it peaks at about 9.3% of my computer's CPU.
These four all work the same way. They differ only in buffer size and in what kind of oscillator is used for reading back samples from the buffer. There is not much progress over longer stretches of time, but they do have character and subtle, though deterministic, variation in the details.
This tweet is also totally deterministic, without any randomness. Here a lot of nested square wave oscillators create the complexity. Basically, there are four channels/voices mixed down to one and then duplicated in the left and right channels. The frequency modulation is nested three levels deep, with another set of square waves mixed and added.
Binary numbers from 0 to 255 form the rhythmic patterns in this tweet. Each number is first repeated 8 times, and each time all the bits are shifted one position to the left. That creates an array of 64 ones and zeros. This array is then repeated four times, which is what makes it sound like a 4 x 4/4 bar theme (i.e. 4 bars per number). The resulting 64 * 4 * 256 binary digits are played in sequence, and each digit lasts for 1/50th of a second. The total duration becomes 64 * 4 * 256 / 50 = 1310.72 seconds. The sound is generated directly from the ones and zeros. There is an exponential decay on these flip-flop pulses that slowly increases throughout the 256 numbers: it starts with a decay time of 0.008 seconds and ends with 2.04 seconds. In the MP3 below only the numbers 0 - 31 are played.
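One plausible reading of that pattern logic, sketched in Python rather than the original SuperCollider. I am assuming the left shift is kept within 8 bits, with the high bits dropped; the tweet itself may do this differently.

```python
def pattern(n):
    """Return the 64 ones and zeros for one number n (0-255): the
    number repeated 8 times, bits shifted one step left each time."""
    steps = []
    for shift in range(8):
        shifted = (n << shift) & 0xFF              # stay within 8 bits
        steps += [int(bit) for bit in format(shifted, '08b')]
    return steps

# each 64-step array is played 4 times, for every number 0-255,
# at 50 digits per second
total_digits = 64 * 4 * 256
total_seconds = total_digits / 50                  # 1310.72
```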
Another tweet without any randomness. There are five parallel routines, and each does something 500 times: it defines or redefines a node proxy. There are 25 proxies in total, and each one contains a sine-shaped oscillator panned to one of four positions in the stereo field. The frequencies climb upwards in a slightly jagged curve. The exact length is a bit complicated to calculate, but it is around 575 seconds. At the end, the proxies do not stop playing but just keep their last assigned frequencies, and the whole soundscape becomes static.
Here are two SuperCollider documents I made years ago to resemble two MaxMSP patches by Katsuhiro Chiba. His patches are wonderfully constructed, with playful and nice GUI design, and they sounded good to me. So I was curious how he made them, and to learn more I ported parts of his code to SuperCollider. The Max4 patches are still available here, along with some screenshots.
All credit for the attached code should go to Mr Chiba.
At first, I was assigned to play last and would thereby have had the honour/responsibility of waking everybody up for breakfast at 7:00 in the morning. Both fun and daunting. How do you let your audience sleep as long as possible and then wake them up gently but firmly?
I assumed most people would be asleep when it was my turn and that I, too, would be very tired. So I prepared my solo set (using redThermoKontroll) to be extremely simple and static in nature. Hardly any variation - only a couple of sound sources (4 SuperCollider patches, I think it was) with looong transitions between settings that I could control manually.
Then there was the waking-up part, and for that I wrote the very simple-sounding code below. It starts slowly with one little alarm going off. Then a second alarm, then more and more until complete chaos. I made it so that this patch could be mixed in with my drones, with separate control over its volume.
Unfortunately, when I got to Stockholm they had changed the playing order, and I did my performance around midnight instead. So I never set the alarms off.
To run the attached files you will need to install my RedGA library from the /code/sc/#classes page.
First is a little exploration tool called growing_sounds10. It makes a single SynthDef from the settings of a multi-slider. Draw something and press space to generate and play the SynthDef. If you like what you hear, you can copy the code from the post window and refine it manually.
Also, try the preset functions. They will either generate, modify or set a fixed setting/drawing in the multi-slider.
The SynthDef generated in the screenshot above was this one...
Another file is called growing_soundsBreed, and it gives you control over 6 parent genomes that can be turned into SynthDefs and auditioned by clicking the play buttons. Mark the good-sounding parents and breed a new generation. The genomes will then be mixed and mutated, resulting in 6 new children. They are likely to sound similar to their parents, and the more times you repeat the process, the more similar the genomes, and in turn the SynthDefs (the phenotypes), will become.
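The breed step works roughly like this generic genetic-algorithm sketch. This is illustrative Python only, not the RedGA implementation, and all function names are made up; I assume a genome is simply a list of floats between 0 and 1, matching the multi-slider values.

```python
import random

# Generic genome breeding sketch (illustration only, not RedGA code).
def crossover(a, b):
    """Mix two parent genomes gene by gene."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.1, amount=0.1):
    """Nudge some genes by a small random offset, clipped to 0..1."""
    return [min(1.0, max(0.0, g + random.uniform(-amount, amount)))
            if random.random() < rate else g
            for g in genome]

def breed(parents, n_children=6):
    """Produce a new generation from the marked parents."""
    return [mutate(crossover(*random.sample(parents, 2)))
            for _ in range(n_children)]
```

Repeating this loop converges the population, which is why the children sound more and more alike over generations.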
Yet another piece of code is growing_soundsBreedFitness, and it works the same way as the previous one, except that here you give each parent a rating, i.e. a fitness (blue sliders = fitness amount). So instead of just marking parents, you rate them. The system uses these ratings as a guideline of importance when choosing which parents' genes to use for the new generation.
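Fitness-proportional selection of this kind is often done with a "roulette wheel". Here is a generic sketch of that idea in Python; again, this is not the actual RedGA code, just an assumption about how such ratings are typically used.

```python
import random

# Roulette-wheel parent selection (illustration only, not RedGA code).
def select(parents, fitnesses):
    """Pick one parent with probability proportional to its fitness."""
    threshold = random.uniform(0, sum(fitnesses))
    accumulated = 0.0
    for parent, fitness in zip(parents, fitnesses):
        accumulated += fitness
        if threshold <= accumulated:
            return parent
    return parents[-1]   # guard against floating-point rounding
```

A parent rated 3.0 is then three times as likely to pass on its genes as one rated 1.0, but low-rated parents still get an occasional chance, which keeps some diversity in the population.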
And last is a file called growing_soundsBreedPattern, in which you can breed not only SynthDefs but also the amplitude patterns they play in a simple sequencer.
As always, all code is published under the GNU GPL v2 license.
180101: changed file format from rtf to scd and confirmed it working under SC 3.9
I found this birdcall synthesis tutorial by Andy Farnell very good when it was published. And as I wanted to learn more about synthetic bird song, I ported the Pure Data patches. So here is a version of hansm-bird-control.pd for SuperCollider.