Using Processing to control Sonic Pi (updated)

UPDATE ADDED FOR SONIC PI 3.0 AND LATER

The line in the Sonic Pi program

nv=sync "/notesend"

is altered to

nv=sync "/osc/notesend"

Having been inspired by the superb Christmas Card from MeHackit (do look at it and download and play with the code) I resolved to take another look at Processing and how it can be used to control Sonic Pi. Previously I had only used it to run a conversion script to convert MusicXML files to Sonic Pi format, but seeing this Christmas Card showed that it is capable of far more. As a newbie to using the program in earnest I looked at some of the numerous examples at https://processing.org/examples/ and saw how easy it was to get information on mouse coordinates. I chose the Constrain example, which has a filled ellipse follow the mouse coordinates but bounded by an enclosing box, and decided to modify this to send coordinates to Sonic Pi using the OSC sync features added in Sonic Pi 2.11. With reference to the MeHackit code, it was easy to add OSC commands to send the mouse x and y coordinates to Sonic Pi, where they could be received and scaled to control the note pitch and cutoff values for notes played with the tb303 synth. This is just an example. You could control any parameters you wish, or add detection for mouse down as well in a more complex example. The Processing script used is:

import oscP5.*; //libraries required
import netP5.*;

OscP5 oscP5;
NetAddress sonicPi;

float mx;
float my;
float easing = 1; //1 gives immediate following; try e.g. 0.05 for smooth easing
int radius = 24;
int edge = 100;
int inner = edge + radius;

void setup() {
  size(640, 360);
  noStroke();
  ellipseMode(RADIUS);
  rectMode(CORNERS);
  oscP5 = new OscP5(this, 8000);
  sonicPi = new NetAddress("127.0.0.1", 4559);
}

void sendOscNote(float mx, float my) {
  OscMessage toSend = new OscMessage("/notesend");
  toSend.add(mx); //add mx and my values as floating point numbers
  toSend.add(my);
  oscP5.send(toSend, sonicPi);
  println(toSend);
}

void draw() {
  background(51);

  if (abs(mouseX - mx) > 0.1) {
    mx = mx + (mouseX - mx) * easing;
  }
  if (abs(mouseY - my) > 0.1) {
    my = my + (mouseY - my) * easing;
  }

  mx = constrain(mx, inner, width - inner);
  my = constrain(my, inner, height - inner);
  fill(76);
  rect(edge, edge, width - edge, height - edge);
  fill(255);
  ellipse(mx, my, radius, radius);
  sendOscNote(mx, my); //send the mx and my values to Sonic Pi
}

To use it, install Processing 3 from https://processing.org/ and paste the script into the sketch window which opens when you run it. Save the sketch with a suitable name and location. You need to add the oscP5 library from the Sketch=>Import Library…=>Add Library menu selection. You can then run the sketch and the ellipse should follow the mouse around inside its rectangle.

On the Sonic Pi side paste in the code below and run it.

use_synth :tb303
live_loop :os do
  nv=sync "/osc/notesend" #for Sonic PI 3 and later
  #puts nv #uncomment and comment next line to see OSC input
  #scale the mx and my values in nv[0] and nv[1] appropriately
  #raw mx 124 (left)-516 (right) and raw my 124 (top)-236 (bottom)
  puts (40+(nv[0]-124)/392*60).to_i.to_s+" "+(190-nv[1]/2).to_s
  play (40+(nv[0]-124)/392*60).to_i,cutoff: (190-nv[1]/2),sustain: 0.04,release: 0.01
  sleep 0.05
end

Run the SP program, which will wait for input from the Processing script. Run the Processing script and move the mouse around to alter the position of the filled ellipse (circle). Moving it left-right will alter the pitch of the note from 40 to 100. Moving it up and down will alter the cutoff from 72 (bottom) to 128 (top). Note it is possible to silence the note by moving the mouse bottom right, where the pitch 100 is significantly above the cutoff value 72 so you hear nothing.
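If you want to check the scaling before running anything, the arithmetic from the live_loop can be pulled out into plain Ruby (the method name here is mine, just for illustration):

```ruby
# Reproduce the scaling used in the Sonic Pi live_loop, so the endpoint
# values can be checked. The constrained ellipse gives raw mx 124..516
# and raw my 124..236.
def note_and_cutoff(mx, my)
  note = (40 + (mx - 124) / 392.0 * 60).to_i # 40 at far left, 100 at far right
  cutoff = 190 - my / 2.0                    # 128 at top, 72 at bottom
  [note, cutoff]
end

p note_and_cutoff(124, 124) # leftmost, top    => [40, 128.0]
p note_and_cutoff(516, 236) # rightmost, bottom => [100, 72.0]
```

Running this confirms the ranges quoted above: pitch 40 to 100 left to right, cutoff 128 down to 72 top to bottom.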

I hope that this simple example will inspire both you and me to explore further the use of Processing with Sonic Pi.

Here is a link to a video of the files in action


Sonic Pi plays Mozart’s Requiem in D Minor


Ever since the very early days of using Sonic Pi version 1, I had aspirations to see just how well the program might be used to give authentic renditions of pieces of Classical Music. Over the three and a half years that have intervened, this has slowly become more achievable, both as the program has matured and had extra features added, and as my own experience of using various techniques with the program has grown. This last year, and indeed the last six months, has perhaps been the most exciting, with improvements and new techniques allowing me to attempt the rendition of large scale musical works using Sonic Pi. You may well ask why do it in the first place, as other programs such as Sibelius, Logic Pro, Ableton and MuseScore2 can already go much further, and give more accurate renditions. I suppose it is because of the challenge, and because few others have explored this in depth with Sonic Pi, and I get a real buzz from listening to a piece, for which I have created the code (if not the music itself) for Sonic Pi to play. This is especially so when the music can usually be played on a Raspberry Pi, as well as its more powerful Apple or Windows big brothers, using Sonic Pi. I have also enjoyed preparing baroque music and earlier genres in particular. There is something rather cool about playing a piece purporting to be written by Henry the Eighth on Sonic Pi, and of course, there are not many copyright problems in doing so!

The basic thing that Sonic Pi does is to play a note, of a specified pitch, for a specified duration, and with a specified timbre. Everything else has to be built up from there. Subtleties can be added by adjusting the envelope of the sound: how it is attacked, and how it may decay away. Sonic Pi also allows various effects to be applied, the most useful of which for Classical Music purposes is reverberation, which can give a feeling of depth and spaciousness. However, it is daunting to see how to tackle a large scale work with hundreds if not thousands of notes, and perhaps, for a reasonable sized orchestra, 20 different parts, played by a variety of instruments. Initially I tried smaller works, with fewer instruments, perhaps two to four, and of shorter duration. By choosing works carefully, many instruments could be reasonably rendered by means of the built in synths, particularly :tri and :beep and :saw, and later, by the welcome addition of :piano. However, on the timbre side, the thing that really allows you to get a more realistic rendition of classical instruments is to use sample based instruments. As I have discussed in previous articles on this site, you can get a very reasonable rendition of some instruments simply by using a single sample, and playing it at different rates to achieve different pitches. Using the built in :ambi_glass_rub sample, which plays an :fs5 note, gives a pleasant sounding instrument when played over a variety of rates, and using the rpitch: option it is fairly easy to play any pitch you want. Things get even better when you look at some of the collections of samples that have been made of orchestral instruments, some of which are free to use.
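The relationship underlying this single-sample technique is simple: shifting by one semitone multiplies the playback rate by the twelfth root of two, which is what the rpitch: option applies for you. A quick plain-Ruby check (the helper name here is mine, not part of Sonic Pi):

```ruby
# rate <-> pitch relationship used when repitching a single sample:
# shifting by n semitones multiplies the playback rate by 2**(n/12.0).
# (Sonic Pi's rpitch: opt does this calculation for you; the helper
# name here is mine, for illustration only.)
def rate_for_rpitch(semitones)
  2 ** (semitones / 12.0)
end

puts rate_for_rpitch(12)  # one octave up: rate 2.0
puts rate_for_rpitch(-12) # one octave down: rate 0.5
puts rate_for_rpitch(7)   # a perfect fifth up: ~1.498
```

So playing :ambi_glass_rub (an :fs5 sample) at rate 0.5 gives :fs4, and intermediate rates fill in the notes between.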
I chose to utilise the Sonatina Symphonic Orchestra samples (although there are now potentially better sets of samples), because it was moderately easy to develop one or two defined functions in Sonic Pi to let you play notes using the range of instruments it covers, in just the same way that you would use a built in synth. Over the last couple of years I have refined these functions so that they can handle chord playing as well, and also deal with the problems that can arise in loading large numbers of samples, and I now use them routinely when developing Classical Music for Sonic Pi. For the future it would be good to develop the ability to use .sfz file formats directly, to play easily any sample based instrument defined in that way. What I have done for the Sonatina library is to work specifically with the samples which it incorporates, while side stepping the .sfz files it also defines.

The second major hold up to tackling large scale pieces is the sheer number of notes which need to be entered into the Sonic Pi code. This also quickly brings you up against the size limitation for the length of code possible in a single Sonic Pi buffer. Both of these problems have been reasonably solved in the last six months. The size limitation is solved by means of the new run_file command introduced in Sonic Pi 2.11. Instead of running code from a Sonic Pi buffer, it lets you run it directly from an external text file without the length limitations. The number of notes problem is made easier by means of a great script I found, written by Japanese Sonic Pi user Hiroshi Tachibana, using the program "Processing". The site was in Japanese but I managed to work out how to use it, and made some slight adjustments to it. It also got me using the free program MuseScore 2, which is a music notation program that can both enter and display musical scores, and also play them using midi, upon which it is based. It has the advantage that it is easy to edit the music, and to create individual parts from a score. Using the program it is possible to take a complex musical part, for example a piano part, and to split it up into several simpler parts which Sonic Pi can play, and which, when played together, sound just like the original complex part. Thus the Kyrie from the Mozart Requiem, which has 18 scored parts, in fact requires 26 separate parts to play in Sonic Pi. Using MuseScore2 and an existing midi score for a given piece, you can edit the parts in MuseScore, and extract them in MusicXML format to allow them to be used with the Processing script, and each run of the script will produce code for one of the 26 parts. These can then be pasted together into a (large) text file.
The Processing script is set up to produce code that is ready to play with a built in synth in Sonic Pi, using the play command. The notes can be retained, but if you want to play the parts with sample based voices, you then have to substitute the code used to play them. Luckily, once you have worked out how to do this once, using my functions which access the Sonatina Symphonic Library, you merely have to specify which instrument to use, and make sure that the relevant samples are preloaded at the start of the program. I have written three articles on using the Processing script which you can access here

The most difficult part of producing a reasonable performance remains: adding some dynamic variation (and possibly tempo variation), as specified in the composer's score. In principle it is easy to alter the dynamics. You simply alter the amp: level of a note. In practice, to do this for every note would add a third as much data again: note pitch, note duration and note amplitude. Instead I usually utilise the fx :level command. By using the structure below, you can alter the volume of a part at specified times, and even allow for crescendos and diminuendos by using amp_slide: parameters. The example shows the level control applied to the wind parts in the Benedictus of the Requiem. wff, wf, wm and wp are the dynamic levels to be used, and the control function switches between these at various bars during the performance. The note playing section would follow, inside the with_fx structure, and thus be affected by it. The in_thread structure means that the control takes place alongside the note playing and thus affects the volume of the notes, giving the desired dynamics.

with_fx :level do |vw| #wind level
  in_thread do
    use_bpm 51
    wff=1;wf=0.8;wm=0.6;wp=0.4
    control vw,amp: wm
    sleep 12
    control vw,amp: wp #b4
    sleep 58.5
    control vw,amp: wff #b18+2*c+q
    sleep 9.5
    control vw,amp: wp,amp_slide: 4 #b21
    sleep 64
    control vw,amp: wf,amp_slide: 4 #b37
    sleep 5.5
    control vw,amp: wp,amp_slide: 0 #b38+c+q
    sleep 33
    control vw,amp: wf #b46+2c+q
    sleep 14
    control vw,amp: wff #b50+q
    sleep 15.5
    use_bpm 167
  end

  #note playing section follows....
end #of with_fx :level

The setup process is quite tedious, as you have to calculate the gaps (in numbers of crotchet beats) between the various level changes, but it works quite well. I also find that you can nest fx :level changes and their effects multiply together, so I also use overall fixed level effect sections to adjust the relative strengths of different sections of instruments.
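The gap calculation can be scripted rather than done by hand. Here is a plain-Ruby sketch of a hypothetical helper (the helper and its conventions are mine, not part of the Requiem code): it converts a list of [bar, beat] score positions in common time into the sleep gaps needed between successive control calls.

```ruby
# Convert score positions to the sleep gaps needed between successive
# control calls. Positions are [bar, beat] pairs (bars numbered from 1,
# beats in crotchets from 0); beats_per_bar is 4 for common time.
# Illustrative helper only, not part of the actual Requiem code.
def level_change_gaps(positions, beats_per_bar = 4)
  absolute = positions.map { |bar, beat| (bar - 1) * beats_per_bar + beat }
  absolute.each_cons(2).map { |a, b| b - a }
end

# e.g. changes at the start, at bar 4, and at bar 18 + 2 crotchets + quaver
p level_change_gaps([[1, 0], [4, 0], [18, 2.5]]) # => [12, 58.5]
```

Those two gaps, 12 and 58.5, are exactly the first two sleep values in the Benedictus example above (#b4 and #b18+2*c+q).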

There remain further challenges to improve the accuracy of the final sounds. One thing I have not tackled yet is to come up with an easy way to change the style of playing, for example from legato to staccato. You can do this for individual notes by adjusting the ADSR envelope, but it is difficult to do so other than as a global setting for a particular instrument part, without holding a lot of extra data. At present I set the envelope for a part and it remains constant throughout a movement, and consequently I ignore one or two staccato notes. Something like a with_fx :envelope command might help, offering a similar solution to the way in which the with_fx :level command can be used. Another problem can be very long sustained notes. One or two of the samples I use are not long enough to cover the required length of note, and so it either fades out, or in some cases I repeat the note, which means that it doesn't sustain through exactly as intended in the score. Again on timing, it is quite difficult to do a rallentando if the parts are all moving and changing pitch at different times. Consequently the ends of some of the movements at present are rather more abrupt than I would like. I have recently written some code which can produce a more realistic rallentando, but it would require some manual changing of the code, replacing parts of the duration lists of notes with new calculated values. I have asked the question whether a bpm_slide command might be a possibility in Sonic Pi, but I think it will present some problems to implement. Also, the more processing that is loaded into the program, the harder it becomes to enable the more modest computing power of a Raspberry Pi to play it. Another problem illustrated by this piece is the way in which the choral parts are represented. I use a chorus "wah" sample, which lets you hear notes with some articulation at the beginning, but of course you get nothing of the words.
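As an illustration of the duration-rescaling idea (my own sketch, not the rallentando code mentioned above): a simple rallentando can be approximated by stretching each successive duration in a list by a linearly growing factor.

```ruby
# Stretch a list of note durations linearly so the tempo eases out.
# final_stretch = 1.5 means the last note lasts 1.5x its written length.
# Illustrative sketch only, not the code referred to in the article.
def rallentando(durations, final_stretch = 1.5)
  n = durations.length
  durations.each_with_index.map do |d, i|
    factor = 1 + (final_stretch - 1) * (i + 1) / n.to_f
    d * factor
  end
end

p rallentando([1, 1, 1, 1]) # => [1.125, 1.25, 1.375, 1.5]
```

The output list would replace the tail of a part's duration list, which is exactly the manual substitution step that makes this awkward to apply across many simultaneously moving parts.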
I have already seen some mention, in connection with Sonic Pi, of looking at vocoder technology whereby words can be generated and sung at different pitches. There are already apps to show this, but I think there is a long way to go before the technology might be usable in Sonic Pi.

So to finish, I hope that you enjoy my attempt at rendering the Mozart Requiem on Sonic Pi. It has been challenging but fun to do, and I find the end result quite pleasing, albeit imperfect and capable of further improvement.

You can see a youtube video of the completed Requiem here