Sonic Pi 3.0 arrives. Get going with its MIDI and OSC commands.

It is six months since the last release of Sonic Pi, and you may be forgiven for wondering what has been going on. The answer is that over this period Sam Aaron has been working extremely hard to bring midi connectivity to Sonic Pi, as well as enabling the program to receive audio input from external sources. This may sound fairly modest, but it has involved a huge amount of work to ensure that midi input and output are fully synced with Sonic Pi's extremely accurate timing system. During the same period Sam's funding to develop Sonic Pi has reduced and comes to an end at Christmas, making it difficult for him to spend the time needed to develop the application further.
The initial release of version 3.0 is for the Mac, and set up details in this article are specific to that version.
The image below shows the new Pro Icon Set (optional) and the new IO tab in the Preferences Panel. The new version was developed as version 2.12.0, then as 2.12.0-midi-alpha1 to 8, but the major changes involved justified a version bump to 3.0, named "IO" (Input/Output). Sonic Pi was developed as a platform where children could be introduced to coding via the medium of music. It became a hit in many schools, but that led to many requests to enable it to accept external input, or to give output to drive external synths or music modules. The established mechanism for this (since the early 1980s) has been the midi protocol, although on its own it can be a bit limiting, and more modern music communication also uses OSC, short for Open Sound Control. In fact Sonic Pi has always used OSC internally to allow its various parts, the GUI (Graphical User Interface), the Ruby server and scsynth (the SuperCollider synth engine), to communicate. Sam collaborated with Joe Armstrong to use the language Erlang, which is designed for situations involving concurrency and the passing of large numbers of messages, and so it was ideal to handle the scheduling of midi information into Sonic Pi. This was helped by two small applications developed by Luis Lloret which convert midi messages to OSC messages as they come into Sonic Pi, and OSC messages back to midi as they leave Sonic Pi for the outside world.
This image of the info screen lists some of the new facilities in Sonic Pi 3.0.

Let's look first at midi. For this to work you need either something which can produce midi signals to send to Sonic Pi, or something which can receive and react to them. This will cost money if you use hardware devices, but you can try out midi on a Mac (and on other platforms with appropriate software) using suitable software programs loaded onto your machine. One free program that I have used is MuseScore2, which you can download here. It can be used both to receive midi from Sonic Pi and play it using software instruments within MuseScore, and to play a midi file which is then received by Sonic Pi and played using Sonic Pi's own synths. Alternatively you may have GarageBand on your Mac, which can do much the same. Another software synth which can be used is Helm, which is donate-ware, i.e. you contribute what you think it is worth: here. Finally you can download a free virtual keyboard called vmpk from here. It is a bit old and slightly flaky on macOS Sierra but it does work, although it can take a little playing around to get everything right. The key is the Midi Connections entry on its Edit Menu. You can configure it to act as a player using the built-in FluidSynth software synthesiser, or to act as a simple keyboard. The picture below shows it set up as a player. To use it as a keyboard input instead, disable Midi In and change the Midi Out driver to CoreMIDI with the SonicPi Connect port.

All the above software is effectively free (if you are miserly and don't want to donate for Helm). However, on the Mac there are two pieces of software which I would recommend considering buying. The first is an excellent little midi player called MidiPlayer X, which is available on the App Store for a very modest £1.99, so it doesn't break the bank. You can drag and drop midi files onto it and either play them on the built-in sound module, or select any available midi device from a drop-down menu. You can loop the file or build a playlist, and also alter the tempo and selectively mute channels. I have used it a lot with the development version of Sonic Pi over the last few months. The second item, TouchOSC, lets you interact with Sonic Pi from an iPhone or iPad, and there is also an Android version, although I have not tried that. I have built several quite substantial projects using it, utilising not just the midi connectivity of Sonic Pi 3.0 but also its external OSC messaging facilities. These include a substantial midi player controller project, which lets you choose synths, envelopes, volumes, pan settings etc., utilising Sonic Pi as a 16 track midi player, with built-in support for a full GM midi drumkit as well. Another simpler project creates a virtual midi keyboard (editor build screen shot shown below), with selection of sustain times and choice of synth for Sonic Pi. A third project modified the Hexome project published in The MagPi issue 58 to work with Sonic Pi 3.0. TouchOSC is a little more expensive at £4.99, but still pretty cheap, although you do need a suitable device from which to run it. Now that Sonic Pi 3.0 is released I hope to publish the software for these, but you can already see development videos of them on my YouTube channel here.
In all cases, one further vital link is required. Midi signals are passed between devices. Initially these were physical boxes like keyboards and synthesiser modules, connected together by 5 pin DIN leads. When computers came on the scene, external devices started to use USB leads as a means of connection, and if you plug in such a device, like my small M-Audio Oxygen8 keyboard, the Mac recognises it as a midi input device. However, programs like Sonic Pi have no physical presence, so the Mac lets you create virtual midi devices which let programs such as GarageBand talk to Sonic Pi via midi.
The key program to set this up on the Mac is called Audio MIDI Setup and it can be found inside the Utilities folder in your main Applications folder. Start up Audio MIDI Setup and select Show MIDI Studio from the View Menu (if it says Hide MIDI Studio the window should already be visible). Find the icon entitled IAC Driver and double click to open it. If you see a flag saying More Information, click on it too. Now find the entry IAC Bus 1 and click to rename it to something more memorable. I called mine SonicPi, and I renamed the Port as Connect. Finally tick the Device is online flag and click the Apply button (see the image below).

You can of course use any names you like, or leave the default device name, IAC Driver, with the default port name, IAC Bus 1. Note: Sonic Pi converts all midi device names to lowercase and replaces any spaces with _ characters, to make life a little less confusing.
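For example, with the names chosen above the device will show up in Sonic Pi as sonicpi_connect, and you can target it explicitly from Sonic Pi, for instance with the midi command introduced below (without the port: opt the message goes to every connected midi output):

midi :e5,port: "sonicpi_connect" #send a single note just to the renamed IAC device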

If you launch Sonic Pi 3.0 and select IO, the new tab on the Preferences window, you should see sonicpi_connect listed under the MIDI inputs and MIDI outputs sections. Now launch a suitable program to receive midi from Sonic Pi. I tried GarageBand and MuseScore2. For GarageBand, open the application and choose a new empty project. Select Software Instrument to insert one instrument, set by default to Classic Electric Piano. In Sonic Pi type

midi 72

Run the program, and all being well you should hear a piano note played in GarageBand. Sam has achieved his aim of producing something that an 8 year old can do!
You can change the instrument sound in GarageBand using the library on the left. Here a marimba is chosen.
To do the same thing in MuseScore2, launch that app (nb an updated version was released very recently) and close the Start Centre popup window. Select Preferences on the MuseScore Menu, select the Note Input tab and make sure Enable MIDI Input is ticked. Then select the I/O tab and select the SonicPi Connect midi input as shown. Note the message about restarting MuseScore. Click OK. Restart MuseScore, again closing the Start Centre popup window, and then you can run the midi command from Sonic Pi again. All being well you should hear a piano note being played. If not, check that the midi din-plug icon at the top of the MuseScore window is highlighted (i.e. active) and try again. You can also change the sound in MuseScore from the Mixer window on the View Menu (NB NOT the Instruments entry on the Edit Menu; that is for something else). The picture below shows a Glockenspiel sound being selected for the Piano output. You can even have MuseScore and GarageBand played at the same time by Sonic Pi if both are set up together as described!

You can then try a slightly more sophisticated program to play notes chosen at random from a scale.

live_loop :midi_out do
  n=scale(:c4,:major).choose
  midi n,sustain: 0.2
  sleep 0.2
end

Try experimenting by altering the sleep and sustain times. You could also choose a different scale, or maybe transpose by using midi n+3 instead of midi n.
You can add further control by using the option vel_f: which is followed by a number in the range 0 to 1, specifying the velocity with which a standard midi keyboard note is pressed, i.e. the volume. Try changing the midi line to

midi n,sustain: 0.2,vel_f: 0.3

The instrument you hear is entirely controlled by the receiving program. In GarageBand you can choose a different instrument from those available in the library, e.g. a bright punchy synth, or in MuseScore2 open the Mixer on the View Menu and choose, say, a Clavinet from the dropdown list as illustrated above.
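Before moving on to input, here is one possible variation of the loop above combining the suggestions so far (a different scale, transposition up an octave and a quieter velocity); the particular values are just examples to experiment with:

live_loop :midi_out do
  n=scale(:e3,:minor_pentatonic).choose #a different scale
  midi n+12,sustain: 0.3,vel_f: 0.3     #transposed up an octave, quieter velocity
  sleep 0.3
end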

So much for midi output. What about input? To handle this Sonic Pi utilises its cue and sync system: the cues are provided by incoming midi events, such as when a note is received from a connected midi device. Midi cues can be generated by various actions: when a note turns on, when it turns off again, when a midi control message arrives (these might be used to change a synth, for example), or when a midi pitch-bend wheel is moved. All of these and more can be catered for in Sonic Pi. (They can also be sent out from Sonic Pi, as well as the simple midi command used to play notes, which in fact automatically sends both note_on and note_off events.) The code below will respond to midi note_on events (many devices send note_off as a note_on event with zero velocity, so those will appear too).

live_loop :midi_input do
  use_real_time #gives fast response by overriding the sched ahead time
  use_synth :tri
  #wait for a note_on event from midi source sonicpi_connect
  b = sync "/midi/sonicpi_connect/*/*/note_on"
  #b is a list with two entries.
  #The note value in b[0] and the velocity value  in b[1] 
  puts b
  #b[1] has range 0-127. Convert to float
  #then scale it to range 0-1 by dividing by 127
  play b[0],release: 0.2,amp: b[1].to_f/127 #play the note 
end

#you can use two variables, say b and c, to get the information from the sync
#b,c = sync "/midi/sonicpi_connect/*/*/note_on"
#if you prefer to do so, amend the program appropriately
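As an aside, the same cue and sync mechanism covers the other midi event types mentioned above. The following is only a sketch: the cue names follow the same pattern as note_on, but exactly what values arrive (and which controller numbers your hardware uses) will vary, so check the log output from puts for your own device.

live_loop :midi_cc do
  use_real_time
  num,val = sync "/midi/sonicpi_connect/*/*/control_change"
  puts "controller #{num} changed to #{val}" #val is in the range 0-127
end

live_loop :midi_bend do
  use_real_time
  b = sync "/midi/sonicpi_connect/*/*/pitch_bend"
  puts "pitch bend value #{b[0]}" #typically 0-16383, with 8192 as the centre
end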

To use this we can produce midi input from the virtual keyboard. However, we can first check it out even more quickly by combining it with the midi send program discussed above. That gives us this complete program.

live_loop :midi_out do
  n=scale(:c4,:major).choose
  v=0.7
  midi n,sustain: 0.2,vel_f: v,port: "sonicpi_connect"
  sleep 0.2
end

live_loop :midi_input do
  use_real_time #gives fast response by overriding the sched ahead time
  use_synth :tri
  #wait for a note_on event from midi source sonicpi_connect
  b = sync "/midi/sonicpi_connect/*/*/note_on"
  #b is a list with two entries.
  #The note value in b[0] and the velocity value  in b[1] 
  puts b
  #b[1] has range 0-127. Convert to float
  #then scale it to range 0-1 by dividing by 127
  play b[0],release: 0.2,amp: b[1].to_f/127 #play the note 
end

I have slightly altered the first part of the program to include a velocity setting, and I've also explicitly named the midi port to be used, rather than sending to all of the available ones. Try altering the v setting, say to 0.3, and press Run again. Or perhaps put n+12 in the midi send line in the first loop and go up an octave. Note that if you still have GarageBand and/or MuseScore running then they will play along too! You can mute them using the mute icon beside the instrument name in GarageBand, and by clicking the midi din icon in MuseScore to toggle off midi input.

To try the keyboard, first comment out the first live_loop so it doesn't send any midi. Then launch vmpk. Go to the Edit Menu and select Midi Connections. Make sure that midi input is not ticked. Select CoreMIDI for the Midi OUT driver and choose SonicPi Connect for the Output Midi Connection (as shown in the screen shots above). Because the keyboard program creates further midi connections, you need to update Sonic Pi: in the Sonic Pi Preferences I/O tab click the Reset MIDI button. This takes a few seconds, but eventually you will see the connection lists updated; there will be some vmpk entries, although we are not using them here. Now run the Sonic Pi program (with the first live_loop commented out) and play the virtual keyboard. You should hear the notes you click with the mouse. Also, in the vmpk preferences under the vmpk menu you can enable your Mac (typing) keyboard to activate note input. With this selected, you can type the appropriate keys and play Sonic Pi when the vmpk program has focus. Note that when you quit vmpk, it will remove its virtual midi devices, and you will have to reset the Sonic Pi midi setup again using the Reset MIDI button.

Not only does Sonic Pi 3.0 add midi in and out, it also enables you to feed audio input directly into Sonic Pi, where you can modify it by applying fx like :reverb or :flanger. To do this you WILL need some additional hardware to feed audio in. I use a Steinberg UR22 MkII audio/midi interface, which gives me two audio in/out channels as well as a hardware midi in/out connection which can be attached to an external midi device such as a keyboard or music module. I have an old Korg X5DR which works with this. However, you CAN try it out using input from the built-in microphone on a Mac. It is best to listen on a set of headphones so that you don't get feedback! To set things up you use our old friend Audio MIDI Setup again. This time you want to look at the Audio Devices window on the View Menu. Make sure that Built-in Microphone is set as the default input device (that WILL be the case if you don't have any additional audio devices connected to your computer). Note that if you change the selected audio devices, Sonic Pi will NOT notice the change until you restart it. Unlike the midi changes, there is no reset button for this, as it happens less frequently. Restart Sonic Pi if you have changed the settings. Now select an empty Sonic Pi 3.0 buffer and type in:

live_audio :mic,amp: 5

Put on headphones connected to your Mac and press Run. You should be able to hear yourself, and will also see a trace on the Scope if you turn that on in Sonic Pi. You may have to get quite close to the mic to get sufficient input. If it is very quiet, try changing the amp: 5 to amp: 10. Now amend the program as shown below and rerun.

with_fx :compressor do
  with_fx :reverb,room: 1 do
    live_audio :mic
  end
end
sleep 10
live_audio :mic,:stop #turns off the live_audio feed
#program still running here. Press stop to finish

Here we add some reverb and also take out the amplification of the live_audio input, instead putting the whole section inside an fx :compressor to boost the overall output. You can already see from this the potential of live_audio if you add the hardware to access other external audio sources. At the end of this article I show a program which sends midi out to an external synth, which sends audio back via live_audio to Sonic Pi 3.0. Also incorporated is a synchronised rhythm drum track generated by Sonic Pi; the volume fades up and down, controlled by an at statement, also utilising the new .scale method applicable to rings, which scales the values within a ring. If the example above is too loud, you can add amp: 0.5 to the compressor line, giving with_fx :compressor,amp: 0.5 do, as shown below.
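For reference, the quieter version of the mic program then looks like this (identical apart from the extra amp: opt):

with_fx :compressor,amp: 0.5 do
  with_fx :reverb,room: 1 do
    live_audio :mic
  end
end
sleep 10
live_audio :mic,:stop #turns off the live_audio feed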

The final major addition to Sonic Pi 3.0 is the ability to record a buffer. This means that it is possible to record some live_audio input, store it, and then reuse it as part of a running program, for example to produce a loop. I have already published a video example of this where I record 4 separate buffers with different versions of Frère Jacques, and then use Sonic Pi 3.0 to play them as a round, and then manipulate them, playing the round faster and faster.

Here we'll try something a bit simpler. Amend the program above by adding commands to record up to 16 beats' worth of sound.

with_fx :record,buffer: buffer[:micbuffer,16] do
  with_fx :compressor do
    with_fx :reverb,room: 1 do
      live_audio :mic
    end
  end
end
at 16 do #stop the live audio feed when finished recording
  live_audio :mic,:stop
end
#press stop to finish the program

The extra wrapper around the original program uses the new with_fx :record effect. This uses as its destination the buffer described at the end of the first line. The buffer is named :micbuffer, and the 16 specifies its duration in beats; by default a buffer is 8 beats long unless specified otherwise. Run the program and record a session of speech. At the normal bpm setting of 60 this can be up to 16 seconds long; if bpm is set to 120 it will only be 8 seconds long. When you have completed the recording, press Stop, then comment out this section and add at the bottom:

sample buffer[:micbuffer,16]

Run the program again and you should hear your speech back again.
You can have fun with it now. What about Mickey Mouse? Try:

sample buffer[:micbuffer,16],rpitch: 12

This puts it up an octave and plays it twice as fast. One word of warning: do not change the bpm when you have a recorded buffer, or Sonic Pi will reconfigure it for the new bpm value (as it will have a different duration) and destroy its contents in the process. I had to take account of this when I used the record technique to produce a round version of Frère Jacques using 4 recorded buffers: here.

The final new introduction I want to mention is the easy availability of OSC messaging. Sonic Pi 2.11 had the ability to receive OSC based cues, although it wasn't documented and the syntax was different. In Sonic Pi 3.0 it is a fully functional feature, along with the ability to send OSC messages, crucially not only to other programs running on your computer, but also to external computers and programs via network connections. There is a facility to enable or disable this on the new IO prefs tab, as it can potentially be a security risk. It is particularly useful for interacting with programs such as TouchOSC which, as the name implies, is designed to use bi-directional OSC messaging, although it can send midi messages as well. To use OSC you need a program that can send messages and a program that can receive them. There are OSC monitors that can be built or used, but they can require a bit of setting up, so to keep things simple we will use Sonic Pi to send OSC messages to itself.

In a new window type

use_osc "localhost",4559
osc "/hello/play",:c4,1,0.5,:tri

b = sync "/osc/hello/play"
puts b
use_synth b[3]
play b[0],amp: b[1],sustain: b[2],release: 0
#if you prefer it, use four separate variables to get the data

 

When you run this, you should hear the note :c4 played with the :tri synth, with a duration of 0.5 seconds and an amplitude of 1. These values are passed as the data associated with the address /hello/play. Sonic Pi prepends the /osc to distinguish the source, and the sync command is triggered by the arrival of the OSC message. If you look at the output of the puts b line you will see that the data is passed in a list which is assigned to b (the variable we specified), and the various items can then be accessed using b[0], b[1] etc. Here is the version using separate variables to extract the received data:

use_osc "localhost",4559
osc "/hello/play",:c4,1,0.5,:tri

n,a,d,s = sync "/osc/hello/play"
use_synth s
play n,amp: a,sustain: d,release: 0

Now I expand this to play my old favourite Frère Jacques entirely with OSC messages.

#Frere Jaques played on Sonic Pi 3.0 entirely using OSC messages
use_osc "localhost",4559
t=180
set :tempo,t #use set to store values that will be passed to live_loops
use_bpm t

p=0.2;m=0.5;f=1 # volume settings
#store data using set function so that it can be retrieved in live_loops
set :notes,(ring :c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4,:e4,:f4,:g4,:e4,:f4,:g4,:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4,:c4,:g3,:c4,:c4,:g3,:c4)
set :durations,(ring 1,1,1,1,1,1,1,1 ,1,1,2,1,1,2, 0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1, 1,1,2,1,1,2)
set :vols,(ring p,p,m,p,p,p,m,p,p,m,f,p,m,f,m,m,m,m,f,m,m,m,m,m,f,m,f,f,f,f,f,f)
set :synths,(ring :tri,:saw,:fm,:tb303)

live_loop :playosc do # this loop plays the received osc data
  use_real_time
  n,d,v,s,tempo= sync "/osc/hello/play" #retrieve data from OSC message
  
  use_bpm tempo
  use_synth s
  play n,amp: v,sustain: d*0.9,release: d*0.1
end

live_loop :sendosc do
  #retrieve data from main program using get functions
  s=get(:synths).tick
  notes=get(:notes)
  durations=get(:durations)
  vols=get(:vols)
  tempo=get(:tempo)
  use_bpm tempo #set local tempo for this loop
  notes.zip(durations,vols).each do |n,d,v|
    osc "/hello/play",n,d,v,s,tempo #send OSC message with note data
    sleep d
  end
end

The tune, note durations and volumes for each note are held in three rings. The data is passed to the live_loop that will send it over OSC using another new feature in Sonic Pi 3: the set and get functions. Previously I, for one, have just declared variables in the main program and used them inside live loops. Whilst this will work most of the time, it is bad practice, as you might get two live_loops trying to alter variables at the same time and causing confusion. Using the set and get functions keeps things in order; see section 10.1 of the Sonic Pi built-in tutorial for more detail on this.

Somewhat oddly, the two live loops which send and receive the data via OSC messages are presented in what may seem the wrong order, but this will make sense a bit later on. The second live loop, :sendosc, first chooses a different synth to be used on each iteration, using tick to sequence through a list of synths (which you can alter if you like). Then I use one of my favourite constructs in Ruby, which enables you to iterate through two or more (in this case three) lists which are zipped together. The way it works is that on the first iteration n, d and v hold the first values in the three rings, :c4, 1 and 0.2; on the next iteration they hold the second set of values, and so on. These are then combined in an OSC message. The message has two parts. First an address, which can be anything you like, but each section must be preceded by a /. Here we have /hello/play, but we could equally have /hi/dothis, as long as we look for the right address when we try to receive it. This is followed by a list of data, which can be numbers, strings or symbols; in this case five items are sent (note, duration, volume, synth and tempo). The destination is specified in the separate use_osc command, here use_osc "localhost",4559, which specifies that the local machine (i.e. the one we are using) will receive the message on port 4559 (you will see this port specified in the new I/O prefs; Sonic Pi is set up to monitor it). We sleep for the duration of the note and then the next OSC message is sent.
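If the zip construct is new to you, this tiny stand-alone sketch (not part of the tune, and with made-up values) shows what it does: on each pass the block receives one value from each ring in step.

notes=(ring :c4,:e4,:g4)
durations=(ring 1,0.5,2)
vols=(ring 0.2,0.5,1)
notes.zip(durations,vols).each do |n,d,v|
  puts n,d,v #first pass prints :c4, 1 and 0.2, then :e4, 0.5 and 0.5, and so on
end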

Turning to the receiving live_loop :playosc, this waits for sync events with the format "/osc/hello/play" (remember that Sonic Pi prepends the /osc to signify where the sync event originated). As described in the first OSC program, the data is extracted to the variables n, d, v, s and tempo, and the various parts are then used to specify the synth, bpm, note, amp: and duration settings to use. In this case an envelope is used with separate sustain and release times. We don't need to specify the time between notes, as this is taken care of by the sync, which depends on the time between the OSC messages set by the sending loop. The received tempo is used to set the local bpm, so that the note durations are interpreted correctly. Hopefully when you run this program it will play Frère Jacques for you continuously, cycling through the synths specified.

Now for more of a good thing! Add a third loop, with a delay of 8 beats, to the end of the program.

live_loop :sendosc2,delay: 8 do
  #retrieve data from main program using get functions
  s=get(:synths).tick
  notes=get(:notes)
  durations=get(:durations)
  vols=get(:vols)
  tempo=get(:tempo)
  use_bpm tempo #set local tempo for this loop
  notes.zip(durations,vols).each do |n,d,v|
    osc "/hello/play",n,d,v,s,tempo #send OSC message with note data
    sleep d
  end
end

This is identical to the other send loop, except in name and in the delay: 8 in the first line, which means that it starts 8 beats after the other live_loops. If you now run the program again, both sendosc loops will broadcast the tune, the second one delayed by the time it takes the first two lines of Frère Jacques to play. Because they both send to the same OSC address, their information streams will both be picked up and played by the :playosc loop.

Finally, if you are lucky enough to have access to a second machine with Sonic Pi 3 on it, you can copy the live_loop :playosc code to the second machine and adjust the main program to send OSC messages to it, and it will join in, fully synchronised. I've just had two Macs and a Raspberry Pi running my initial build of Sonic Pi 3 all playing together, synchronised by OSC messages. Sounds great. The change you have to make to the main program is to the osc message lines. An example is shown below, for a machine on IP address 192.168.1.128:

osc "/hello/play",n,d,v,s,tempo #sends to local machine
osc_send "192.168.1.128",4559,"/hello/play",n,d,v,s,tempo #sends to remote machine

You can add a third machine with a third live_loop, :sendosc3,delay: 16, otherwise the same as the other two send loops but using the appropriate address for that machine. Alternatively, you can use one send loop and put two or more osc or osc_send commands one after the other, so that the machines play the same notes together, as sketched below.
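Here is a sketch of that single-loop approach (replacing the two send loops above); the remote address 192.168.1.128 is just the example used earlier, so substitute your own machine's address:

live_loop :sendosc do
  s=get(:synths).tick
  notes=get(:notes)
  durations=get(:durations)
  vols=get(:vols)
  tempo=get(:tempo)
  use_bpm tempo #set local tempo for this loop
  notes.zip(durations,vols).each do |n,d,v|
    osc "/hello/play",n,d,v,s,tempo #send to the local machine
    osc_send "192.168.1.128",4559,"/hello/play",n,d,v,s,tempo #and to the remote machine
    sleep d
  end
end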
Here is a link to a tweeted video showing an early (naughty) version of the program in action, which didn't use set and get to transfer the note information!

Finally I promised earlier to include a program incorporating midi, live audio and a locally generated rhythm track all nicely synchronised together.

#Sonic Pi 3.0 Example showing midi out, live_audio in
#synchronised drum track and use of at to control volumes
#written by Robin Newman, July 2017
use_debug false
use_osc_logging false
use_midi_logging false
use_bpm 100
#set up rhythm tracks and volumes 0->9
set :bass_rhythm,ring(9, 0, 9, 0,  0, 0, 0, 0,  9, 0, 0, 3,  0, 0, 0, 0)
set :snare_rhythm,ring(0, 0, 0, 0,  9, 0, 0, 2,  0, 1, 0, 0,  9, 0, 0, 1)
set :hat_rhythm,ring(5, 0, 5, 0,  5, 0, 5, 0,  5, 0, 5, 0,  5, 0, 5, 0)

with_fx :level do |v|
  control v,amp: 0 #start at 0 volume
  sleep 0.05 #allow amp value to settle without clicks
  at [1,26],[1,0] do |n|
    control v,amp: n,amp_slide: 25 #fade in and out over 25 beats each
  end
  
  
  live_loop :drums do
    sample :drum_bass_hard, amp: 0.1*get(:bass_rhythm).tick
    sample :drum_snare_hard, amp: 0.1*get(:snare_rhythm).look
    sample :drum_cymbal_closed,amp: 0.1*get(:hat_rhythm).look
    sleep 0.2
    stop if look==249
  end
  
  #audio input section
  
  with_fx :compressor, pre_amp: 3,amp: 4 do
    #audio from helm synth fed back using loopback utility
    live_audio :helm_synth,stereo: true #audio from CM bells selected on helm synth
  end
  
end #fx_level

at 30 do  #stop audio input at the end
  live_audio :helm_synth,:stop
end

#send out midi note to play (sent to helm synth CM bells)
live_loop :midi_out,  sync: :drums do
  tick
  n=scale(:c4,:minor_pentatonic).choose
  vel=0.7
  midi n,sustain: 0.1,vel_f: vel,port: "sonicpi_connect",channel: 1
  sleep 0.2
  stop if look==249
end

This program requires an external synth: I used the Helm software synth and fed the midi to it via the sonicpi_connect interface we set up earlier. I used the utility Loopback (free to try out) to set up an audio input connection, and set this as the default audio input in Audio MIDI Setup so that Sonic Pi selected it (rather than the built-in microphone that we used before) when it started. Remember that you need to restart Sonic Pi if you want to alter either where its sound output is fed OR where it receives audio input from; it always uses the system audio devices selected when it starts up.
The program has a live_loop playing percussion samples, controlled by rings containing the volume setting for each iteration of the loop. There is also a live_loop which sends midi notes out to the Helm synth, synced to the percussion loop. The audio output of Helm is fed back to Sonic Pi as live_audio via the Loopback interface. Also of interest is the use of the at command, which fades the percussion and live_audio volume in and out by controlling a with_fx :level wrapper placed around the :drums live_loop and the live_audio input. When the program is run, the total sound output builds from zero to a maximum and is then faded out again, with the loops being stopped at an appropriate point by counting the elapsed ticks in each case.
You will have to adjust the settings in the program to suit any external midi/audio interface that you may have. Even if you can’t run it, some of the techniques employed may be useful to you.
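For instance, the volume fade mechanism on its own, stripped of the midi and audio parts, might look something like this minimal sketch (the sample and timing values here are arbitrary, not taken from the program above):

with_fx :level do |v|
  control v,amp: 0 #start silent
  sleep 0.05       #allow the amp value to settle without clicks
  at [1,10],[1,0] do |n|
    control v,amp: n,amp_slide: 9 #fade in over 9 beats, then back out again
  end
  live_loop :pulse do
    sample :bd_haus
    sleep 0.5
    stop if tick==40 #stop once the fade out has completed
  end
end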

The Frère Jacques program and the Helm with percussion program are also on my gist site here and here, and you can hear their output on SoundCloud here and here.

 

Well, this has been quite a quick gallop through what Sonic Pi 3.0 has to offer. I hope you find some of the examples useful in getting you going. I think that the release adds fantastic new opportunities to Sonic Pi, and Sam is to be congratulated on the amazing job he has done. I know that it has involved a huge effort, many long nights and frustrations, and that few appreciate just what has been involved. If you like Sonic Pi and use it, and particularly if you want to see it developed further, consider supporting Sam via his Patreon page at https://patreon.com/samaaron If enough people sponsor a modest monthly amount then he will be able to devote the time this mammoth job needs. As a supporter you will also gain access to interim development releases of Sonic Pi along the way.

I have quite a number of published resources for Sonic Pi produced over the last few years. This blog is one. Also available are sound files on SoundCloud here, here and here.
Many of these have associated software on my gist site. I also publish videos on YouTube.

Now that Sonic Pi 3.0 is released, I have quite a bit of resource material which makes use of it, so watch this space for further articles on its great capabilities.

Twitter address: @rbnman


High time for a post! What’s happening with Sonic Pi and how you can help

I feel somewhat guilty that I have not added anything new to this blog for some time. This is not because I have not been busy using Sonic Pi, far from it, but rather that the things I have been involved in have not lent themselves readily to any articles. Part of this is because I always work with the cutting edge version of Sonic Pi, and most people do not have access to it. However, one bit of good news on this front is that if you use a Mac, then you CAN easily obtain a development copy of Sonic Pi 2.12.0-alpha-midi by supporting Sam Aaron, the creator of Sonic Pi, on Patreon. Sam needs financial support to continue developing Sonic Pi, so if you like it then go to https://www.patreon.com/samaaron and give your support. As a reward you will have access to a download of a recent version of 2.12.0-alpha-midi. This will let you try out using midi in and out and OSC out messages from Sonic Pi.

I build the latest version on my Mac, and have been using midi with Sonic Pi since Christmas. In my latest work I have produced a program to play midi files and/or keyboard input from Sonic Pi, with control of the synths used and some of their parameters via a TouchOSC front end running on my iPad. This gives a new slant to using Sonic Pi, not so much for live coding but rather for live playing, with real time alteration of the synths used and what they sound like as a piece plays. I also use a little midi player called MidiPlayer X, available on the Mac for £1.99 from the App Store, which is ideal for use with Sonic Pi.

I have developed two versions of the program, the second one being more versatile, and produced videos of each in action to give an idea of what it can do. It does require a reasonable understanding of how things work to set it up and use it, because you have to set up and configure midi ports to enable connections between the various parts, and I have not published the program at present for this reason, and also because this is an alpha version of Sonic Pi and some of the commands may be altered (some already have been!) before it reaches beta and then release stages. However, I may in a future post give some useful ideas to get you going on using midi with Sonic Pi. I have spent many hours playing with it, studying the code, and working out how the commands work. There is increasing documentation, but it is not all present at this moment in time.

So why not view the videos below, then resolve to support the further development of Sonic Pi AND get a copy of the development version to try out on a Mac.

Here is a picture of the latest TouchOSC screen, and there are links to the two videos below.

Video for version 1

Video for version 2

Patreon Link for Sam

Sonic Pi remote GUI to control play/stop and the starting position within a piece

Following on from my previous post, and some "playing" with the midi facilities in the latest Sonic Pi 2.12.0-midi-alpha1, I developed a Processing script to control the transport mechanism in Logic Pro, which I was using to play the midi produced by Sonic Pi. I have now developed this further and applied it to the current release version, Sonic Pi 2.11.1, so that it can provide remote control functionality. (You can also use it on version 2.11, the current release on the Raspberry Pi, with a slight modification to two or three lines.) With the addition of a few lines to any existing Sonic Pi music program it can provide play and stop functions, which can be used repeatedly without touching Sonic Pi. Effectively the stop function stops the program from running and then re-runs it so that it awaits a subsequent "play" command from the remote control. If the music is a linear piece of, say, 200 bars, then with some more pervasive changes the music code can be written in such a way that it is possible to start at the beginning of any specified bar in the music, and this bar selection can also be done from the remote control.

In this article I am going to show two separate pieces which utilise the remote control. The first is a rendition of a four part round of Frère Jacques, with a twist! At the end of each repeated line the tempo increases until the fourth part has finished playing, then the round plays again, this time starting at the new fast tempo and decreasing at the end of each repeated line until the fourth entry finishes at the original slow tempo. The piece is 28 bars in length, and you can start playing at the beginning of any designated bar, controlled by the remote interface. To see how this is developed, here first is the code for a play-through of Frère Jacques for a single part, first speeding up, then slowing down again.

#FrereJaques-1part.rb
a1=[ ]
b1=[ ]
a1[0]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
a1[1]=[:e4,:f4,:g4,:e4,:f4,:g4]
a1[2]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
a1[3]=[:c4,:g3,:c4,:c4,:g3,:c4]
a1[4]=[:r]*8
a1[5]=[:r]*8
a1[6]=[:r]*8+a1[0]
a1[7]=a1[1]
a1[8]=a1[2]
a1[9]=a1[3]
a1[10]=a1[4]
a1[11]=a1[5]
a1[12]=[:r]*8
b1[0]=[1,1,1,1,1,1,1,1]
b1[1]=[1,1,2,1,1,2]
b1[2]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
b1[3]=[1,1,2,1,1,2]
b1[4]=[1]*8
b1[5]=[1]*8
b1[6]=[1]*8+b1[0]
b1[7]=b1[1]
b1[8]=b1[2]
b1[9]=b1[3]
b1[10]=b1[4]
b1[11]=b1[5]
b1[12]=[1]*8

c1=[100,120,140,160,180,200,220,200,180,160,140,120,100]
use_synth :beep
in_thread do
  for i in 0..a1.length-1
    use_bpm c1[i]
    for j in 0..a1[i].length-1
      play a1[i][j],sustain: b1[i][j]*0.9,release: b1[i][j]*0.1
      sleep b1[i][j]
    end
  end
end

In order to accommodate the tempo changes, the notes are held in two-bar sections inside an array a1[ ]. Thus the first two bars of the tune, :c4, :d4, :e4, :c4, :c4, :d4, :e4, :c4, are held in the first entry of this array, a1[0]. The corresponding note lengths are held in the first entry of array b1[ ], b1[0], consisting of 8 equal notes of duration 1 (a crotchet). These two bars will be played at the first tempo in the list
c1=[100,120,140,160,180,200,220,200,180,160,140,120,100]
namely 100 bpm. At the end of the data section a synth is chosen (:beep) and the notes are played in a thread, as we will want all four parts to play together later on. Two "for" loops are used to play the two-bar sections one after another. The outer loop selects the tempo in bpm for each two-bar section, and the inner loop uses a play command to play each note with its accompanying duration, with sustain and release parameters included (pan settings are added when the four parts are combined below). For each two-bar section the tempo is bumped up by 20. When the tune finishes playing (a1[3] and corresponding b1[3]) three further two-bar sections follow, each playing rests while the remaining three parts finish. In fact the last of these three additional sections is 4 bars in length (a1[6] and b1[6]); the additional 2 bars consist of the first line of the tune (a1[0] and b1[0]), which is played again, this time at the tempo of 220. You will see that subsequent two-bar sections consist of the remainder of the tune being played again, but this time the tempo is reduced by 20 for each section, until the final section is played at the original tempo of 100. As before, three "rest" sections are included, allowing the other three parts to finish off.

If we now add in the other three parts, you will see that they are very similar to the first part. The only difference is that the tune is shifted for each part so that it actually starts playing 2 bars later than for the previous part. So part 2, which is held in the arrays a2[ ] and b2[ ], has a rest section for its initial 2 bars before the tune starts, this time in a2[1] compared to a1[0] for the first part. Similarly the actual tune for part 3 starts playing in a3[2], and for the 4th part in a4[3]. The complete tune playing code is shown below for all 4 parts. Each part has a different synth and pan setting to make them stand out.

#4 Frere Jaques round played twice, speeds up then slows down
p1=-1;p2=-0.33;p3=0.33;p4=1

a1=[]
b1=[]
a1[0]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
a1[1]=[:e4,:f4,:g4,:e4,:f4,:g4]
a1[2]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
a1[3]=[:c4,:g3,:c4,:c4,:g3,:c4]
a1[4]=[:r]*8
a1[5]=[:r]*8
a1[6]=[:r]*8+a1[0]
a1[7]=a1[1]
a1[8]=a1[2]
a1[9]=a1[3]
a1[10]=a1[4]
a1[11]=a1[5]
a1[12]=[:r]*8
b1[0]=[1,1,1,1,1,1,1,1]
b1[1]=[1,1,2,1,1,2]
b1[2]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
b1[3]=[1,1,2,1,1,2]
b1[4]=[1]*8
b1[5]=[1]*8
b1[6]=[1]*8+b1[0]
b1[7]=b1[1]
b1[8]=b1[2]
b1[9]=b1[3]
b1[10]=b1[4]
b1[11]=b1[5]
b1[12]=[1]*8

c1=[100,120,140,160,180,200,220,200,180,160,140,120,100]
use_synth :beep
in_thread do
  for i in 0..a1.length-1
    use_bpm c1[i]
    for j in 0..a1[i].length-1
      play a1[i][j],sustain: b1[i][j]*0.9,release: b1[i][j]*0.1,pan: p1
      sleep b1[i][j]
    end
  end
end

a2=[]
b2=[]
a2[0]=[:r]*8
a2[1]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
a2[2]=[:e4,:f4,:g4,:e4,:f4,:g4]
a2[3]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
a2[4]=[:c4,:g3,:c4,:c4,:g3,:c4]
a2[5]=[:r]*8
a2[6]=[:r]*8+a2[0]
a2[7]=a2[1]
a2[8]=a2[2]
a2[9]=a2[3]
a2[10]=a2[4]
a2[11]=a2[5]
a2[12]=[:r]*8
b2[0]=[1]*8
b2[1]=[1,1,1,1,1,1,1,1]
b2[2]=[1,1,2,1,1,2]
b2[3]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
b2[4]=[1,1,2,1,1,2]
b2[5]=[1]*8
b2[6]=[1]*8+b2[0]
b2[7]=b2[1]
b2[8]=b2[2]
b2[9]=b2[3]
b2[10]=b2[4]
b2[11]=b2[5]
b2[12]=[1]*8

c2=[100,120,140,160,180,200,220,200,180,160,140,120,100]
use_synth :blade
in_thread do
  for i in 0..a2.length-1
    use_bpm c2[i]
    for j in 0..a2[i].length-1
      play a2[i][j],sustain: b2[i][j]*0.9,release: b2[i][j]*0.1,pan: p2
      sleep b2[i][j]
    end
  end
end

a3=[]
b3=[]
a3[0]=[:r]*8
a3[1]=[:r]*8
a3[2]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
a3[3]=[:e4,:f4,:g4,:e4,:f4,:g4]
a3[4]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
a3[5]=[:c4,:g3,:c4,:c4,:g3,:c4]
a3[6]=[:r]*8+a3[0]
a3[7]=a3[1]
a3[8]=a3[2]
a3[9]=a3[3]
a3[10]=a3[4]
a3[11]=a3[5]
a3[12]=[:r]*8
b3[0]=[1]*8
b3[1]=[1]*8
b3[2]=[1,1,1,1,1,1,1,1]
b3[3]=[1,1,2,1,1,2]
b3[4]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
b3[5]=[1,1,2,1,1,2]
b3[6]=[1]*8+b3[0]
b3[7]=b3[1]
b3[8]=b3[2]
b3[9]=b3[3]
b3[10]=b3[4]
b3[11]=b3[5]
b3[12]=[1]*8

c3=[100,120,140,160,180,200,220,200,180,160,140,120,100]
use_synth :tri
in_thread do
  for i in 0..a3.length-1
    use_bpm c3[i]
    for j in 0..a3[i].length-1
      play a3[i][j],sustain: b3[i][j]*0.9,release: b3[i][j]*0.1,pan: p3
      sleep b3[i][j]
    end
  end
end

a4=[]
b4=[]
a4[0]=[:r]*8
a4[1]=[:r]*8
a4[2]=[:r]*8
a4[3]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
a4[4]=[:e4,:f4,:g4,:e4,:f4,:g4]
a4[5]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
a4[6]=[:c4,:g3,:c4,:c4,:g3,:c4]+a4[0]
a4[7]=a4[1]
a4[8]=a4[2]
a4[9]=a4[3]
a4[10]=a4[4]
a4[11]=a4[5]
a4[12]=[:c4,:g3,:c4,:c4,:g3,:c4]
b4[0]=[1]*8
b4[1]=[1]*8
b4[2]=[1]*8
b4[3]=[1,1,1,1,1,1,1,1]
b4[4]=[1,1,2,1,1,2]
b4[5]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
b4[6]=[1,1,2,1,1,2]+b4[0]
b4[7]=b4[1]
b4[8]=b4[2]
b4[9]=b4[3]
b4[10]=b4[4]
b4[11]=b4[5]
b4[12]=[1,1,2,1,1,2]

c4=[100,120,140,160,180,200,220,200,180,160,140,120,100]
use_synth :saw
in_thread do
  for i in 0..a4.length-1
    use_bpm c4[i]
    for j in 0..a4[i].length-1
      play a4[i][j],sustain: b4[i][j]*0.9,release: b4[i][j]*0.1,pan: p4
      sleep b4[i][j]
    end
  end
end

So now that we have the 4 part tune set up, how can we control it, so that we can play and stop at will, and also specify at which bar of the 28 available we start playing? In order to do this we need to supply externally two pieces of information. First a play/stop code (which is set to 1 to play and -1 to stop) and secondly a bar start code bs (which is set to the bar number from 1…28). In fact we can set bs greater than 28, and the program will determine that that number is too big and clamp it to a maximum of 28 (or whatever the relevant value is for the piece we are playing). So we have two problems to look at. First, generating the play/stop code and the bs code and sending them to Sonic Pi from our remote GUI, and secondly receiving and decoding them in the Sonic Pi piece we are playing, and acting upon them.

As a break from Sonic Pi, we will look first at the remote control GUI. Last summer I was introduced to Processing by Hiroshi Tachiban, who had written a script using it to convert midi files to Sonic Pi code via an intermediate MusicXML format. I have used and developed this script, and in the process have begun to appreciate the power of Processing. The conversion script doesn't use its graphical abilities, but looking at the examples on the Processing site (processing.org), and prompted by others who were using it to generate OSC commands (which Sonic Pi utilises to communicate between its various parts: GUI, server, scsynth), I decided to try to use it for this purpose. It also has the advantage of being easily installable on Macs, Windows PCs, Linux and the Raspberry Pi, so it can be utilised on all the Sonic Pi platforms.

Basically I decided to use a very small GUI screen with 5 small clickable rectangles on it. Two of these are used to select Play or Stop, and the remaining three let you increase the current (displayed) bar start, decrease it, or reset it to bar 1 (the start of the piece). The current bar start is held internally in the GUI code and transmitted to Sonic Pi whenever the Play or Stop rectangle is clicked. This enables subsequent Play commands to start repeatedly at the current bar start setting without having to set it each time.

Processing refers to the graphics window it sets up using an xy coordinate system where the origin x=0, y=0 is top left, with x increasing to the right across the screen and y increasing down the screen. I set a small screen size of 100 x 120 pixels, as I didn't want the GUI to use much screen real estate. Processing has built-in variables giving the mouse position, mouseX and mouseY. It can also determine when the mouse button is pressed and when the mouse is moved. The program for the Processing window is called a sketch. Those of you who have used the Arduino IDE on a Mac or Windows will be at home with the programming interface, as the Arduino IDE is written using Processing; both use Java. Unfortunately the code would not render properly in WordPress, so you can see it by using this paste-link instead.

To try out the program, you first need to download the Processing application from processing.org. There are versions available for Mac, Windows PC, Linux and the Raspberry Pi (use the Linux ARM version). Unzip and run the application. Create a new sketch called StartBarSelector and paste in the code. You will also need to load in the oscP5 library using the Sketch –> Import Library… menu entry. I suggest you run the app initially from the Processing IDE. Later, when you are sure it is working OK, you can create a standalone app using Export Application… from the File Menu.

If you view the code you will see that the first line is import oscP5.*;
This utilises the external library oscP5 which you must add as detailed above. The script creates a small (100 x 120 pixel) window which contains 5 small rectangles. These are set up as clickable zones, and trigger various operations when they are clicked. The three rectangles in a vertical line control the value of a variable bs. This is initially set to 1, but can have its value increased by clicking the top rectangle marked bs+. The bottom rectangle marked bs- decreases its value, provided that it is greater than 1, and the central rectangle resets its value to 1. When one of these rectangles is clicked it turns green and remains so until the mouse button is released. The rectangle on the right is filled in red and has the label play. When it is clicked it sends the value 1 to Sonic Pi, together with the current value of the bs (bar start) variable. It also highlights the left hand rectangle and reveals the caption stop, whilst removing its own caption and recolouring the play rectangle to the background colour. When the left hand (stop) rectangle is clicked it sends the value -1 to Sonic Pi, together with the current value of the bs variable.
Much of the code deals with the generation of the rectangles and the captions, and also the detection of the mouse pointer position when the mouse is clicked. The initial function setup creates the window and the rectangles and sets their initial states and colours. It also sets up an OSC UDP socket which is used to communicate with Sonic Pi, which is assumed to be running on the local machine, although the controller will work with a remote Sonic Pi provided that the appropriate address is inserted for Sonic Pi in the program. The display is set to refresh 60 times a second.

The second function sendOscData composes the OSC message to be sent to Sonic Pi. This consists of the “address” /transport followed by two integer arguments tr, which is either 1 for play or -1 for stop, and bs which is an integer specifying the start bar to use.

The third function draw saves the mouse position given by mouseX and mouseY to two variables mx and my, and then proceeds through a series of tests which detect whether the mouse was clicked inside one of the five rectangles and, if so, sets the values of the tr and bs variables as appropriate. In the case of the three "bs" rectangles it also shades the rectangle green by repainting it in that colour. In the case of the stop and play rectangles it clears the current rectangle colour and caption and enables the colour fill and caption for the other one, repainting the rectangles, pasting a background-filled area to hide the captions, and then rewriting the appropriate one. These two sections of code also send OSC messages with the current data to Sonic Pi.

After these sections of code which test where the mouse has been clicked, 6 lines of code paint over the old displayed bs value and then repaint the current value. When the mouse button is released, the bottom section of the draw loop activates and removes the green fill from all of the "bs" rectangles. A flag variable also makes sure that only one OSC message is sent when a click in the play or stop rectangles is detected, even though the mouse button may remain depressed for several passes of the draw loop. Finally a small delay is added, set so that the bs value will increase fairly rapidly when the bs+ button is clicked and the mouse held down, whilst allowing a sufficient delay so that it can be quickly clicked to give a single increment in the value.

Now we turn our attention back to Sonic Pi. In order to detect the OSC messages being sent from the StartBarSelector GUI, I have used an undocumented feature of Sonic Pi, which is that it can respond to cues received in the form of OSC messages sent to port 4559. There is a slight difference in that response between SP version 2.11 (the current release version on the Raspberry Pi) and SP version 2.11.1 (the current release version on the Mac and Windows PC), but it only requires a minor change to a few lines in the program. Here Sam Aaron, the lead developer of Sonic Pi, would want me to point out that this is an experimental feature and that it may well change in form, or even disappear, in future versions, although currently it is still the same in the latest development version 2.12.0-midi-alpha, and of course is fixed in SP 2.11 and 2.11.1.

A simple program which can be used to test the operation of the external GUI is shown below. It also illustrates the small code difference required between SP version 2.11 and 2.11.1 in decoding the received messages.

#test communication with StartBarSelector processing GUI
define :ver do
 return version.to_s.split('.')
end

loop do
 v= version
 tr=0
 until tr==1
 s=sync '/transport'
 if ver[1].to_i>11 or (ver[1].to_i==11 and ver[2].to_i>0) #i.e. version 2.11.1 or later
 tr=s[0]
 bs=s[1]
 else
 tr=s[:args][0]
 bs=s[:args][1]
 end
 
 puts "play/stop tr variable is "+tr.to_s
 puts "bs bar start variable is "+bs.to_s
 end
 
 until tr==-1
 s=sync '/transport'
 if ver[1].to_i>11 or (ver[1].to_i==11 and ver[2].to_i>0) #i.e. version 2.11.1 or later
 tr=s[0]
 bs=s[1]
 else
 tr=s[:args][0]
 bs=s[:args][1]
 end
 
 puts "play/stop tr variable is "+tr.to_s
 puts "bs bar start variable is "+bs.to_s
 end
end

The operative line is s=sync ‘/transport’
This acts like a standard sync command in Sonic Pi, only this time it waits until an OSC message with the address '/transport' is received on port 4559. Depending on the SP version, the arguments added to this OSC message are extracted and displayed by puts statements. You can see how tr is set to 1 when play is clicked and -1 when stop is pressed, and in each case the bs value is also sent.

So all that remains is to add similar code to the program we wish to control, and then to figure out ways of getting the program to stop, and to play again when a stop and play command are received in succession. The other problem to solve is how to set the program to start at a specified bar rather than at the beginning. The stop/play problem is solved by means of an external Ruby helper program. This utilises the sonic-pi-cli gem, which enables you to send commands to Sonic Pi from a command line. First you have to install it. On a Mac this should be done using the system Ruby installation and as the root user, using the command
sudo /usr/bin/gem install sonic-pi-cli
This should install /usr/local/bin/sonic_pi which should automatically be placed in your PATH so that it can be accessed with sonic_pi. If you are using a Raspberry Pi then you can use the procedure below:
Start a terminal window and type the following:

sudo mkdir /var/lib/gems
sudo chown pi /var/lib/gems
sudo chown pi /usr/local/bin
gem install sonic-pi-cli

When the install has completed, reset the ownership of /usr/local/bin

sudo chown root /usr/local/bin

It is a good idea to test it at this stage. Get any piece you like running in Sonic Pi, then from a command line type sonic_pi stop and the sonic pi cli should stop the program from running. Also if you type sonic_pi by itself you should get some information back from the program.
On my system, Sonic Pi programs to play are stored in ~/Documents/SPfromXML and the program we are going to control is called FrereJaquesControlled-RF.rb with a path of
~/Documents/SPfromXML/FrereJaquesControlled-RF.rb
I put the -RF on the end of the file name to remind me that the file is too long to run in a Sonic Pi buffer, and needs to be run from Sonic Pi using the command
run_file "~/Documents/SPfromXML/FrereJaquesControlled-RF.rb"
Similarly the helper program which I use is stored in the same folder and is called FrereJaquesControlled-RFauto.rb. It has the contents below.

#!/usr/bin/ruby
`/usr/local/bin/sonic_pi stop`
`/usr/local/bin/sonic_pi "run_file '~/Documents/SPfromXML/FrereJaquesControlled-RF.rb'"`

This sends two commands to Sonic Pi via the sonic-pi-cli gem. The first causes Sonic Pi to stop all running programs. The second issues a run_file command which restarts the FrereJaques program again from the beginning.
Now we are in a position to build up the FrereJaquesControlled-RF program. We start with the four part round introduced at the beginning of this post. We have seen how we can wait for and detect a sync OSC message which sends us a command to start the program, together with the bar at which to start. So the next problem is: how can we set up the program to react to this?
What we need to do is to determine which of the 13 sections a1[0] to a1[12] we need to select, and whether we are starting at the beginning of that selected section or in the middle of it. In order to help us we add some functions at the beginning of the program. The complete FrereJaquesControlled-RF.rb program is listed below.

#FrereJaquesControlled.rb
restart="~/Documents/SPfromXML/FrereJaquesControlled-RFAuto.rb"
use_debug false #turn off log_synths
use_arg_checks false #turn off log_cues
bs=1 #starting bar number: give it an initial value here
bpba=[4]*13 #set up list of section beats per bar

#puts bpba
st=[] #holds info for start section and remaining bars to process: set global here
#part pan positions
p1=-1;p2=-0.33;p3=0.33;p4=1

############### define functions used in the script
define :numbeats do |durations| #return number of crotchet beats in a note durations list
 l=0.0
 durations.each do |d|
 l+=d
 end
 return l
end
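#e.g. numbeats([1,1,2,1,1,2]) returns 8.0 - two bars of 4 crotchet beats each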

#find starting section, and number of bars in that section to be processed
#to determine the starting note index
define :startDetails do |bn,bNumberSecStart,durations|
 startSecIndex=0
 remainingBars=bn
 #iterate until remaining bn is within the section
 while bn>bNumberSecStart[startSecIndex]
 remainingBars=bn-bNumberSecStart[startSecIndex]
 startSecIndex+=1
 end
 #return the section to start playing and number of bars to determine starting note index
 return startSecIndex-1,remainingBars
end
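#e.g. with bNumberSecStart=[0,2,4,6,...] (two-bar sections), a requested start bar of 5
#returns [2,1]: start in section index 2, with 1 bar remaining to locate the note index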

define :getmatchd do |bn,bpb,durations| #works out the note index for a given bar number
 matchbeat=(bn-1)*bpb #target number of beats to find
 l=0.0;x=0
 until l>=matchbeat || (l-matchbeat).abs < 0.0625 #0.0625 is smallest quantisation to consider
 l+=durations[x]
 x+=1
 end
 return [x ,l-matchbeat] #return the matched beat note index, plus sleep for tied note (if any)
 #nb if the bar start coincides with a tied note, then the part will start with the next
 #note and a sleep command will be issued for the remaining duration of the tied note
end

define :ver do
 return version.to_s.split('.')
end
##########################
#wait for an OSC cue to be received from the Processing GUI sketch
#This sends two parameters: First controls Play (1) Stop (-1)
#second gives requested bar start number
tr=0
until tr==1 #wait for PLAY cue from processing GUI (first parameter will be set to 1)
 s=sync '/transport'
 if version.to_s=="v2.11.1" or ver[1].to_i>11 or ver[0].tr("v","").to_i>2 #new sync format from v2.11.1 onwards
 tr=s[0]
 bs=s[1]
 else
 tr=s[:args][0] #tr governs play/stop value is 1 for play -1 for stop
 bs=s[:args][1] #bs is start bar,the second parameter received
 end
end

puts "BS selected is "+bs.to_s
##########################
#start polling for an OSC cue to stop playing from the Processing GUI sketch
#this runs continuously in a thread
in_thread do #this thread polls for an OSC cue to stop the program
 tr=0
 until tr==-1 #the first parameter will be set to -1 for a STOP signal
 s=sync '/transport'
 if version.to_s=="v2.11.1" or ver[1].to_i>11 or ver[0].tr("v","").to_i>2 #new sync format from v2.11.1 onwards
 tr=s[0]
 bs=s[1]
 else
 tr=s[:args][0] #tr governs play/stop value is 1 for play -1 for stop
 bs=s[:args][1] #bs is start bar,the second parameter received
 end
 end
 #stop command detected
 puts "stopping"
 puts "running sonic pi cli script to restart"
 
 system(restart+" &") #run the auto script to stop and rerun the code
end
##########################
with_fx :reverb, room: 0.8 do
 with_fx :level,amp: 0.7 do
 #part 1 data
 a1=[]
 b1=[]
 a1[0]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
 a1[1]=[:e4,:f4,:g4,:e4,:f4,:g4]
 a1[2]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
 a1[3]=[:c4,:g3,:c4,:c4,:g3,:c4]
 a1[4]=[:r]*8
 a1[5]=[:r]*8
 a1[6]=[:r]*8+a1[0]
 a1[7]=a1[1]
 a1[8]=a1[2]
 a1[9]=a1[3]
 a1[10]=a1[4]
 a1[11]=a1[5]
 a1[12]=[:r]*8
 b1[0]=[1,1,1,1,1,1,1,1]
 b1[1]=[1,1,2,1,1,2]
 b1[2]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
 b1[3]=[1,1,2,1,1,2]
 b1[4]=[1]*8
 b1[5]=[1]*8
 b1[6]=[1]*8+b1[0]
 b1[7]=b1[1]
 b1[8]=b1[2]
 b1[9]=b1[3]
 b1[10]=b1[4]
 b1[11]=b1[5]
 b1[12]=[1]*8
 c1=[100,120,140,160,180,200,220,200,180,160,140,120,100]
 ###################### calculate starting data

 #calc bar offset for start of each tempo change. Held in bNumberSecStart list
 bNumberSecStart=[]
 bNumberSecStart[0]=0
 bNumber=0
 b1.length.times do |z|
 bNumber+= numbeats(b1[z])/bpba[z]
 bNumberSecStart[z+1]=bNumber
 end
 #puts bNumberSecStart #for debugging
 #calc number of bars in the piece
 bmax=bNumberSecStart[b1.length]
 puts "Total number of bars="+bmax.to_s
 #adjust requested bar start number if too large
 if bs>bmax
 bs=bmax
 puts "Start bar exceeds piece length: changed to :"+bs.to_s
 end
 #calculate info for starting sector containing bar start requested,
 #and number of remaining bars to process to get starting index
 st=startDetails(bs,bNumberSecStart,b1)
 startSec=st[0]
 remainingBars=st[1]
 puts "Start Section="+st[0].to_s
 puts "Remaining Bars to find starting index="+st[1].to_s
 puts

 ################### now ready to process and play each part in turn (played together in threads)
 #each part is processed in exactly the same way

 #calc starting index and any sleep for tied notes for part 1
 sv1=getmatchd(remainingBars,bpba[startSec],b1[startSec])
 
 puts "1: "+sv1.to_s #print start index and sleep time
 use_synth :beep
 in_thread do
 for i in startSec..a1.length-1
 use_bpm c1[i]
 sleep sv1[1] #sleep for tied note (>0 if tied)
 for j in sv1[0]..a1[i].length-1
 play a1[i][j],sustain: b1[i][j]*0.9,release: b1[i][j]*0.1,pan: p1
 sleep b1[i][j]
 end
 sv1=[0,0] #reset so subsequent iterations of j loop in full and no tied sleep
 end
 end


 a2=[]
 b2=[]
 a2[0]=[:r]*8
 a2[1]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
 a2[2]=[:e4,:f4,:g4,:e4,:f4,:g4]
 a2[3]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
 a2[4]=[:c4,:g3,:c4,:c4,:g3,:c4]
 a2[5]=[:r]*8
 a2[6]=[:r]*8+a2[0]
 a2[7]=a2[1]
 a2[8]=a2[2]
 a2[9]=a2[3]
 a2[10]=a2[4]
 a2[11]=a2[5]
 a2[12]=[:r]*8
 b2[0]=[1]*8
 b2[1]=[1,1,1,1,1,1,1,1]
 b2[2]=[1,1,2,1,1,2]
 b2[3]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
 b2[4]=[1,1,2,1,1,2]
 b2[5]=[1]*8
 b2[6]=[1]*8+b2[0]
 b2[7]=b2[1]
 b2[8]=b2[2]
 b2[9]=b2[3]
 b2[10]=b2[4]
 b2[11]=b2[5]
 b2[12]=[1]*8 
 c2=[100,120,140,160,180,200,220,200,180,160,140,120,100]
 #calc starting index and any sleep for tied notes for part 2
 sv2=getmatchd(remainingBars,bpba[startSec],b2[startSec])

 puts "2: "+sv2.to_s #print start index and sleep time
 use_synth :blade
 in_thread do
 for i in startSec..a2.length-1
 use_bpm c2[i]
 sleep sv2[1] #sleep for tied note (>0 if tied)
 for j in sv2[0]..a2[i].length-1
 play a2[i][j],sustain: b2[i][j]*0.9,release: b2[i][j]*0.1,pan: p2
 sleep b2[i][j]
 end
 sv2=[0,0] #reset so subsequent iterations of j loop in full and no tied sleep
 end
 end

 a3=[]
 b3=[]
 a3[0]=[:r]*8
 a3[1]=[:r]*8
 a3[2]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
 a3[3]=[:e4,:f4,:g4,:e4,:f4,:g4]
 a3[4]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
 a3[5]=[:c4,:g3,:c4,:c4,:g3,:c4]
 a3[6]=[:r]*8+a3[0]
 a3[7]=a3[1]
 a3[8]=a3[2]
 a3[9]=a3[3]
 a3[10]=a3[4]
 a3[11]=a3[5]
 a3[12]=[:r]*8
 b3[0]=[1]*8
 b3[1]=[1]*8
 b3[2]=[1,1,1,1,1,1,1,1]
 b3[3]=[1,1,2,1,1,2]
 b3[4]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
 b3[5]=[1,1,2,1,1,2]
 b3[6]=[1]*8+b3[0]
 b3[7]=b3[1]
 b3[8]=b3[2]
 b3[9]=b3[3]
 b3[10]=b3[4]
 b3[11]=b3[5]
 b3[12]=[1]*8
 c3=[100,120,140,160,180,200,220,200,180,160,140,120,100]
 #calc starting index and any sleep for tied notes for part 3
 sv3=getmatchd(remainingBars,bpba[startSec],b3[startSec])

 puts "3: "+sv3.to_s #print start index and sleep time

 use_synth :tri
 in_thread do
 for i in startSec..a3.length-1
 use_bpm c3[i]
 sleep sv3[1] #sleep for tied note (>0 if tied)
 for j in sv3[0]..a3[i].length-1
 play a3[i][j],sustain: b3[i][j]*0.9,release: b3[i][j]*0.1,pan: p3
 sleep b3[i][j]
 end
 sv3=[0,0] #reset so subsequent iterations of j loop in full and no tied sleep
 end
 end

 a4=[]
 b4=[]
 a4[0]=[:r]*8
 a4[1]=[:r]*8
 a4[2]=[:r]*8
 a4[3]=[:c4,:d4,:e4,:c4,:c4,:d4,:e4,:c4]
 a4[4]=[:e4,:f4,:g4,:e4,:f4,:g4]
 a4[5]=[:g4,:a4,:g4,:f4,:e4,:c4,:g4,:a4,:g4,:f4,:e4,:c4]
 a4[6]=[:c4,:g3,:c4,:c4,:g3,:c4]+[:r]*6 #Tied note added here: to show how its dealt with start at bars 14 then 15
 a4[7]=a4[1]
 a4[8]=a4[2]
 a4[9]=a4[3]
 a4[10]=a4[4]
 a4[11]=a4[5]
 a4[12]=[:c4,:g3,:c4,:c4,:g3,:c4]
 b4[0]=[1]*8
 b4[1]=[1]*8
 b4[2]=[1]*8
 b4[3]=[1,1,1,1,1,1,1,1]
 b4[4]=[1,1,2,1,1,2]
 b4[5]=[0.5,0.5,0.5,0.5,1,1,0.5,0.5,0.5,0.5,1,1]
 b4[6]=[1,1,2,1,1,4]+[1]*6 #Tied note added here: to show how its dealt with start at bars 14 then 15
 b4[7]=b4[1]
 b4[8]=b4[2]
 b4[9]=b4[3]
 b4[10]=b4[4]
 b4[11]=b4[5]
 b4[12]=[1,1,2,1,1,2]
 c4=[100,120,140,160,180,200,220,200,180,160,140,120,100]
 #calc starting index and any sleep for tied notes for part 4
 sv4=getmatchd(remainingBars,bpba[startSec],b4[startSec])

 puts "4: "+sv4.to_s #print start index and sleep time

 use_synth :saw
 in_thread do
 for i in startSec..a4.length-1
 use_bpm c4[i]
 sleep sv4[1] #sleep for tied note (>0 if tied)
 for j in sv4[0]..a4[i].length-1
 play a4[i][j],sustain: b4[i][j]*0.9,release: b4[i][j]*0.1,pan: p4
 sleep b4[i][j]
 end
 sv4=[0,0] #reset so subsequent iterations of j loop in full and no tied sleep
 end
 end

 end #level
end #fx

At the beginning of the program restart is set as a variable holding the command to run the FrereJaquesControlled-RFauto program referred to above. The next two lines turn off some of the output in the log to make it easier to see the printed statements the program produces. bpba is an array or list which holds the number of beats per bar for each of the 13 sections in the piece. In this example the time signature of 4/4, or 4 crotchets per bar, is constant throughout, so each of the 13 entries is set to 4, but the code will handle pieces where the time signature changes between sections, as is the case in the second example detailed later on.
st[ ] is an array which will hold details of the starting section and the number of bars to process to determine the starting note. p1 to p4 are the pan positions for each of the four parts.

The first additional procedure is numbeats. This calculates the number of crotchet beats in a given list of note durations. Thus puts numbeats(b1[0]) would print 8 in the log window, as there are 8 crotchets in the first section, but puts numbeats(b1[6]) would give 16, as this middle section is twice as long as the others.
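
As a quick self-contained check (the numbeats definition is repeated here and the duration lists for sections 0 and 6 are written out literally, so this can be pasted into an empty buffer on its own):

define :numbeats do |durations|
 l=0.0
 durations.each do |d|
 l+=d
 end
 return l
end
puts numbeats([1,1,1,1,1,1,1,1])       #b1[0] => 8.0
puts numbeats([1]*8+[1,1,1,1,1,1,1,1]) #b1[6] => 16.0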

The second added procedure, startDetails, determines which section we need to start playing from, and how many bars remain to be processed from the start of that section in order to determine the index, or position, of the first note to be played. Although it is listed at this point, to keep the procedure definitions together, it requires some data which can only be ascertained after the note and duration data of the first part have been defined. If you look just below that point in the program listing you will see some code which sets up the list bNumberSecStart. This calculates and holds the starting bar number of each section, counting from 0 rather than 1. Basically it calculates the number of beats in each section and divides it by the number of beats per bar for that section, which is held in the bpba array referred to above. Also in this part of the program the maximum number of bars in the piece, bmax, is calculated, and this is used to limit the requested start bar bs if it is too high. Returning to the startDetails function, this is fed with three parameters: the requested start bar bs, the list of starting bar numbers for each section bNumberSecStart, and the array b1 which holds the lists of durations for the 13 sections b1[0] to b1[12].
It calculates which section to start from by subtracting the number of bars in each section in turn from the value of bs, until the remaining number falls within the current section. The starting section is then one less than this section index, and the remaining bars are those used to work out the starting note index for the requested bar.
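
As a worked example, here is a small plain-Ruby sketch of that calculation, using the section bar offsets for this piece (twelve 2-bar sections plus one 4-bar middle section, which is what bNumberSecStart should contain) and the requested start bar 15 used later in the article:

bNumberSecStart=[0,2,4,6,8,10,12,16,18,20,22,24,26,28]
bn=15 #requested start bar
startSecIndex=0
remainingBars=bn
while bn>bNumberSecStart[startSecIndex]
 remainingBars=bn-bNumberSecStart[startSecIndex]
 startSecIndex+=1
end
puts [startSecIndex-1,remainingBars].inspect #=> [6, 3] start in section 6, 3 bars into it

This matches the "Start Section=6" and "Remaining Bars to find starting index=3.0" lines in the log output shown later (in the real program the bar counts are floats, hence the 3.0).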

This task is now passed to the third additional procedure, getmatchd, which is given three parameters: bn, the number of bars remaining to be processed after removing those which have already taken place in the previous sections; the beats per bar bpb for the current section; and the list of durations for that section (durations), e.g. b1[6] if we are going to start within that section. matchbeat is set to the target beat to find within the section. The -1 adjusts for the fact that the requested start bar counts from 1, whereas the calculations we have done, essentially on elapsed bars, count from 0. We now set up two variables l and x. x indexes the position, starting from 0, as we move through the section in a loop, while l holds a running total of the durations within the loop. We continue round the loop until the increasing value of l matches the target beat count in matchbeat within an error smaller than the smallest note duration we will use, OR until l just exceeds this target. This latter case will occur if there is a tied note over the target beat. For example, if we have two bars each with four crotchets and the fourth crotchet is tied to the fifth one "over the bar line" into the target start bar, so that we sound a minim note, then the match will actually be after the fifth crotchet, at the start of the sixth crotchet. In this case we will start that part playing at the sixth crotchet, but we will insert a rest so that it is delayed and starts playing in synchronism with the other 3 parts. So the procedure getmatchd returns two pieces of information: first the index of the starting note to be played in the section, and secondly the rest value (if any) required, which allows for a tied note match.
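
To make that concrete, here is a plain-Ruby trace of the getmatchd loop for the tied-note case discussed below, using the part 4 durations for section 6 (b4[6]), 4 beats per bar, and the 3 remaining bars found above:

durations=[1,1,2,1,1,4]+[1]*6 #b4[6], including the 4-beat tied note
matchbeat=(3-1)*4             #3 remaining bars at 4 beats per bar => target of 8 beats
l=0.0;x=0
until l>=matchbeat || (l-matchbeat).abs < 0.0625
 l+=durations[x]
 x+=1
end
puts [x,l-matchbeat].inspect #=> [6, 2.0] start at note index 6, after sleeping 2 beats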

The final small procedure uses some Ruby code to split the returned version number of the Sonic Pi being used, enabling us to determine whether it is version 2.11 or 2.11.1 (or later) and so adjust the OSC sync code as shown previously. In the first part, we wait for an OSC sync to be received containing the play command code tr=1. Once this has been received we proceed to the next part, which is now placed in a thread so that it can continue to operate as the rest of the program proceeds on its way. This thread waits for another OSC message to be received, this time with tr set to -1, the stop code. When this is received it uses a system command to call the FrereJaquesControlled-RFauto.rb program using the command line variable restart set up at the start of the program. As discussed, this will cause the program to stop and then rerun, awaiting another play command from the remote GUI.

The remainder of the program is largely the same as the 4 part round we discussed towards the beginning of this article. However there are one or two changes. First, the whole program is wrapped in two fx calls. The first, with_fx :reverb, adds some reverb to the round as it is played, and the second, with_fx :level, sets an overall :amp value of 0.7 for the volume. Below the data for the first part is the extra code to work out bNumberSecStart and bmax as discussed above. The startDetails procedure is called to work out the starting section and the number of bars to be processed for this part. In fact the same data can be used for all four parts in this example, so it doesn't have to be calculated more than once. The results are stored in the list st[ ], with startSec being set to st[0] and remainingBars to st[1]. From there on each of the four parts is processed in exactly the same way.

First we use the getmatchd procedure to find the starting note index and the sleep value (if any) to compensate for a tied note. This piece wouldn't normally have any tied notes, but to illustrate what happens I have introduced one at the end of the first tune played in part 4. You will see this in the comments alongside the lines for a4[6] and b4[6]. You can compare them with the corresponding lines in the second listing in the article. We will discuss this further when the program is played. The call to the getmatchd procedure is sv1=getmatchd(remainingBars,bpba[startSec],b1[startSec]), and the two values this returns are stored in sv1[0], the starting note index, and sv1[1], the sleep value (if any).

Now all we have to do to adjust the starting note is to alter the starting point of the two loops i and j which control the playing of the notes. Instead of for i in 0..a1.length-1 we now have for i in startSec..a1.length-1, and instead of for j in 0..a1[i].length-1 we now have for j in sv1[0]..a1[i].length-1. Also, just before the j loop we insert sleep sv1[1]. If there is a tied note involved, this sleep value will be greater than 0 and will allow for the fact that this loop is starting a bit later than the other parts. Finally we reset the values in sv1 to 0 after the j loop has finished, so that for subsequent sections all of the loop contents are used, with j starting from 0 and with no sleep before the section start. If you look at the code you will see that the three remaining parts are processed in exactly the same way, each with their own sv values sv2, sv3 and sv4 calculated and applied.

So after that mammoth discussion we can try out the code. Because the program is so long it will not run directly in a Sonic Pi buffer. Instead, in an empty buffer you need to type

run_file "~/Documents/SPfromXML/FrereJaquesControlled-RF.rb"

Obviously alter the path if you have the file stored in another folder. Make sure that the FrereJaquesControlled-RFauto.rb file is in place as well, that the restart variable is pointing to its correct location, and also that the auto file has the correct location of the FrereJaquesControlled-RF.rb file in it. It is easy to get one of these wrong, so check them carefully. Now start the StartBarSelector GUI and then run the Sonic Pi program. It won't do a lot, as it is waiting for a play command from the GUI. Click on the red play rectangle and all being well it should start playing. Click on the red stop rectangle and it should stop AND relaunch the Sonic Pi program. All being well you can click on play again to restart it. If that doesn't happen look at the debugging section later on. You should be able to play and stop at will. The buttons are highlighted to show which should be pressed next. You should also be able to alter the displayed start bar at any time using the three rectangles provided. The next time you press play, the displayed value will be used.
To see how the tied note works, start from bar 14. You should hear the tied note held over and playing at the same time as part 1 restarts for the second time. Now change and start from bar 15. You will hear part 1 starting, but the second half of the tied note in part 4 is replaced with a sleep command, and part four starts from the next note (which is a rest), so you will hear nothing from part 4 until it comes in (at the correct time) for the second time through. You can have a look at the screen output where you will see

“BS selected is 15”
“Total number of bars=28.0”
“Start Section=6”
“Remaining Bars to find starting index=3.0”

“1: [8, 0.0]”
“2: [8, 0.0]”
“3: [8, 0.0]”
“4: [6, 2.0]”

You will see that part 4 starts section 6 at index 6, with a sleep value of 2.0 corresponding to the overrun of the tied note, whereas the other three parts have a 0.0 sleep value. Part 4 starts playing with index 6, which is the seventh entry in section 6, on the third beat of the third bar, whereas the other parts, which all have 8 crotchet rests at the start of section 6, start on the 9th entry (index 8) at the start of the third bar in that section. The sleep 2 delays part 4 so that it starts its third beat in synchronism with the other three parts when they get there.

a4[6]=[:c4,:g3,:c4,:c4,:g3,:c4]+[:r]*6  #Tied note added here: to show how it's dealt with; start at bars 14 then 15
b4[6]=[1,1,2,1,1,4]+[1]*6   #Tied note added here: to show how it's dealt with; start at bars 14 then 15

For comparison part 1 without the tied note is shown below

a1[6]=[:r]*8+a1[0]
b1[6]=[1]*8+b1[0]

Debugging
It can be quite tricky to sort things out if the system doesn’t work. From my experience the usual culprit is an incorrect path or filename in the places where these are included in the scripts. Check very carefully the saved names of the two programs FrereJaquesControlled-RF.rb and FrereJaquesControlled-RFauto.rb and the places where these are referenced in the main program and in the auto program.
You can check the behaviour of the auto program by running it from the  command line with
/usr/bin/ruby ~/Documents/SPfromXML/FrereJaquesControlled-RFauto.rb
You can check the StartBarSelector GUI with the test program detailed in the article. You can also run the GUI from the Processing IDE, and uncomment the debugging print statements in the program, which will give you some output in the console area beneath the sketch window.

You can download all the programs in the project, including the source file for the GUI (in text format) which you can paste into a new blank sketch window and then save as StartBarSelector. The link is here

Finally, there is also a second example playing a slightly longer piece by Monteverdi (Beatus Vir) which can also be controlled by the GUI. It incorporates a time signature change at bar 62 which is also handled by the software. Try starting two bars earlier to hear the time signature change take place from 4/4 to 6/4 time.
The files for this second example, BeatusVirControlled-RF.rb and BeatusVirControlled-RFauto.rb, are included with the link above.

I have recorded a series of 6 videos which go through the operation of these programs. You can access them here.

Using Processing to control Sonic Pi

Having been inspired by the superb Christmas Card from MeHackit (do look at it and download and play with the code) I resolved to take another look at Processing and how it can be used to control Sonic Pi. Previously I had only used it to run a conversion script to convert MusicXML files to Sonic Pi format, but seeing this Christmas Card showed that it is capable of far more. As a newbie to using the program in earnest I looked at some of the numerous examples at https://processing.org/examples/ and saw how easy it was to get information on mouse coordinates. I chose the example Constrain, which has a filled ellipse follow the mouse coordinates but bounded by an enclosing box, and decided to modify this to send coordinates to Sonic Pi using the OSC sync features added in Sonic Pi 2.11. With reference to the MeHackit code, it was easy to add OSC commands to send the mouse x and y coordinates to Sonic Pi, where they could be received and scaled to control the note pitch and cutoff values for notes played with the tb303 synth. This is just an example. You could control any parameters you wish, or add detection for mouse down as well in a more complex example. The Processing script used is:

import oscP5.*; //libraries required
import netP5.*;

OscP5 oscP5;
NetAddress sonicPi;

float mx;
float my;
float easing = 1; //set to 1 for immediate following; reduce (e.g. 0.05) to smooth the motion
int radius = 24;
int edge = 100;
int inner = edge + radius;

void setup() {
size(640, 360);
noStroke();
ellipseMode(RADIUS);
rectMode(CORNERS);
oscP5 = new OscP5(this, 8000);
sonicPi = new NetAddress("127.0.0.1",4559);

}
void sendOscNote(float mx,float my) {
OscMessage toSend = new OscMessage("/notesend");
toSend.add(mx); //add mx and my values as floating numbers
toSend.add(my);
oscP5.send(toSend, sonicPi);
println(toSend);
}
void draw() {
background(51);

if (abs(mouseX - mx) > 0.1) {
mx = mx + (mouseX - mx) * easing;
}
if (abs(mouseY - my) > 0.1) {
my = my + (mouseY- my) * easing;
}

mx = constrain(mx, inner, width - inner);
my = constrain(my, inner, height - inner);
fill(76);
rect(edge, edge, width-edge, height-edge);
fill(255);
ellipse(mx, my, radius, radius);
sendOscNote(mx,my); //send the mx and my values to SP
}

To use it, install Processing 3 from https://processing.org/ and paste the script into the sketch window which opens when you run it. Save the sketch with a suitable name and location. You need to add the oscP5 library from the Sketch => Import Library… => Add Library menu selection. You can then run the sketch and the ellipse should follow the mouse around inside its rectangle.

On the Sonic Pi side paste in the code below and run it.

use_synth :tb303
live_loop :os do
  nv=sync "/notesend"
  #puts nv #uncomment and comment next line to see OSC input
  #scale the mx and my values in nv[0] and nv[1] appropriately
  #raw mx 124(left)-516(right) and raw my 124(top)-236(bottom)
  puts (40+(nv[0]-124)/392*60).to_i.to_s+" "+(190-nv[1]/2).to_s
  play (40+(nv[0]-124)/392*60).to_i,cutoff: (190-nv[1]/2),sustain: 0.04,release: 0.01
  sleep 0.05
end

Run the SP program, which will wait for input from the Processing script. Run the Processing script and move the mouse around to alter the position of the filled ellipse (circle). Moving it left-right will alter the pitch of the note from 40 to 100. Moving it up and down will alter the cutoff from 72 (bottom) to 128 (top). Note it is possible to silence the note by moving the mouse to the bottom right, where the pitch 100 is significantly above the cutoff value 72, so you hear nothing.

I hope that this simple example will inspire both you and me to explore further the use of Processing with Sonic Pi.

Here is a link to a video of the files in action

Sonic Pi plays Mozart’s Requiem in D Minor


Ever since the very early days of using Sonic Pi version 1, I had aspirations to see just how well the program might be used to give authentic renditions of pieces of Classical Music. Over the three and a half years that have intervened, this has slowly become more achievable, both as the program has matured and had extra features added, and as my own experience of using various techniques with the program has grown. The last year, and indeed the last six months, have perhaps been the most exciting, with improvements and new techniques allowing me to attempt the rendition of large scale musical works using Sonic Pi. You may well ask why do it in the first place, as other programs such as Sibelius, Logic Pro, Ableton and MuseScore2 can already go much further and give more accurate renditions. I suppose it is because of the challenge, and because few others have explored this in depth with Sonic Pi, and I get a real buzz from listening to a piece for which I have created the code (if not the music itself) for Sonic Pi to play. This is especially so when the music can usually be played on a Raspberry Pi, as well as its more powerful Apple or Windows big brothers, using Sonic Pi. I have also enjoyed preparing baroque music and earlier genres in particular. There is something rather cool about playing a piece purporting to be written by Henry VIII on Sonic Pi, and of course, there are not many copyright problems in doing so!

The basic thing that Sonic Pi does is to play a note of a specified pitch, for a specified duration, and with a specified timbre. Everything else has to be built up from there. Subtleties can be added by adjusting the envelope of the sound: how it is attacked, and how it may decay away. Sonic Pi also allows various effects to be applied, the most useful of which for Classical Music purposes is reverberation, which can give a feeling of depth and spaciousness. However, it is daunting to see how to tackle a large scale work with hundreds if not thousands of notes, and perhaps, for a reasonable sized orchestra, 20 different parts played by a variety of instruments. Initially I tried smaller works, with fewer instruments, perhaps two to four, and of shorter duration. By choosing works carefully, many instruments could be reasonably rendered by means of the built in synths, particularly :tri, :beep and :saw, and later by the welcome addition of :piano. However, on the timbre side the thing that really allows you to get a more realistic rendition of classical instruments is to use sample based instruments. As I have discussed in previous articles on this site, you can get a very reasonable rendition of some instruments simply by using a single sample and playing it at different rates to achieve different pitches. Using the built in :ambi_glass_rub sample, which plays an :fs5 note, gives a pleasant sounding instrument when played over a variety of rates, and using the rpitch: option it is fairly easy to play any pitch you want. Things get even better when you look at some of the collections of samples that have been made of orchestral instruments, some of which are free to use. I chose to utilise the Sonatina Symphonic Orchestra samples (although there are now potentially better sets of samples), because it was moderately easy to develop one or two defined functions in Sonic Pi to let you play notes using the range of instruments it covers, in just the same way that you would use a built in synth. Over the last couple of years I have refined these functions so that they can handle chord playing as well, and also dealt with the problems that can arise in loading large numbers of samples, and I now use these routinely when developing Classical Music for Sonic Pi. For the future it would be good to develop the ability to use .sfz file formats directly, to play easily any sample based instrument defined in that way. What I have done for the Sonatina library is to work specifically with the samples which it incorporates, but side stepping the .sfz files it also defines.
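
As an illustration of the single-sample idea (a minimal sketch: the helper name gl is just for this example, and :ambi_glass_rub sounds an :fs5), rpitch: simply needs the number of semitones between the wanted note and the sample's own pitch:

define :gl do |n|
 sample :ambi_glass_rub, rpitch: note(n)-note(:fs5)
 sleep 1
end
gl :fs5 #the sample at its original pitch
gl :a5  #three semitones higher
gl :fs6 #an octave higher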

The second major hold-up to tackling large scale pieces is the sheer number of notes which need to be entered into the Sonic Pi code. This also quickly brings you up against the size limitation for the length of code possible in a single Sonic Pi buffer. Both of these problems have been reasonably solved in the last six months. The size limitation is solved by means of the new run_file command introduced in Sonic Pi 2.11. Instead of running code from a Sonic Pi buffer, it lets you run it directly from an external text file without the length limitations. The number of notes problem is made easier by means of a great script I found, written by Japanese Sonic Pi user Hiroshi Tachibana, using the program "Processing". The site was in Japanese but I managed to work out how to use it, and made some slight adjustments to it. It also got me using the free program MuseScore 2, which is a music notation program which can both enter and display musical scores, and also play them using midi, upon which it is based. It has the advantage that it is easy to edit the music, and to create individual parts from a score. Using the program it is possible to take a complex musical part, for example a piano part, and to split it up into several simpler parts which Sonic Pi can play, and which, when played together, sound just like the original complex part. Thus the Kyrie from the Mozart Requiem, which has 18 scored parts, in fact requires 26 separate parts to play in Sonic Pi. Using MuseScore2 and an existing midi score for a given piece, you can edit the parts in MuseScore, and extract them in MusicXML format to allow them to be used with the Processing script, and each run of the script will produce code for each of the 26 parts. These can then be pasted together into a (large) text file. The Processing script is set up to produce code that is ready to play with a built in synth in Sonic Pi, using the play command. The notes can be retained, but if you want to play the parts with sample based voices, you then have to substitute the code used to play them. Luckily, once you have worked out how to do this once, using my functions which access the Sonatina Symphonic Library, you merely have to specify which instrument to use, and make sure that the relevant samples are preloaded at the start of the program. I have written three articles on using the Processing script which you can access here

The most difficult part of producing a reasonable performance remains: adding some dynamic variation (and possibly tempo variation), as specified in the composer's score. In principle it is easy to alter the dynamics. You simply alter the amp: level of a note. In practice, to do this for every note would add a third as much data again: note pitch, note duration and note amplitude. Instead I usually utilise the fx :level command. By using the structure below, you can alter the volume of a part at specified times, and even allow for crescendos and diminuendos by using amp_slide: parameters. The example shows the level control applied to the wind parts in the Benedictus of the Requiem. wff, wf, wm and wp are the dynamic levels to be used, and the control function switches between these at various bars during the performance. The note playing section would follow, inside the with_fx structure, and thus be affected by it. The in_thread structure means that the control takes place alongside the note playing and thus affects the volume of the notes, giving the desired dynamics.

with_fx :level do |vw| #wind level
  in_thread do
    use_bpm 51
    wff=1;wf=0.8;wm=0.6;wp=0.4
    control vw,amp: wm
    sleep 12
    control vw,amp: wp #b4
    sleep 58.5
    control vw,amp: wff #b18+2*c+q
    sleep 9.5
    control vw,amp: wp,amp_slide: 4 #b21
    sleep 64
    control vw,amp: wf,amp_slide: 4 #b37
    sleep 5.5
    control vw,amp: wp,amp_slide: 0 #b38+c+q
    sleep 33
    control vw,amp: wf #b46+2c+q
    sleep 14
    control vw,amp: wff #b50+q
    sleep 15.5
    use_bpm 167
  end

  #note playing section follows....
end #of with_fx :level

The setup process is quite tedious, as you have to calculate the gaps (in numbers of crotchet beats) between the various level changes, but it works quite well. I also find that you can nest with_fx :level blocks and their effects multiply together, so I also use overall fixed level effect sections to adjust the relative strengths of different sections of instruments.
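
As a small sketch of that multiplying behaviour, nesting two :level blocks gives a combined level which is the product of the two amp values, so the note below sounds at roughly 0.5 * 0.7 = 0.35 of full volume:

with_fx :level, amp: 0.5 do  #overall section level
 with_fx :level, amp: 0.7 do #dynamic level within the section
  play :c4, sustain: 1
 end
end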

There remain further challenges to improve the accuracy of the final sounds. One thing I have not tackled yet is to come up with an easy way to change the style of playing, for example from legato to staccato. You can do this for individual notes by adjusting the ADSR envelope, but it is difficult to do so other than as a global setting for a particular instrument part without holding a lot of extra data. At present I set the envelope for a part and it remains constant throughout a movement, and consequently I ignore one or two staccato notes. Something like a with_fx :envelope command might help, offering a similar solution to the way in which the with_fx :level command can be used. Another problem can be very long sustained notes. One or two of the samples I use are not long enough to cover the required length of note, and so it either fades out, or in some cases I repeat the note, which means that it doesn't sustain through exactly as intended in the score. Again on timing, it is quite difficult to do a rallentando if the parts are all moving and changing pitch at different times. Consequently the ends of some of the movements at present are rather more abrupt than I would like. I have recently written some code which can produce a more realistic rallentando, but it would require some manual changing of the code, replacing parts of the duration lists of notes with new calculated values. I have asked the question whether a bpm_slide command might be a possibility in Sonic Pi, but I think it will present some problems to implement. Also, the more processing that is loaded into the program, the harder it becomes for the more modest computing power of a Raspberry Pi to be able to play it. Another problem illustrated by this piece is the way in which the choral parts are represented. I use a chorus "wah" sample, which lets you hear notes with some articulation at the beginning, but of course you get nothing of the words. I have already seen some mention in connection with Sonic Pi of looking at vocoder technology whereby words can be generated and sung at different pitches. There are already apps to show this, but I think there is a long way to go before the technology might be usable on Sonic Pi.

So to finish, I hope that you enjoy my attempt at rendering the Mozart Requiem on Sonic PI. It has been challenging but fun to do, and I find the end result quite pleasing, albeit imperfect and capable of further improvement.

You can see a youtube video of the completed Requiem here

A journey with Sonic Pi

Recent comments on the sonic pi google group got me thinking about my own journey with this incredibly versatile program.

I first came across Sonic Pi on version 1, but on my first visit to the program I think I played with it for about 5 minutes and thought hmm, this is quite nice, but is rather limited in what it can do, particularly within the limitations of the original Raspberry Pi on which I was trying it out. I did however persevere, and returned to it for further experimentation. In those early days there was little documentation, apart from a cheatsheet (pdf) which had one or two very brief examples of some of the commands available, which weren't that many. At that stage I had never come across github, and certainly had not looked at all at the coding of Sonic Pi. I had also never used Ruby, but I had over the years been very interested in programs that let you generate music, something that I had started doing with a Nascom kit computer in the 1970s and later with an early Apple Computer and with the BBC Micro. On that machine, things became quite advanced, and I used a tape based program called The Music System (TMS) which I converted to run on an Econet network, and with which, using note input directly from a keyboard, I was able to get 24 BBC computers playing a rendition of the last movement of Bach's Brandenburg Concerto no 2. The machines were triggered to start via the network, and ran on their own thereafter, remaining more or less in sync until the end of the piece. (Something not yet realised on Sonic Pi 35 years or so later.) Here is a video I found of TMS playing a Rondo by Mozart

Later I used programs like Sibelius and Logic Pro on Mac computers, but although these enabled you to enter and play music, they lacked the buzz of working with the early BBC TMS program. On the Raspberry Pi I started making music using Python together with the lilypond music engraver program, and the Timidity midi player. This brought back some of the buzz, and I enjoyed working with Mozart minuets with the bars selected by throwing dice.

When I started to play with Sonic Pi version 1 around Summer 2013, I experimented with the commands and the synths available, and had fun particularly with the FM synth, trying out different parameter settings experimentally in the absence of much documentation and making some musical noises utilising various counting loops. At the same time I was learning basic Ruby constructs, and the midi numbers associated with different notes. A midi number/note chart became a permanent feature as an image displayed on my desktop. Good as it was to experiment in this way, my interest has always been in trying to play classical pieces by electronic means. This also meant that I had to explore how to play more than one part at a time. I began to explore the use of threads. In version 1 these were not all that satisfactory, and the synchronism sometimes began to drift if the threads were too long. However it was possible to start playing simple piano pieces using the (then) :saw_beep synth, and looking at my archives I've dug out several pieces including a version of Eleanor Rigby and a Bach Two Part Invention, both dated October 2013, and stuck them on my original Pi. They sound OK, although the timing is sometimes a bit wobbly because they are both running near the limit of what was then obtainable. There is also a piece I called Strange Noises, which explored the use of the FM synth which I mentioned above. At this stage I was also beginning to get to grips with github and learning how to download and build Sonic Pi. You can find some of these files in Sonic Pi 1 format here. Note they WILL need some amending to play on Sonic Pi 2.x

Sonic Pi really started to get interesting, and to afford greater possibilities as far as Classical Music was concerned, with the advent of version 2.0. Because I was by this stage au fait with downloading and building the source, I was able to use many of its features for several months before it was officially released on 2nd September 2014, and in fact ever since I have been utilising the cutting edge version of Sonic Pi, and have built it for Raspberry Pi, Mac, Windows and Linux (Ubuntu). This in itself has been quite an education, especially getting to grips with building on Windows, and I have learned a lot about the procedures involved in the process. Along the way I have gained a fairly good understanding of how Sonic Pi is put together, and how the code of the various parts operates. This has also been very useful in becoming more confident in building other packages unrelated to Sonic Pi, and in debugging some of the problems that crop up: for example, when I added an IQaudio interface there were initially problems which prevented it from working, and even when the driver for the i2s interface was made compatible with jackd, the program which interfaces audio output to Sonic Pi, there were further problems to get it working satisfactorily.

However, back to Sonic Pi. With the advent of the 2.x series, there were immediately some code breaking changes, but it was possible to amend my existing code, to accommodate the changes, which were all to the good. One immediate benefit was that the timing system made it possible for threads to work properly, and so it was much easier to produce polyphonic music that played accurately. Another benefit was the introduction of musical notation, which made it much easier for me (as a classical musician) to enter note data into Sonic Pi. The introduction of better envelope control, initially Attack Sustain Release, but in later versions full Attack Decay Sustain Release made it easier to control note shapes and durations. By this stage I had come across the Ruby zip command which enabled me to join or zip together lists of notes and associated durations and then to use .each notation in a loop to process each pair in playing a particular note. This enabled me to play passages of legato notes, and by using the ADSR envelope to alter the degree to which the notes were legato or staccato.

The next major headway in trying to accurately play classical music came with an investigation of using samples to generate notes. Now parameters such as rpitch: make this fairly easy to do, but initially I played with the :ambi_glass_rub sample built in to Sonic Pi. This has a fairly pure defined pitch of :fs5 (F sharp in octave 5). By playing it at rate: 2 you could go up an octave, and at rate: 0.5 down an octave. At first I found other notes by experiment, but later realised that to move from one semitone to the next you merely multiply the rate by the twelfth root of 2. Thus if you do this 12 times, you end up increasing the pitch by 12 semitones, or one octave, resulting in (2**(1.0/12))**12 = 2, i.e. a doubling of the initial rate.
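
Expressed as code (a minimal sketch using the same built-in sample), an offset of n semitones needs rate: 2**(n/12.0), so the following plays a rising chromatic scale starting from the sample's native :fs5:

12.times do |n|
 sample :ambi_glass_rub, rate: 2**(n/12.0) #each step is one semitone higher
 sleep 0.5
end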

I wrote a little procedure definition to enable me to play any specified note accurately at pitch, and used this sample based "synth" to play an Adagio by Mozart which he wrote for a Glass Armonica, an instrument in which wet glass disks are rubbed to produce a sound, just as you can with your finger on a wine glass. I had just been to Brussels where I had seen this instrument in the superb Musical Instruments Museum. I did several other projects around this time, including a peal of Church Bells, and a harp playing some Bach. I came across a collection of orchestral samples entitled Sonatina Symphonic Orchestra, and I began to use samples from these to produce realistic instrument sounds. Some of the first efforts, particularly to produce a piano sample based voice (this was before the piano synth was added to Sonic Pi), were a bit tedious, involving experimentally based settings for the rate multipliers to use. Six months later I revisited this sample source and wrote a comprehensive set of procedures to enable up to 55 different instruments to be utilised as sample based "synths" for Sonic Pi (further modifications were added later). However, this also highlighted a problem which had become increasingly exasperating for me in using Sonic Pi. Because of the mechanism used to let the three major parts of Sonic Pi (the GUI or Graphical User Interface which you see, the Ruby based code which processes the code in the buffers, and the SuperCollider synth scsynth which produces the sounds you hear) communicate internally, there is a maximum character count that you can have in a single buffer, which if exceeded prevents Sonic Pi from working. In the early days of version 2.x some work was done to try to eliminate this by using an alternative mechanism (TCP instead of UDP) to allow this communication to happen. Although it worked, and eliminated the limit, it was not fast enough, particularly on the Pi, and following some further upgrades to Sonic Pi code it has not kept pace and is currently not usable. This led me to explore ways to overcome this limitation, and I discovered that the cue and sync system in Sonic Pi would actually work between different buffers. So the workaround was to put most of the procedure definitions for a piece in one buffer, and then cue a second buffer (and sometimes a third!) to run once the first buffer had been "run". Sometimes some code had to be duplicated in the two buffers, but it was possible to run pieces that were too long to work from a single buffer. It made it a bit more complex to set things up, and you had to start the second buffer running (waiting for a cue) and then the first (which provided that cue when it had finished its run) to get things to work.
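
A minimal sketch of that two-buffer workaround (the procedure name playpiece is purely illustrative): buffer 1 is run first and waits at its sync; buffer 0 is then run, defines the procedures and sends the cue.

#buffer 1 - run this first: it waits for the cue
sync :defs_ready
playpiece

#buffer 0 - run this second: define the procedures, then cue buffer 1
define :playpiece do
 play_pattern_timed [:c4,:e4,:g4,:c5],[0.5]
end
cue :defs_ready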

In April 2015 Sam Aaron added an interface to allow the Sonic Pi version of Minecraft to interact with Sonic Pi. Although I am not very interested in Minecraft per se, I enjoyed writing some code to produce various patterns, shapes and even buildings, and have them displayed with a musical accompaniment from Sonic Pi. Again for some of these I used two buffers, and in particular in a project in which a display of exploding fireworks (in Minecraft) was accompanied by Handel’s Fireworks Music, playing in Sonic Pi.

Every now and then I looked more towards using live_loops in Sonic Pi, and I enjoyed doing work on producing percussion rhythms, and various other projects outside the classical music genre. You can delve through my SoundCloud (2nd SoundCloud site) and gist repositories to see some of these. I also did some projects which involved using Sonic Pi with other devices or software. Thanks to Nick Johnstone's superb sonic_pi_cli gem, which enables you to send commands from a command line, I produced projects to build a Sonic Pi jukebox, which could be controlled either by a Ruby program, by a smart phone using the Telegram app, or, in a separate project, by the Raspberry Pi Touch Screen, and a project to interface Sonic Pi with Pimoroni's Flotilla project. I also revisited the Mozart Dice Game and, with the aid of an external Ruby program that interacted with Sonic Pi, produced a version where Sonic Pi generated the pieces and played them whilst displaying the music in a web browser on the Raspberry Pi. More recently I did some work interfacing a midi controller and a computer keyboard to play on Sonic Pi in real time. This has many "warts" but was fun to do, and lately several others have been doing useful work in this area. I also did some projects which utilised Sonic Pi as a source to drive LED displays, particularly with the 4Tronix McRoboFace.

Returning to Classical Music, the next two things which have given great impetus to being able to play significant works on Sonic Pi have been the introduction of the run_file command in Sonic Pi 2.11, and a way of converting midi files into Sonic Pi code. run_file overcomes the size limitation discussed above by sending Sonic Pi code directly from a file rather than from one of the built in buffers. It has the drawback that the code is not directly visible, and debugging can be a bit harder, but it also has the great advantage that more extensive pieces can be played on Sonic Pi. The other main drawback was the large amount of time required to enter note/duration information into Sonic Pi. Even though I have become fairly proficient at doing it, it was still a major part of the time required to prepare a piece for Sonic Pi to play. However, there is a large body of classical music which is available in midi format and which can be displayed on free software such as MuseScore. In looking to see what might be involved in converting midi format files to Sonic Pi code format, I came across an article by Japanese user Hiroshi Tachibana in which he had written a script for the program "Processing" which was able to parse midi files containing a single instrument part, which could include chords provided that they didn't include tied notes, into Sonic Pi code. Although the description was in Japanese, I managed to get this script working, and then modified it slightly so that it could be used to build up the parts for many instruments in a piece, and thus make it easier to convert pieces from midi to Sonic Pi. In the process I learned how to use the program MuseScore, which has become indispensable in modifying the parts for a classical piece which don't fit the constraints of the script, so that they can be converted. This mainly consists of duplicating complex parts and then sharing the notes between the two (or sometimes three or four) duplicate parts so that each part obeys the rules and can be converted. For example, in a recent conversion I had to split a printed part for one instrument into six different parts, some only containing perhaps two or three notes, so that each part could be converted, and when they were played together in separate threads they still sounded the same as the initial source part. The other reason for using MuseScore is that it can export parts in various formats, including midi and MusicXML, which is the required source format for the conversion script to work. Having used this system for two or three months now, I have developed a work flow which makes it quite easy and not too time consuming to convert even large complex scores to work with Sonic Pi. One I was particularly pleased with, and which has been an ambition for many years, was to convert the score of a large (23 minute) double fugue for string orchestra, written by my late Father, who was Professor of Music at Edinburgh University, so that it could be played successfully by Sonic Pi. One or two of these pieces sound fine with built in synths in Sonic Pi, but I have also combined the conversions with my work to produce sample based "synths" for Sonic Pi using the Sonatina Symphonic Orchestra samples, and some other ones besides, to produce a more realistic end effect.

As the techniques for producing these classical pieces have become better, I have also been trying to take more care to render appropriate dynamics (louds and softs) in the music. In many of my more recent conversions, a large part of the time is taken in adjusting the sustain and release settings for the various instruments, the use of reverb or gverb to make it sound more realistic, the pan positions of the instruments to help the stereo image, and the use of techniques such as the with_fx :level command to enable dynamic variations to be added to the parts. This can be very time consuming, but is generally easier than having to add individual amp: parameters to every note.

Currently I am working through Mozart’s Requiem in D minor and converting the various movements for use with Sonic Pi. This requires quite a lot of time to adjust the relative emphasis given to each part, and also has the challenge of trying to represent vocal parts, using two sets of vocal sound samples. Although by no means perfect, I get great satisfaction from being able to produce something that can be played from a Raspberry Pi3, and which gives a pleasing rendition of such large scale classical works. Nothing will sound quite like it, and to be able to create such nice sounding music in this way is more than recompense for the time taken to do so. And when I think of what is involved: selection of a suitable source, preparation and checking of individual parts, converting these parts from musicXML to Sonic Pi code, building up the parts in a text editor, and assigning musical instruments voices to each part (sometimes up to 30 distinct parts so far), checking the parts for accuracy and doing any debugging, adjusting envelopes and adding a “track” for dynamic changes: – it all seems very satisfying.

Sonic Pi 2.11 in action, with the new Scope interface

And so to the future. Already SP2.12beta is underway. Who knows where this will lead. Whatever happens, I am sure there will be much to continue my interest in utilising all that Sonic Pi has to offer, both in programming, experimenting with music code, trying out additional interfaces and much more. It is an amazing program. It can interest anyone from an 8 year old trying to pick out a simple tune using only play x, sleep y combinations of commands, to someone experimenting with a simple rhythm track and building up some live coding, to myself (nearly 69) who finds it a much more satisfying medium to work with, even than Logic Pro which I have on my Mac. I know there are many out there who feel likewise. If you have not tried out Sonic Pi, then I would urge you to have a go. I am sure you will not regret it, but be warned. It is highly addictive!

Perhaps I should end by saying that there is a fantastic community of Sonic Pi users. I have enjoyed meeting some and conversing with many. I am very grateful to the many who have helped me in so many areas of computing connected with Sonic Pi, and for the music that many have shared. However I think the biggest accolade must go to Sam Aaron, who has produced a truly phenomenal program in Sonic Pi, which has had an amazing impact upon so many.
Here he is in action

 

Boot to Sonic Pi from a USB stick (no SD card) on Pi3

Recently I ran up an image on a USB stick which can be used to boot a Pi3 (NOT Pi2 or earlier) straight into Sonic Pi 2.11, without needing an SD card. This is based on excellent documentation on the Raspberry Pi website at https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/msd.md and I claim no credit for this whatsoever other than having tried it out successfully.

Note this is still an experimental procedure to a certain extent, although with the USB drive I specify below I had no problem in getting it to work with two different Pi3s. Your Pi3 will still be bootable as normal from a standard SD card after you have carried out this procedure.

In order to set this up you will need the following:
1 A Pi3 (it will not work on a Pi2 as it uses bootcode built into the Pi3)***
2 A fresh SD card on which to build the image which will be transferred to your USB stick. I used a 16Gb Sandisk Ultra microSDHC
3 A USB stick. I used a Sandisk Cruzer SDCZ50 16GB from Amazon for £4.99 (not every USB stick will work. The article gives a list of some which have been tested)
4 An internet connection for your Pi3

*** It is possible to change this to boot from a Pi2 or earlier, using just one boot file on an SD card. Thereafter the USB stick takes over. See https://www.raspberrypi.org/documentation/hardware/raspberrypi/bootmodes/ for details.

Follow the details of the first part of the article to build up your SD card image. Use the Raspbian image dated 2016-09-23 from raspberrypi.org/downloads. Before the second stage, where you transfer the image to your USB stick, install Sonic Pi 2.11 (just released) onto the SD card. You can get it from
http://sonic-pi.net/files/releases/v2.11.0/
download the file sonic-pi_1-2.11.0-2_armhf.deb
I downloaded this via the Chromium browser on the Pi3 to the Downloads folder
Then I installed it with:-

cd ~/Downloads
sudo dpkg -i sonic-pi_1-2.11.0-2_armhf.deb
sudo apt-get install -f

You may find it then lists some redundant packages which you can remove as suggested with

sudo apt-get autoremove

I also added dtoverlays for my IQaudio amp to config.txt using sudo nano /boot/config.txt

dtoverlay=iqaudio-dacplus,auto_mute_amp
dtoverlay=i2s-mmap

which you can do also if you have one, or else add any overlays required for an alternative card, e.g. a HiFiBerry.

In order to add an auto boot into Sonic Pi you do the following.

mkdir -p ~/.config/autostart
cp /usr/share/applications/sonic-pi.desktop ~/.config/autostart

If you have an IQaudio (or alternative audio card) installed on your Pi3, you should select it as the default card in the Audio Device Settings on the Menu -> Preferences tab

Once you are happy with the SD card, you should complete the second section of the article setting up the USB stick with parted as discussed in the raspberry pi boot documentation article.

All being well, once you have finished you should be able to boot your Pi3 from the USB stick you have set up. Note it does take some time to boot, as it first checks to see if you have an SD card installed, and then switches to search for a USB device after a 5 second timeout. You will see a large rainbow coloured square on your monitor during this process.

I advise you to remove the line added to config.txt

program_usb_boot_mode=1

which was added to program the One Time Programmable memory (OTP) in your Pi3 to enable USB booting. This is an irreversible process, and you may not wish to inflict it on any other Pi3 that the card is subsequently used to boot, although the Pi3 can still boot normally with an SD card even if this has been programmed.

The original github page for the procedure is at
https://github.com/raspberrypi/documentation/blob/master/hardware/raspberrypi/bootmodes/msd.md

and there is more useful information in the parent folder (bootmodes)

This project requires some care to follow the procedures accurately, and it takes a little time to transfer the card contents to the USB stick. Although the boot time is quite long, the performance thereafter is not noticeably different from that of the SD card. Apparently you can add a resistor to one of the GPIO pins to speed up the booting by getting it to ignore the SD card, but I haven't managed to find documentation as to which pin is involved. You can also leave a blank SD card in the Pi3, which will also speed up the boot time by eliminating the timeout wait.