Dual Sensor Theremin for Sonic Pi

Previously I built a theremin based on a modified design from The MagPi magazine, which is now also the subject of a project on the raspberrypi.org site here.


Alexandra Stepanoff playing a theremin in 1930 (from the Wikipedia Theremin article)

While this is a great little project, I was not very satisfied with the end result, which didn’t sound very similar to a “real” theremin. The original theremin was an analogue instrument in which the audio frequency was produced by the interaction between two radio frequency oscillators at very nearly the same frequency, controlled by the proximity of a musician’s hand to a metal rod aerial. A third oscillator, controlled by the proximity of the musician’s other hand to a metal loop, was used to control the volume. This gave a sound which, once started by the proximity of a hand, was continuous, and whose frequency and volume could be varied smoothly. You can see an example in the picture above (from the Wikipedia article on the theremin).

The first thing I resolved to do was to use two ultrasonic sensors instead of one. I used the same circuit as the one in the raspberrypi.org article, but built it twice on the same breadboard. The first circuit used pins 4 and 17 as in the original, and the second used pins 23 and 24 for the trigger and echo signals, but was otherwise identical. I built the circuit on a RasPiO ProHat board, which gives convenient access to all the GPIO pins, as shown in the photographs below.

To read the inputs from the two ultrasonic sensors a Python script is used. Its starting point is to extract data from the two sensors using the excellent gpiozero library, included by default on the latest Raspbian Stretch image, which makes it very easy to get a series of readings from both sensors.
To send the data to Sonic Pi, OSC messages are used via the python-osc library. You can install this using:

sudo pip3 install python-osc

The complete code for my python3 script is shown below:

#!/usr/bin/env python3
#program sets up two distance sensors and exports readings using OSC to host port 4559
#using gpiozero and python-osc libraries
#written by Robin Newman, May 2018
#
from gpiozero import DistanceSensor
from time import sleep
from pythonosc import osc_message_builder
from pythonosc import udp_client
import argparse
import sys

def control(spip):
    sensor = DistanceSensor(echo=17, trigger=4, threshold_distance=0.5)
    sensor2 = DistanceSensor(echo=23, trigger=24, threshold_distance=0.5)
    sender = udp_client.SimpleUDPClient(spip,4559)

    while True:
        try:
            r1 = sensor.distance * 100 #send numbers in range 0->100
            r2 = sensor2.distance * 100
            sender.send_message('/play_this',[r1,r2]) #sends list of 2 numbers
            print("r1:",round(r1,2),"r2:",round(r2,2))
            sleep(0.06)
        except KeyboardInterrupt:
            print("\nExiting")
            sys.exit()

if __name__=="__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--sp",
    default="127.0.0.1", help="The ip Sonic Pi listens on")
    args = parser.parse_args()
    spip=args.sp
    print("Sonic Pi on ip",spip)
    sleep(2)
    control(spip)

You can type this in using the Text Editor from the Pi GUI Accessories menu, or using the nano editor from a terminal window. Alternatively you can download it from my gist site (link at the end of the post). To make it easy to start, when you have it installed on your Raspberry Pi, set it to be executable by the pi user by typing (in a terminal window):

chmod u+x theremin.py

I have recently received my Kickstarter PiJuice board after several years’ wait :-) and I decided to try it out with this project, as it would mean that the sensor setup (RasPiO board, PiJuice board and Raspberry Pi) was completely self contained. That is why I added the optional input parameter, so that I could send the OSC messages to an external computer on the local network, instead of just using them on the local Raspberry Pi. Of course, if you don’t have a second suitable computer on which to run Sonic Pi, you can do all of this using the built-in Sonic Pi on Raspbian Stretch (version 3.0.1), although you will need to have a screen, keyboard and speaker attached to your Pi to do so. (It IS possible to run Sonic Pi on a headless Raspberry Pi using xrdp, which I have done on other projects, but I don’t want to complicate this post by giving details here.)

If you are going to use a separate computer for Sonic Pi, then first make sure that your Raspberry Pi is set up to connect automatically to your network via WiFi. (I used a Pi3 with built-in wireless.) Then you can use the Raspberry Pi Configuration utility on the GUI Preferences menu to set your Pi to boot to the command line (logged in). You should also enable SSH using the same utility. When you do so, you should change the password for user pi from the default raspberry to something else you can remember; again the same utility will let you do that. Note the IP address of your Raspberry Pi on your wireless network, so that you can connect to it remotely. If you don’t want to do this, you can of course leave the Pi connected to screen and keyboard and just start the theremin.py script directly from a terminal window.

All being well, if you reboot your Raspberry Pi (you can do it with the screen and keyboard attached the first time, if you like, to check that it is working) you should be able to connect to it from your second computer running Sonic Pi. I used a MacBook Pro, with IP address 192.168.1.134, and my Raspberry Pi was on 192.168.1.182, so I typed:

ssh pi@192.168.1.182

on my Mac terminal window, followed by the password for user pi on my Raspberry Pi. This connected me, and since I had saved the theremin.py script in my pi home directory I then just had to type this to get the script running (remember it was set to be executable):

./theremin.py --sp 192.168.1.134

This started the script running, and it reported in the terminal window:

Sonic Pi on ip 192.168.1.134

Then if I started moving my hands back and forward near the ultrasound sensors I could see streams of numbers beneath, showing that the readings were being generated, e.g.

r1: 6.69 r2: 6.45
r1: 6.69 r2: 6.45
r1: 9.28 r2: 6.45
r1: 9.28 r2: 6.45
r1: 9.26 r2: 6.49
r1: 9.26 r2: 6.53
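
Incidentally, before loading the full Sonic Pi program described below, you can check on the Sonic Pi side that the messages are arriving, using a minimal test loop like this (a quick sketch of my own, not part of the project code; it assumes incoming OSC has already been enabled in Sonic Pi’s IO preferences, as described later):

live_loop :osc_check do
  b = sync "/osc/play_this" #wait for each incoming message
  puts b #prints the [r1, r2] list
end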

That completes the setup for the Raspberry Pi. Moving on to Sonic Pi, this should be started on your second computer (in my case my MacBook Pro). I was using version 3.1, the latest release on this platform. In an empty buffer (selected using the tabs at the bottom of the main screen) you should add the following program, either by typing it in or by downloading it from the gist link at the end of this post.

#Theremin plus, controlling pitch and ixi_techno phase by Robin Newman May 2018
#I control a continuously playing note (sustain 10000)
use_debug false

define :putsPretty do |n,p|
  num=(n*10**p).round/(10**p).to_f
  return num
end

define :scalev do |v,l,h| #scale v (in range 0->100) to the range l->h
  return l+v.to_f*(h-l)/100
end


with_fx :reverb,room: 0.8,mix: 0.8 do
  with_fx :ixi_techno,phase: 4,phase_offset: 1,mix: 0.8 do |p|
    set :p, p
    use_synth :subpulse
    k=play octs(0,2),sustain: 10000,amp: 0
    set :k,k
    live_loop :theremin do
      use_real_time
      b = sync "/osc/play_this"
      r1=scalev(b[0],30,100)
      r2=scalev(b[1],0.1,1)
      puts putsPretty(r1,2),putsPretty(r2,2)
      if r1  <  60 then #adjust note pitch, and restore volume to 0.7
        control get(:k),note: octs(r1+12,2),note_slide: 0.06 ,amp: 0.7,amp_slide: 0.2
      else #set output vol to 0
        control get(:k),amp: 0,amp_slide: 0.2
      end
      if r2 < 0.8 then #adjust phase modulation rate, and restore mix to 0.8
        control get(:p),phase: r2,phase_slide: 0.06,mix: 0.8,mix_slide: 0.2
      else #switch off phase modulation by setting mix to zero
        control get(:p),mix: 0,mix_slide: 0.2
      end
    end
  end
end

Looking at this you will notice that I use the effect :ixi_techno, and further down the program you will see that its phase: value (the rate at which the effect modulates notes sounding inside it) is controlled by the value of r2, the second reading sent from the two ultrasound sensors. In my initial description of the “real” theremin I said that the two available controls adjusted the pitch or frequency of the notes played and their volume. Initially I tried adjusting the volume of the notes, and it can be done, but I didn’t find it very effective. Instead I opted to add this optional modulation to the playing note. The nice thing about Sonic Pi is that it is fairly easy to adjust any of a variety of characteristics using the second number, and I hope to try out some others, and may even find a more satisfying result for volume adjustment in the end.
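
The technique of controlling a running fx can be seen in isolation in the little sketch below. This is my own minimal illustration, not part of the project code: the |p| reference returned by with_fx is used to alter the phase: opt while a note is sounding.

with_fx :ixi_techno, phase: 2, mix: 0.8 do |p|
  play 60, sustain: 6 #start a note inside the fx
  sleep 3
  control p, phase: 0.25, phase_slide: 1 #speed up the modulation mid-note
  sleep 3
end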

The main technique I use in the program is to start a very long note playing with zero volume, and then to use Sonic Pi’s ability to control the parameters of the note while it is playing. By adding slide values to the parameters which change, I can get a smoothly varying pitch for the note, unlike the simpler theremin in the raspberrypi.org example. There are two functions defined at the start of the program. The first, putsPretty, is used to print the numbers to just 2 decimal places. The second, scalev, is used to scale the numbers (which arrive in the range 0->100) to whatever I need in the program: r1 is adjusted to the range 30->100 and r2 to the range 0.1->1 (for example, scalev(50, 30, 100) gives 65). Some reverb is applied to all of the output, and then the :ixi_techno effect is applied. The |p| is a variable which gets a reference to the running effect. I store this in the time state as :p so that it can be referenced and used to control the parameters of the effect further down the program. Similarly the long note is started and a reference to it is stored in the variable k in the line:

k=play octs(0,2),sustain: 10000,amp: 0

This is then subsequently stored in the time state as :k so that it can be retrieved and used to alter the pitch of the note according to the value of r1. Notice the use of the octs function to play two notes an octave apart. Initially the pitch is 0 and the volume is 0 as well, but these are altered by the control commands later on.
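
Stripped of everything else, the long-note-plus-control technique looks like this (a minimal sketch of my own for illustration, using a single pitch rather than octs):

use_synth :subpulse
k = play 48, sustain: 100, amp: 0 #start a long, silent note
sleep 1
control k, note: 60, note_slide: 0.5, amp: 0.7, amp_slide: 0.2 #glide up to note 60 and fade in
sleep 2
control k, amp: 0, amp_slide: 0.5 #fade back out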

The meat of the program takes place in the live_loop :theremin. This waits for incoming OSC messages addressed to /play_this in the sending program (Sonic Pi adds the initial /osc to show the source). Each message contains the two numbers r1 and r2 in a list. They are retrieved, printed on the screen using the putsPretty function and then processed to control the note. A cutoff value of 60 is taken for the scaled r1 (corresponding to a raw sensor reading of about 43cm with the scaling above). For values above this the volume of the note is set to 0 (with a slide time from any previous value of 0.2), and no note sounds. For values below 60 the pitch of the note is adjusted, and the volume restored to 0.7. Again slide times are used: 0.06 for the note adjustment (the time between successive numbers arriving) and a more sedate 0.2 for the volume adjustment. For (scaled) r2 values < 0.8 the phase rate of the modulation is adjusted, and the mix value is set to 0.8. For values > 0.8 the mix value is set to 0, effectively disabling the effect. Again suitable slide times are chosen.

The one final “gotcha” to be aware of is that for Sonic Pi to receive OSC messages from an external source you need to enable this in the Preferences IO tab (see below).

When you do this, if the theremin.py script is still running, you should see lots of OSC messages in the cues section of the Sonic Pi screen, like this:

If you now run the Sonic Pi program, you should be able to control what you hear by moving your hands in front of the two sensors.

I have made a film of the project in action, which you can see here.
Details of the circuit can be seen on the raspberrypi.org site here.
You can download the software here.

Touch sensitive input for Sonic Pi

Development of a touch sensitive keyboard for use with Sonic Pi, using Adafruit’s MPR121 12-channel touch sensor board, with OSC support added to the software.

Full article, with a link to the software, is here.

Video of project in action is here.

Sonic Pi 3 Player /Recorder version 1.2

I have now released version 1.2 of my Player/Recorder for Sonic Pi 3, utilising a TouchOSC interface. This has expanded considerably since version 1, but as a result is now not suitable for use on a Pi3. Also, as the program has increased in length, it now needs to be run using the run_file command from a text (.rb) file.

Full details of the interface and usage are contained in a page here including a link to the code.

Fanfare for St Cecilia’s Hall played by Sonic Pi

Last week my wife and I had the privilege of attending the official opening ceremony of St Cecilia’s Hall in Edinburgh. This music hall, the oldest in Scotland, was purchased by Edinburgh University in 1959, and my father, who was then Professor of Music at the University, was instrumental (forgive the pun) in negotiating the housing of a collection of early musical keyboard instruments assembled by Raymond Russell. The Hall was opened in 1968 following refurbishment. Since then the number of instruments in the Collection has increased, most notably by the acquisition of the Rodger Mirrey Collection of Early Keyboard Instruments. In January 2004 the decision was taken to add the Reid Concert Hall Museum of Instruments (the John Donaldson Collection of Musical Instruments), also belonging to the University, and a major refurbishment programme was started to give a fitting new home for all of the instruments. This took three years to complete, and the Hall was closed for over two years whilst the changes took place. It was opened again for use earlier this year, and last week was officially opened in a ceremony by H.R.H. The Princess Royal.

As part of that ceremony a musical Fanfare, composed by Andrew Blair, a Masters student in Composition at the Reid School of Music at the University, was performed. He graciously gave me permission to take the score and transcribe the music for Sonic Pi to play. I typed the notes in by hand, as printed for two Bb trumpets, and applied a transposition of -2 semitones to give the correct concert pitch, first producing a version using the built-in :tri synth in Sonic Pi. With the addition of some reverb this produced a very acceptable performance. However, in an effort to make it sound more authentic, I then tried using the trumpet samples from the Sonatina Symphonic Orchestra, for which I had previously written code that I have used extensively to produce sample based instruments for Sonic Pi. In this case, though, I found that the samples didn’t work very well, particularly on some of the lower notes. I did some experimentation using the music program MuseScore 2. First I modified the Sonic Pi code to use midi commands instead of play commands, and fed the output into MuseScore 2, setting up two instrument tracks with trumpet instruments. This gave a much better rendition, but was rather unwieldy.

I decided to generate my own trumpet samples from MuseScore 2. To do this, I chose the same notes as for the samples in the Sonatina Symphonic Library, namely
a#3,a#4,a#5,c#3,c#4,c#5,c#6,e3,e4,e5,e6,g3,g4,g5 (all concert pitch)
From these, all other notes are produced by playing the samples more slowly or faster. Using these pitches, I could use the samples with my existing code for handling the Sonatina Symphonic samples. I set up a “score” to play single long notes for each of these pitches, with rests in between, and then played them in MuseScore 2, recording the output in Audacity. I then diced them up in Audacity, producing individual .wav files for each of the pitches, and normalising them before saving the files. I tried the resulting samples with my Sonatina code and it worked perfectly.
So I produced the second version, using these samples, which sounds very realistic.
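
The underlying Sonic Pi mechanism for this is the rpitch: opt on sample, which repitches a sample by changing its playback rate. A minimal sketch of my own (the sample path here is hypothetical; substitute one of your own samples):

sample "~/samples/trumpet_e4.wav", rpitch: 3 #3 semitones higher (played faster)
sleep 2
sample "~/samples/trumpet_e4.wav", rpitch: -2 #2 semitones lower (played slower)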

You can download the code for both versions from my gist site here

You can hear what they sound like on soundcloud here

The opening of the Hall was a very moving experience for me, and it was enhanced by Andrew’s brass fanfare, and by other music played on a virginal from the collection, made by Stephen Keene in London in 1668.

Two of the early keyboard instruments in the collection (plus my wife in the distance!)

If you like musical instruments, and are visiting Edinburgh, I would encourage you to visit the Hall and the Museum. Their website is http://www.stcecilias.ed.ac.uk/. There is also an app which lets you explore the collection of instruments: Apple version / Android version.

You can read about the history of the Hall on this site (it also contains pictures from the first refurbishment of the Hall in the 1960s).

LightshowPi drives a sensehat using Sonic Pi

Two years ago I produced a video showing the then-current Sonic Pi 2.7 driving the LEDs on a sensehat using hacked LightshowPi software. I didn’t write up the software, which was quite complex, but merely posted a video of the system operating on YouTube. Recently I was asked on Twitter if I could share the code, as some students in California wanted to try it out. I managed to locate the SD card containing the software, fired it up on a Raspberry Pi2 using the then-prevalent Sonic Pi and operating system, and got it working. I then decided to see if I could update it to run on the latest OS on a Pi3 and with a later version of Sonic Pi. I have now managed to do this and am publishing the code so that others can try it out.

The system also requires an external USB audio card, to give an audio input that can be used by LightshowPi. I used a Sabrent USB module which I obtained on Amazon (UK), and which is also available in the States here. Three other pieces of hardware are required (apart from a sensehat!): a two way stereo audio splitter lead (3.5mm jack to two 3.5mm sockets), a stereo 3.5mm jack to jack lead, and a short USB extender lead, as the Sabrent audio card is a little too fat to plug into the Pi directly.

System overview
Plug the sensehat into a Pi3 running Raspbian Stretch, and set the default audio to the built-in bcm2835 ALSA card using Preferences => Audio Device Settings on the Pi main menu. Click Select Controls if you can’t see the Playback slider, and make sure it is active and at maximum, before clicking OK and starting Sonic Pi. Use a “punchy” piece like my sparkling tom-toms for best effect. The audio output from this is fed to a two way splitter plugged into the 3.5mm audio out socket on the Pi. One of its outputs goes to an external audio amp so you can hear the sound; the other is fed to the microphone input on the Sabrent USB audio card, which supplies an audio feed to the modified LightshowPi code. You can see the wiring below: the dual splitter plugged into the 3.5mm audio out socket, with a red lead going to an audio amplifier and speakers (out of the photo), and the white lead fed back into the microphone input of the Sabrent USB audio card, which is plugged into the Pi via a short extender lead, as it is rather fat to plug in directly. Other leads are to the HDMI monitor, the power supply, and to a USB keyboard and USB mouse (all out of the picture).


Install the current version of lightshowpi using the instructions at http://lightshowpi.org/download-and-install/

Download my modified code for LightshowPi from here and copy the file to the top level of the lightshowpi folder. Right-click the file lsp_sensehat.tar.gz (in the GUI), select Extract Here, and two folders will be created: py_sensehat and sensehat_config. These contain the modified files set up to utilise the sensehat rather than individual LEDs connected to the GPIO pins. Essentially this is based on an earlier version of LightshowPi; the reason for installing the latest version is to install the various background files and utilities used by the LightshowPi files. You can see the contents of the lightshowpi folder, with the two additional folders from the lsp_sensehat.tar.gz file, below.
Before starting to use the system, you should check that the microphone input sensitivity is set up. To do this, select Audio Device Settings from Preferences on the Raspberry Pi main menu. Select the USB Audio Device (Alsa mixer) from the Sound Card list at the top, click Select Controls and select Speaker and Microphone from the list (NOT Microphone Capture). Click Close, adjust the Microphone slider to maximum, and make sure it is enabled in the box below the slider. You can disable the Speaker output in the left hand box below the two Speaker sliders, as we are not using audio output via the USB card. Then close the Audio Device Settings window. NB make sure that bcm2835 ALSA is still marked as default, as shown in the FIRST Audio Device Settings picture above. Below you can see the settings for the USB audio card.
To start LightshowPi running, do the following from a terminal:

cd ~
cd lightshowpi/py_sensehat
sudo python synchronized_lights.py

You should see the message Running in audio-in mode, use Ctrl+C to stop on the screen. All being well, if you start Sonic Pi playing, then after a pause of a few seconds, as audio samples are built up by LightshowPi, the sensehat LEDs should start flashing in response to the music. You need quite loud music for it to work.

How does it all work?
I’m not going to go into great detail, as it would take too long, but I can give one or two pointers. LightshowPi is a system which can take input from an audio source or from a music file (e.g. .wav, .mp3) and analyse it using fast Fourier transforms to produce a series of outputs for different ranges of the audio spectrum. These can then be channelled to control LEDs connected to the GPIO pins on the Raspberry Pi. What I have done is to hack the code that channels the outputs to the GPIO pins, and instead send them to a python program testled.py within the py_sensehat folder, which contains a series of functions to control the LEDs on the sensehat. In the program I use columns of LEDs coloured as a rainbow, which can be switched on or off using turn_on_light(num) and turn_off_light(num). These two commands are used in the hacked hardware_controller.py program in the py_sensehat folder, replacing the calls to wiringpi, which are commented out.

The configuration settings for using the sensehat are contained in the sensehat_config folder. In particular the overrides.cfg file in that folder is worth looking at. As supplied, the columns on the sensehat are triggered by different frequency ranges, starting from low at one side of the sensehat up to high at the other. In the last line of the overrides.cfg file there is a commented out line which allows a different channel mapping, giving a symmetrical output which you may find preferable. To try it out, uncomment the line and restart the synchronized_lights program. Notice also the details of the audio card, which has the internal name ‘Device’, as you can see if you type aplay -l in a terminal.
The only update I had to make to my hacked files to work with the latest LightshowPi was to replace a reference to import wiringpi2 as wiringpi with import wiringpi as wiringpi, as this module appears to have changed its name between versions, and it is installed by the latest LightshowPi.
If I had more time, further work on integrating the sensehat into LightshowPi could be done, so that it would fit into the latest and any subsequent version, but I leave that to any other interested party to take on.

Finally, if you like this project you may also be interested in a project to flash the lights on the PiHat Christmas Tree which I have recently described here

Revised simpler polyphonic gated synth using Sonic Pi 3

A further version, simplepolysynth2.rb, has been added to the code link for this article, for use with keyboards with separate note_on/note_off midi commands rather than those using note_on with velocity 0 for note_off (April 2018).

Back in August I developed quite a complex program to play 8-note polyphony using Sonic Pi synths. The program worked well, but it was too demanding on resources to work on a Raspberry Pi. Following on from a thread on the Sonic Pi forum at in-thread.sonic-pi.net, I decided to look at the problem again, and this time I ended up with a simpler, shorter program capable of playing multi-note polyphony using a midi controller to drive Sonic Pi 3, which also worked OK on a Raspberry Pi 3 as well as on more powerful machines like a Mac.

First of all, why would you want such a program? In a standard DAW, midi note inputs can be played, starting a note sounding when a note_on signal is received, and keeping the note going until a corresponding note_off (or, more usually, a note_on with zero velocity value) is received. In Sonic Pi, synths work slightly differently: you need to specify the pitch to play, but also the duration of the note as it starts. This is no problem if all the notes have the same duration, or if you know the duration of each note when you start it playing, but it can’t handle keyboard input where the player changes the length of the notes at will, depending on how long the key is held down. The DAW uses what is called a gated synth, which can be switched on and off at will by the start and stop signals. What this program does is to simulate that by starting a “long” note of the required pitch playing when the midi note_on signal is received, and then waiting until the note_off (or note_on with zero velocity) signal is received, when the note is killed, achieving the same effect.
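
A stripped-down, single-note version of this gating idea (my own sketch, ignoring the polyphony and pitchbend handled by the full program below) might look like:

live_loop :gate do
  use_real_time
  note, vel = sync "/midi/*/*/*/note_on" #change to match your controller
  if vel > 0
    set :n, (play note, sustain: 100, amp: vel / 127.0) #start a "long" note and keep its reference
  else
    kill get(:n) if get(:n) #note_on with zero velocity: kill the playing note
  end
end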

If you only have one note at a time playing this is fairly easy to do, but if you have more than one note (polyphony) then you have to keep track of the notes that are playing and choose the right one to kill. Another feature I wanted to include was the ability to apply pitchbend to any playing note, which means that you have to be able to control each note while it is playing.

The final program to do all of this is shown below. To a certain extent it breaks the rules of how to pass data between running live_loops, but it does seem to work pretty well.

#polyphonic midi input program with sustained notes
#experimental program by Robin Newman, November 2017
#pitchbend can be applied to notes at any time while they are sounding
use_debug false

set :synth,:tb303 #initial value
set :pb,0 #pitchbend initial value
kill_list=[] #list to contain notes to be killed
on_notes=[] #list of notes currently playing
ns=[] #array to store note playing references
nv=[0]*128 #array to store state of note for a particular pitch 1=on, 0=off

128.times do |i|
  ns[i]=("n"+i.to_s).to_sym #set up array of symbols :n0 ...:n127
end
#puts ns #for testing

define :sv do |sym| #extract numeric value associated with symbol eg :n64 => 64
  return sym.to_s[1..-1].to_i
end
#puts sv(ns[64]) #for testing

live_loop :choose_synth do
  b= sync "/midi/*/*/*/control_change" #use wild cards to work with any controller
  if b[0]==10 #adjust control number to suit your controller
    sc=(b[1].to_f/127*3 ).to_i
    set :synth,[:tri,:saw,:tb303,:fm][sc] #can change synth list if you wish
    puts "Synth #{get(:synth)} selected"
  end
end

live_loop :pb do #get current pitchbend value adjusted in range -12 to +12 (octave)
  b = sync "/midi/*/*/*/pitch_bend" #change to match your controller
  set :pb,(b[0]-8192).to_f/8192*12
end
with_fx :reverb,room: 0.8,mix: 0.6 do #add some reverb
  
  live_loop :midi_note_on do #this loop starts 100 second notes for specified pitches and stores reference
    use_real_time
    note, on = sync "/midi/*/*/*/note_on" #change to match your controller
    if on >0
      if nv[note]==0 #check if new start for the note
        puts "setting note #{note} on"
        vn=on.to_f/127
        nv[note]=1 #mark note as started for this pitch
        use_synth get(:synth)
        x = play note+get(:pb),attack: 0.01, sustain: 100,amp: vn #start playing note
        set ns[note],x #store reference to note in ns array
        on_notes.push [note,vn] #add note to list of notes playing
      end
    else
      if nv[note]==1 #check if this pitch is on
        nv[note]=0 #set this pitch off
        kill_list.push note #add note to list of notes to kill
      end
    end
  end
  
  live_loop :processnote,auto_cue: false,delay: 0.4 do # this applies pitchbend if any to note as it plays
    #delayed start helps reduce timing errors
    use_real_time
    if on_notes.length > 0 #check if any notes on
      k=on_notes.pop #get next note from "on" list
      puts "processing note #{k[0]}"
      in_thread do #start a thread to apply pitchbend to the note every 0.05 seconds
        v=get(ns[k[0]]) #retrieve control value for the note
        while nv[k[0]]==1 #while the note is still marked as on
          control v,note: k[0]+get(:pb),note_slide: 0.05,amp: k[1]
          sleep 0.05
        end
        #belt and braces kill here as well as in notekill liveloop: catches any that miss
        control v,amp: 0,amp_slide: 0.02 #fade note out in 0.02 seconds
        sleep 0.02
        puts "backup kill note #{k[0]}"
        kill v #kill the note referred to in ns array
      end
    end
    sleep 0.08 #so that the loop sleeps if no notes on
  end
  
  live_loop :notekill,auto_cue: false,delay: 0.3 do # this loop kills released notes
    #delayed start helps reduce timing errors
    use_real_time
    while kill_list.length > 0 #check if there are notes to be killed
      k=kill_list.pop #get next note to kill
      puts "killing note #{k}"
      v=get(ns[k]) #retrieve reference to the note
      control v,amp: 0,amp_slide: 0.02 #fade note out in 0.02 seconds
      sleep 0.02
      kill v #kill the note referred to in ns array
    end
    sleep 0.01 #so that the loop sleeps if no notes to be killed
  end
end #reverb

The program starts by initialising various variables.

:synth is set to contain the current synth to use, which can optionally be changed if your input controller has a suitable control. In my case I used an M-Audio Oxygen8 v2 keyboard with a series of rotary potentiometer inputs, and used one of these to select the synth.

:pb is set to contain the current pitchbend offset, scaled in the range -12 to +12, i.e. up or down an octave. Later in the program it is altered by the output of the pitchbend wheel on the keyboard.

Four lists hold data used to select notes to be operated upon. kill_list contains a list of notes which have stopped being played and need to be killed off. on_notes contains a list of notes which are currently playing, together with their volume settings. ns is a list which contains references to the control values for notes which are playing, indexed by the note value; it has 128 entries, one for each possible midi note 0-127. Finally nv is a list which contains a 1 for a note which is playing and a 0 for one which is not. This list also has 128 values, corresponding to midi notes 0-127.

The ns list is filled with the symbols :n0, :n1, ..., :n127. These are used with a set command to store the control value when a particular note is played. A corresponding function sv converts a symbol back to the numeric midi value with which it is associated; thus sv(:n64) => 64.

The first live_loop :choose_synth detects a control change on the selected control number (10 for my rotary control) and reads the rotary position in the range 0-127. It converts this to an integer in the range 0-3, which is then used to select a synth from a list and store it using a set :synth command.
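
For example, the conversion maps raw rotary values to synth indices like this (a quick check of the arithmetic you can run in an empty buffer):

[0, 50, 90, 127].each do |v|
  puts (v.to_f / 127 * 3).to_i #prints 0, 1, 2, 3 in turn
end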

live_loop :pb is triggered by changes in the pitchbend input; it scales the input (0-16383) to the range -12 to +12 and stores the result using set :pb.
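
A few sample raw values show the scaling (8192 is the wheel’s centre position):

[0, 4096, 8192, 16383].each do |raw|
  puts (raw - 8192).to_f / 8192 * 12 #prints -12.0, -6.0, 0.0, then just under 12
end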

The following live_loops are responsible for producing sounds, and they are wrapped in a with_fx :reverb command. The first live_loop :midi_note_on is set to use real time for a fast response, and waits to detect a note_on signal, storing the note value concerned in the variable note. The second parameter, on, stores the associated velocity. This will be > 0 if the note is starting and 0 if it is finishing. If on is > 0, the loop tests whether the note is already playing by checking the entry in nv[note]: this will be 0 if a note of that pitch is not playing and 1 if it is. If it is not already playing, it prints a message saying the note is starting, and calculates the note volume by scaling on to the range 0->1, storing the result in vn. It sets nv[note] to 1 to signify the note is playing, then retrieves and sets the current synth to use, and starts a “long” note of duration 100 playing at the designated pitch (modified by any pb value). So that the note can be controlled, the reference x returned by play is stored in the time state, pointed to by the appropriate symbol in the ns list. The note value is added to the list of playing notes, on_notes, together with its volume vn.
So that this live_loop can continue as quickly as possible, further action on the note is handled by the separate live_loops :processnote and :notekill.
Finally, the :midi_note_on live_loop also deals with the case where on is zero, i.e. a key on the keyboard has been released. In this case it checks whether nv[note] is 1, i.e. the note is playing, and if so sets nv[note] to 0 to signify it is being stopped, and adds the note value to the kill_list. Again it leaves the kill process to a different live_loop, so that the :midi_note_on loop is ready to process the next keyboard input as soon as possible.

live_loop :processnote is used to control the note while it is playing, by starting a thread to adjust the pitch of the note if the pitchbend value in :pb is altered. Again this live_loop works in real time to give a rapid response. First it checks whether there are any playing notes by looking at the length of the on_notes list. If there are, it extracts a note value and its volume into a list variable k, and prints a message on the screen to signify this. In the thread it adjusts the pitch of the note as necessary while the value of nv[k[0]] is still 1, i.e. the note is still playing. When the value of nv[k[0]] changes to 0, it moves on to fade the note to 0 volume and then kill it. This is belt and braces, as the note should be killed more quickly by the :notekill live_loop. However I found that occasionally this could be missed, and this second backup ensures that the note is killed. Sonic Pi objects with a message saying it can’t kill a sound that has already been killed, but otherwise it seems happy. If you look at the output log you will see that occasionally it is the :processnote loop that kills the note rather than the :notekill one.

live_loop :notekill is designed to kill a note as quickly as possible after it has been released on the midi keyboard, i.e. as soon as possible after a note_on with a 0 velocity parameter has been detected. Again it works in real time to make it as responsive as possible. It looks at the length of the kill_list and, if there are entries there, retrieves the next one. It puts a message on the screen, then retrieves the control value for the given note, fades the note to zero volume to avoid clicks, and then kills it.

You will notice that there are start delays on the :processnote and :notekill live_loops. This was found to reduce the loading when running on a Raspberry Pi, and therefore to reduce the incidence of timing error warnings.

One final bit of “Ruby” that I have used is the syntax #{variablename} to embed a variable value in a string printed with the puts command. It gives a slightly cleaner output.
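
For example:

n = 64
puts "killing note #{n}" #prints killing note 64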

An accompanying video is here

The code can be downloaded here