Fanfare for St Cecilia’s Hall played by Sonic Pi

Last week my wife and I had the privilege of attending the official opening ceremony of St Cecilia’s Hall in Edinburgh. This music hall, the oldest in Scotland, was purchased by Edinburgh University in 1959, and my father, who was then Professor of Music at the University, was instrumental (forgive the pun) in negotiating the housing of the collection of early musical keyboard instruments collected by Raymond Russell. The Hall was opened in 1968 following refurbishment. Since then the number of instruments in the collection has increased, most notably with the acquisition of the Rodger Mirrey Collection of Early Keyboard Instruments. In January 2004 the decision was taken to add the Reid Concert Hall Museum of Instruments (the John Donaldson Collection of Musical Instruments), also belonging to the University, and a major refurbishment programme was started to give a fitting new home for all of the instruments. This took three years to complete, and the Hall was closed for over two years whilst the changes took place. It reopened for use earlier this year, and last week it was officially opened in a ceremony by H.R.H. The Princess Royal.

As part of that ceremony a musical fanfare composed by Andrew Blair, a Masters student in Composition at the Reid School of Music in the University, was performed. He graciously gave me permission to take the score and transcribe the music for Sonic Pi to play. I typed in the notes by hand, as printed for two Bb trumpets, applied a transposition of -2 semitones to give the correct concert pitch, and first produced a version using the built-in :tri synth in Sonic Pi. With the addition of some reverb this produced a very acceptable performance. However, in an effort to make it sound more authentic, I then tried using the trumpet samples from the Sonatina Symphonic Orchestra, for which I had previously written code that I have used extensively to produce sample-based instruments for Sonic Pi. In this case, however, I found that the samples didn’t work very well, particularly on some of the lower notes. I did some experimentation using the music program MuseScore 2: I modified the Sonic Pi code to use midi commands instead of play commands, and fed the output into MuseScore 2, setting up two instrument tracks with trumpet instruments. This gave a much better rendition, but was rather unwieldy.
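The transposition step is simple arithmetic on the midi note values. As a sketch (plain Ruby for illustration; in Sonic Pi itself the same effect can be had with use_transpose -2):

```ruby
# Bb trumpet parts are written two semitones above concert pitch,
# so each written note is shifted down 2 semitones for playback.
TRANSPOSE = -2

def concert_pitch(written_midi_note)
  written_midi_note + TRANSPOSE
end

puts concert_pitch(60) # written C4 sounds as Bb3 (midi 58)
```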

I decided to generate my own trumpet samples from MuseScore 2. To do this, I chose the same notes as those used for the samples in the Sonatina Symphonic Library, namely
a#3,a#4,a#5,c#3,c#4,c#5,c#6,e3,e4,e5,e6,g3,g4,g5 (all concert pitch)
From these, all other notes are produced by playing the samples faster or more slowly. Using these pitches, I could use the new samples with my existing code for handling the Sonatina Symphonic samples. I set up a “score” to play a single long note for each of these pitches, with rests in between, played them in MuseScore 2, and recorded the output in Audacity. I then diced the recording up in Audacity, producing an individual .wav file for each pitch, and normalised them before saving the files. I tried the resulting samples with my Sonatina code and they worked perfectly.
So I produced the second version using these samples, which sounds very realistic.
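The repitching works because playing a sample at a rate of 2**(n/12.0) shifts its pitch by n semitones. A minimal sketch of the idea (my own illustration, not the actual Sonatina player code), using the midi numbers of the sampled pitches listed above:

```ruby
# Midi numbers for the sampled pitches a#3..g5 listed above
SAMPLE_NOTES = [58, 70, 82, 49, 61, 73, 85, 52, 64, 76, 88, 55, 67, 79]

# Pick the closest sampled pitch to the note we want
def nearest_sample(target)
  SAMPLE_NOTES.min_by { |n| (n - target).abs }
end

# Rate needed to bend that sample up or down to the target pitch
def playback_rate(target)
  2**((target - nearest_sample(target)) / 12.0)
end

puts nearest_sample(59)         # => 58 (the a#3 sample)
puts playback_rate(59).round(4) # one semitone up => 1.0595
```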

You can download the code for both versions from my gist site here.

You can hear what they sound like on SoundCloud here.

The opening of the Hall was a very moving experience for me, and it was enhanced by Andrew’s brass fanfare, and by other music played on a virginal from the collection, made by Stephen Keene in London in 1668.

Two of the early keyboard instruments in the collection (plus my wife in the distance!)

If you like musical instruments and are visiting Edinburgh, I would encourage you to visit the Hall and the Museum. Their website is at http://www.stcecilias.ed.ac.uk/ and there is also an app which lets you explore the collection of instruments: Apple version, Android version.

You can read about the history of the hall on this site (It also contains pictures from the first refurbishment of the Hall in the 1960s).


LightshowPi drives a sensehat using Sonic Pi

Two years ago I produced a video showing the then-current Sonic Pi 2.7 driving the LEDs on a sense-hat using hacked lightshowPi software. I didn’t write up the software, which was quite complex, but merely posted a video of the system operating on YouTube. Recently I was asked on Twitter if I could share the code, as some students in California wanted to try it out. I managed to locate the SD card containing the software, fired it up on a Raspberry Pi 2 using the versions of Sonic Pi and the operating system current at the time, and got it working. I then decided to see if I could update it to run on the latest OS on a Pi 3 and with a later version of Sonic Pi. I have now managed to do this and am publishing the code so that others can try it out.

The system also requires an external USB audio card, to give an audio input that lightshowPi can use. I used a Sabrent USB module which I obtained on Amazon (UK), and which is also available in the States here. Three other pieces of hardware are required (apart from a sensehat!): a two-way stereo audio splitter lead (3.5mm jack to two 3.5mm sockets), a stereo 3.5mm jack-to-jack lead, and a short USB extender lead, as the Sabrent audio card is a little fat to plug into the Pi directly.

System overview
Plug the sensehat into a Pi3 running Raspbian Stretch, and set the default audio to the built-in bcm2835 ALSA card using Preferences => Audio Device Settings on the Pi main menu. Click Select Controls if you can’t see the Playback slider, make sure it is active and at maximum, then click OK and start Sonic Pi running. Use a “punchy” piece like my sparkling tom-toms for best effect. The audio output from this is fed to a two-way splitter plugged into the 3.5mm audio-out socket on the Pi. One output goes to an external audio amp so you can hear the sound; the other is fed to the microphone input on the Sabrent USB audio card, which supplies the audio feed to the modified lightshowpi code. You can see the wiring below: the dual splitter plugged into the 3.5mm audio out socket, with a red lead going to an audio amplifier and speakers (out of the photo), and the white lead fed back into the microphone input of the Sabrent USB audio card, which is plugged into the Pi via a short extender lead as it is rather fat to plug in directly. The other leads go to the HDMI monitor, the power supply, and a USB keyboard and mouse (all out of the picture).


Install the current version of lightshowpi using the instructions at http://lightshowpi.org/download-and-install/

Download my modified code for lightshowpi from here and copy the file to the top level of the lightshowpi folder. Right-click the file lsp_sensehat.tar.gz (in the GUI) and select Extract Here; two folders will be created, py_sensehat and sensehat_config. These contain the modified files set up to utilise the sense hat rather than individual LEDs connected to the gpio pins. Essentially this is based on an earlier version of lightshowpi; the reason for installing the latest version is to install the various background files and utilities used by the lightshowpi files. You can see the contents of the lightshowpi folder with the two additional folders from the lsp_sensehat.tar.gz file below.
Before starting to use the system, you should check that the microphone input sensitivity is set up. To do this, select Audio Device Settings from Preferences on the Raspberry Pi main menu. Select the USB Audio Device (Alsa mixer) from the Sound Card list at the top, click Select Controls, and select Speaker and Microphone from the list (NOT Microphone Capture). Click Close, adjust the Microphone slider to maximum, and make sure it is enabled in the box below the slider. You can disable the Speaker output in the left-hand box below the two Speaker sliders, as we are not using audio output via the USB card. Then close the Audio Device Settings window. NB make sure that bcm2835 ALSA is still marked as default, as shown in the FIRST Audio Device Settings picture above. Below you can see the settings for the USB Audio card.
To start lsp running do the following from a terminal:

cd ~/lightshowpi/py_sensehat
sudo python synchronized_lights.py

You should see the message Running in audio-in mode, use Ctrl+C to stop on the screen. All being well, if you start Sonic Pi playing, then after a pause of a few seconds while audio samples are built up by lightshowpi, the sensehat LEDs should start flashing in response to the music. You need quite loud music for it to work.

How does it all work?
I’m not going to go into great detail, as it would take too long, but I can give one or two pointers. Lightshowpi is a system which can take its input either live from an audio device or from a music file (eg .wav or .mp3) and analyse it using fast Fourier transforms to produce a series of outputs for different ranges of the audio spectrum. These can then be channelled to control LEDs connected to the gpio pins on the Raspberry Pi. What I have done is to hack the code that channels the outputs to the gpio pins, and instead send them to a python program testled.py within the py_sensehat folder, which contains a series of functions that control LEDs on the sense hat. In the program I use columns of LEDs coloured with a rainbow, which can be switched on or off using turn_on_light(num) and turn_off_light(num). These two commands are used in the hacked hardware_controller.py program in the py_sensehat folder, replacing the calls to wiringpi, which are commented out.

The configuration settings for using the sensehat are contained in the sensehat_config folder. In particular, the overrides.cfg file in that folder is worth looking at. As supplied, the columns in the sensehat are triggered by different frequency ranges, starting from low at one side of the sensehat up to high at the other. In the last line of the overrides.cfg file there is a commented-out line which allows a different channel mapping, giving a symmetrical output which you may find preferable. To try it out, uncomment the line and restart the synchronized_lights program. Notice also the details of the audio card, which has the internal name ‘Device’, which you can see if you type aplay -l in a terminal.
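The idea behind the symmetrical mapping can be illustrated in a few lines (this is my own illustration in Ruby for brevity, not lightshowpi’s actual code, which takes its mapping from overrides.cfg): with 8 columns and 4 frequency channels, each channel drives a mirrored pair of columns.

```ruby
# Illustration only: map 8 sense-hat columns onto 4 FFT channels so
# each channel drives a mirrored pair of columns, giving a
# symmetrical display instead of a low-to-high sweep.
def symmetric_mapping(n_columns)
  half = n_columns / 2
  (0...n_columns).map { |col| col < half ? half - 1 - col : col - half }
end

puts symmetric_mapping(8).inspect # => [3, 2, 1, 0, 0, 1, 2, 3]
```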
The only update I had to make for my hacked files to work with the latest lightshowpi was to replace a reference to import wiringpi2 as wiringpi with import wiringpi as wiringpi, as this module appears to have changed its name between versions; it is installed by the latest lightshowpi.
If I had more time, further work could be done on integrating the sensehat with lightshowpi so that it would fit into the latest and any subsequent versions, but I leave that to any other interested party to take on.

Finally, if you like this project you may also be interested in a project to flash the lights on the PiHat Christmas Tree, which I have recently described here.

Revised simpler polyphonic gated synth using Sonic Pi 3

Back in August I developed quite a complex program to play 8-note polyphony using Sonic Pi synths. The program worked well, but it was too demanding on resources to work on a Raspberry Pi. Following on from a thread on the Sonic Pi forum at in-thread.sonic-pi.net, I decided to look at the problem again, and this time I ended up with a simpler, shorter program capable of playing multi-note polyphony using a midi controller to drive Sonic Pi 3, which worked well on a Raspberry Pi 3 as well as on more powerful machines like a Mac.

First of all, why would you want such a program? In a standard DAW, midi note inputs can be played by starting a note sounding when a note_on signal is received and keeping it going until a corresponding note_off (or, more usually, a note_on with a zero velocity value) is received. Sonic Pi synths work slightly differently: you need to specify not only the pitch to play but also the duration of the note as it starts. This is no problem if all the notes have the same duration, or if you know the duration of each note when you start it playing, but it can’t handle keyboard input, where the player can change the length of the notes at will depending on how long a key is held down. The DAW uses what is called a gated synth, which can be switched on and off at will by the start and stop signals. This program simulates that by starting a “long” note of the required pitch playing when the midi note_on signal is received, then waiting until the note_off (or note_on with zero velocity) signal is received, at which point the note is killed, achieving the same effect.
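The bookkeeping this requires can be modelled in a few lines of plain Ruby (a toy model with no audio; in the real program “start” means playing a 100-second synth note and “stop” means killing its stored handle):

```ruby
playing = {}

# note_on: start a (notionally endless) note and remember its handle
def note_on(playing, note)
  playing[note] = true # in Sonic Pi: store the result of play note, sustain: 100
end

# note_off, or note_on with zero velocity: kill the remembered note
def note_off(playing, note)
  playing.delete(note) # in Sonic Pi: kill the stored handle
end

note_on(playing, 60)
note_on(playing, 64)
note_off(playing, 60)
puts playing.keys.inspect # => [64]
```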

If you only have one note playing at a time this is fairly easy to do, but if you have more than one note (polyphony) then you have to keep track of the notes that are playing and choose the right one to kill. Another feature I wanted to include was the ability to apply pitch-bend to any playing note, which means that you have to be able to control each note while it is playing.

The final program to do all of this is shown below. To a certain extent it breaks the rules of how to pass data between running live_loops, but it does seem to work pretty well.

#polyphonic midi input program with sustained notes
#experimental program by Robin Newman, November 2017
#pitchbend can be applied to notes at any time while they are sounding
use_debug false

set :synth,:tb303 #initial value
set :pb,0 #pitchbend initial value
kill_list=[] #list to contain notes to be killed
on_notes=[] #list of notes currently playing
ns=[] #array to store note playing references
nv=[0]*128 #array to store state of note for a particular pitch 1=on, 0=off

128.times do |i|
  ns[i]=("n"+i.to_s).to_sym #set up array of symbols :n0 ...:n127
end
#puts ns #for testing

define :sv do |sym| #extract numeric value associated with symbol eg :n64 => 64
  return sym.to_s[1..-1].to_i
end
#puts sv(ns[64]) #for testing

live_loop :choose_synth do
  b= sync "/midi/*/*/*/control_change" #use wild cards to work with any controller
  if b[0]==10 #adjust control number to suit your controller
    sc=(b[1].to_f/127*3 ).to_i
    set :synth,[:tri,:saw,:tb303,:fm][sc] #can change synth list if you wish
    puts "Synth #{get(:synth)} selected"
  end
end

live_loop :pb do #get current pitchbend value adjusted in range -12 to +12 (octave)
  b = sync "/midi/*/*/*/pitch_bend" #change to match your controller
  set :pb,(b[0]-8192).to_f/8192*12
end
with_fx :reverb,room: 0.8,mix: 0.6 do #add some reverb
  
  live_loop :midi_note_on do #this loop starts 100 second notes for specified pitches and stores reference
    use_real_time
    note, on = sync "/midi/*/*/*/note_on" #change to match your controller
    if on >0
      if nv[note]==0 #check if new start for the note
        puts "setting note #{note} on"
        vn=on.to_f/127
        nv[note]=1 #mark note as started for this pitch
        use_synth get(:synth)
        x = play note+get(:pb),attack: 0.01, sustain: 100,amp: vn #start playing note
        set ns[note],x #store reference to note in ns array
        on_notes.push [note,vn] #add note to list of notes playing
      end
    else
      if nv[note]==1 #check if this pitch is on
        nv[note]=0 #set this pitch off
        kill_list.push note #add note to list of notes to kill
      end
    end
  end
  
  live_loop :processnote,auto_cue: false,delay: 0.4 do # this applies pitchbend if any to note as it plays
    #delayed start helps reduce timing errors
    use_real_time
    if on_notes.length > 0 #check if any notes on
      k=on_notes.pop #get next note from "on" list
      puts "processing note #{k[0]}"
      in_thread do #start a thread to apply pitchbend to the note every 0.05 seconds
        v=get(ns[k[0]]) #retrieve control value for the note
        while nv[k[0]]==1 #while the note is still marked as on
          control v,note: k[0]+get(:pb),note_slide: 0.05,amp: k[1]
          sleep 0.05
        end
        #belt and braces kill here as well as in notekill liveloop: catches any that miss
        control v,amp: 0,amp_slide: 0.02 #fade note out in 0.02 seconds
        sleep 0.02
        puts "backup kill note #{k[0]}"
        kill v #kill the note referred to in ns array
      end
    end
    sleep 0.08 #so that the loop sleeps if no notes on
  end
  
  live_loop :notekill,auto_cue: false,delay: 0.3 do # this loop kills released notes
    #delayed start helps reduce timing errors
    use_real_time
    while kill_list.length > 0 #check if there are notes to be killed
      k=kill_list.pop #get next note to kill
      puts "killing note #{k}"
      v=get(ns[k]) #retrieve reference to the note
      control v,amp: 0,amp_slide: 0.02 #fade note out in 0.02 seconds
      sleep 0.02
      kill v #kill the note referred to in ns array
    end
    sleep 0.01 #so that the loop sleeps if no notes to be killed
  end
end #reverb

The program starts by initialising various variables.

:synth is set to contain the current synth to use, which can optionally be changed if your input controller supports a suitable input device. In my case I used an M-Audio Oxygen8 v2 keyboard with a series of rotary potentiometer inputs, and used one of these to select the synth.

:pb is set to contain the current pitchbend offset (scaled in the range -12 to +12, ie up or down an octave). Later in the program it is altered by the output of the pitchbend wheel on the keyboard.

Four lists hold data used to select notes to be operated upon. kill_list contains notes which have stopped being played and need to be killed off. on_notes contains the notes which are currently playing, together with their volume settings. ns contains references to the control value for notes which are playing, indexed by note value; it has 128 entries, one for each possible midi note 0-127. Finally, nv contains a 1 for a note which is playing and a 0 for one which is not; this list also has 128 values corresponding to midi notes 0-127.

The ns list is filled with the symbols :n0, :n1, … :n127. These are used with a set command to store the control value when a particular note is played. A corresponding function sv converts a symbol back to the numeric midi value with which it is associated; thus sv(:n64) => 64.
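Standalone, the scheme looks like this:

```ruby
# Build the 128 symbols :n0 .. :n127, and convert back with sv
ns = 128.times.map { |i| ("n" + i.to_s).to_sym }

def sv(sym)
  sym.to_s[1..-1].to_i # strip the leading "n", keep the number
end

puts ns[64]     # => n64
puts sv(ns[64]) # => 64
```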

The first live loop, :choose_synth, detects a control change on the selected channel (10 for my rotary control) and reads the rotary position in the range 0-127. It converts this to an integer in the range 0-3, which is then used to select a synth from a list and store it using a set :synth command.
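That scaling, pulled out of the live loop so it can be tried on its own (the list here mirrors the one in the program):

```ruby
# A 0-127 controller value becomes an index 0-3 into the synth list
SYNTH_LIST = [:tri, :saw, :tb303, :fm]

def synth_for(controller_value)
  SYNTH_LIST[(controller_value.to_f / 127 * 3).to_i]
end

puts synth_for(0)   # => tri
puts synth_for(64)  # => saw
puts synth_for(127) # => fm
```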

live_loop :pb is triggered by changes in the pitchbend input; it scales the input 0-16383 into the range -12 to +12 and stores the result using set :pb.
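The pitch-bend scaling, standalone: raw values 0-16383 (centre 8192) become a semitone offset between -12 and (just under) +12.

```ruby
# Scale a raw 14-bit pitch-bend value to semitones, as in live_loop :pb
def bend_semitones(raw)
  (raw - 8192).to_f / 8192 * 12
end

puts bend_semitones(8192) # => 0.0 (wheel centred)
puts bend_semitones(0)    # => -12.0 (octave down)
```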

The following live_loops are responsible for producing sounds, and they are wrapped in a with_fx :reverb command. The first, live_loop :midi_note_on, is set to use real time for a fast response, and waits to detect a note_on signal, storing the note value concerned in the variable note. The second parameter, on, stores the associated velocity. This will be > 0 if the note is starting and 0 if it is finishing. If on is > 0, the loop tests whether the note is already playing by checking the entry in nv[note]: this will be 0 if a note of that pitch is not playing and 1 if it is. If the note is not already playing, the loop prints a message saying the note is starting and calculates the note volume by scaling on to the range 0->1, storing the result in vn. It sets nv[note] to 1 to signify the note is playing, then retrieves and sets the current synth to use, and starts a “long” note of duration 100 playing at the designated pitch (modified by any pb value). So that the note can be controlled, a control value x is set and stored, pointed to by the appropriate symbol in the ns list. The note value is added to the list of playing notes, on_notes, together with its volume vn.
So that this live_loop can continue as quickly as possible, further action on the note is handled by the separate live_loops :processnote and :notekill.
Finally, the :midi_note_on live loop also deals with the case where on is zero, ie a key on the keyboard has been released. In this case it checks whether nv[note] is 1, ie the note is playing, and if so sets nv[note] to 0 to signify it is being stopped and adds the note value to the kill_list. Again it leaves the kill process to a different live_loop so that the :midi_note_on loop is ready to process the next keyboard input as soon as possible.

live_loop :processnote is used to control the note while it is playing, by starting a thread that adjusts the pitch of the note if the pitchbend value in :pb is altered. Again this live_loop works in real time to give a rapid response. First it checks whether there are any playing notes by looking at the length of the on_notes list. If there are, it extracts the next note value and its volume into a list variable k, and prints a message on the screen to signify this. In the thread it adjusts the pitch of the note as necessary while the value of nv[k[0]] is still 1, ie while the note is still playing. When the value of nv[k[0]] changes to 0, it goes on to fade the note to zero volume and then kill it. This is belt and braces, as the note should be killed more quickly by the :notekill live_loop; however, I found that occasionally this could be missed, and this second backup ensures that the note is killed. Sonic Pi objects with a message saying it can’t kill a sound that has already been killed, but otherwise it seems happy. If you look at the output log you will see that occasionally it is the :processnote loop that kills the note rather than the :notekill one.

live_loop :notekill exists to kill a note as quickly as possible after it has been released on the midi keyboard, ie as soon as possible after a note_on with a zero velocity parameter has been detected. Again it works in real time to make it as responsive as possible. It looks at the length of the kill_list and, if there are entries there, retrieves the next one. It puts a message on the screen, then retrieves the control value for the given note, fades the note to zero volume to avoid clicks, and kills it.

You will notice that there are start delays on the :processnote and :notekill live_loops. This was found to reduce the loading when running on a Raspberry Pi and therefore reduce the incidence of timing-error warnings.

One final bit of Ruby that I have used is the syntax #{variablename} to interpolate a variable’s value when printing with the puts command. It gives a slightly cleaner output.
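For example:

```ruby
note = 64
puts "killing note #{note}"      # interpolation => killing note 64
puts "killing note " + note.to_s # concatenation gives the same text
```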

An accompanying video is here.

The code can be downloaded here.

A visualiser for Sonic Pi (resizable version)

I received a request to see if the visualiser I recently published could a) be used on a split screen with Sonic Pi, each taking up half the screen, and b) be used on a second monitor attached to the computer. I looked at the code and produced this version, which uses a resizable screen that can also be dragged onto a second monitor if one is attached to the main computer. The main change is in the initial screen setup. This is now all done in the setup function, not in the settings function, which becomes redundant. Also, all references to displayHeight and displayWidth are replaced by height and width respectively.

In testing the new version, I found that the star routine could cause it to crash. This was down to the way I was generating the parameters, and I have altered the star function calls so that the second radius parameter no longer uses a / when computing its value. The two calls are shown below:

First version:

star(random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, circles[i]*stardata[0], circles[i]/stardata[1], int(stardata[2]+random(stardata[3])));

New resizable version:

star(random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, circles[i]*stardata[0]*0.25, circles[i]*stardata[1]*0.25, int(stardata[2]+random(stardata[3])));

The new processing code is shown below:

//Visualiser for use with Sonic Pi 3 written by Robin Newman, September 2017
// based on an original sketch https://github.com/andrele/Starburst-Music-Viz
//THIS VERSION FOR RESIZABLE SCREEN: CAN BE DRAGGED TO A SECOND MONITOR
//Changes: changed to resizable screen, updated for Processing 3, added colour, added rectangle and star shapes
//added OSC input (from Sonic Pi) to alter parameters as it runs, removed slider inputs.
//input OSC:   /viz/float       updates STROKE_MAX, STROKE_MIN and audioThresh
//             /viz/pos         updates XY random offset values (can be zero)
//             /viz/col         updates RGB values
//             /viz/shapes      sets shapes to be used from S, E and R (or combinations thereof)
//             /viz/stardata    sets data for star shapes
//             /viz/rotval      turns rotation of shapes on/off
//             /viz/shift       turns XY shift across screen on/off
//             /viz/stop        initiates sending stop all signal back to Sonic Pi port 4557

import ddf.minim.analysis.FFT;
import ddf.minim.*;
import oscP5.*; //to support OSC server
import netP5.*;

Minim minim;
AudioInput input;
FFT fftLog;

int recvPort = 5000; //can change to whatever is convenient. Match with use_osc command in Sonic Pi
OscP5 oscP5;
NetAddress myRemoteLocation; //used to send stop command back to Sonic Pi

// Setup params
color bgColor = color(0, 0, 0);

// Modifiable parameters
float STROKE_MAX = 10;
float STROKE_MIN = 2;
float audioThresh = .9;
float[] circles = new float[29];
float DECAY_RATE = 2;
//variables for OSC input
float [] fvalues = new float[5]; //STROKE_MAX, STROKE_MIN,audioThresh values
int [] cols = new int[3]; //r,g,b colours
int [] positions = new int[2];// random offset scales for X,Y
int [] stardata = new int[4];// data for star shape, number of points, random variation
int shiftflag = 0; //flag to control xy drift across the screen set by OSC message
int two = 0; //variable to force concentric shapes when more than one is displayed
String shapes = "E"; //shapes to be displayed, including multiples from S,E,R
int rotval =0;
int xoffset = 0,yoffset = 0;
int xdirflag = 1,ydirflag = 1;

void setup() { 
 size(400, 400,P2D);
 frame.setResizable(true); 
 //noLoop();
  frameRate(60);
   myRemoteLocation = new NetAddress("127.0.0.1",4557); //address to send commands to Sonic Pi
  minim = new Minim(this);
  input = minim.getLineIn(Minim.MONO, 2048); //nb static field MONO referenced from class not instance hence Minim not minim

  fftLog = new FFT( input.bufferSize(), input.sampleRate()); //setup logarithmic fast fourier transform
  fftLog.logAverages( 22, 3); // see http://code.compartmental.net/minim/fft_method_logaverages.html

  noFill();
  ellipseMode(RADIUS); //first two coords centre,3&4 width/2 and height/2
  fvalues[0]=1.0;
  fvalues[1]=0.0;
  fvalues[2]=0.32;
  cols[0] = 255;
  cols[1]=0;
  cols[2]=150;
  positions[0] = 50;
  positions[1]=40;
  stardata[0]=2;
  stardata[1]=4;
  stardata[2]=3;
  stardata[3]=5;
  /* start oscP5, listening for incoming messages at recvPort */
  oscP5 = new OscP5(this, recvPort);
  background(0);
}

void draw() {
  background(0);
  pushMatrix();
  //calculate changing xy offsets: shiftflag set to 0 to switch this off
  xoffset += 10*xdirflag*shiftflag;
  yoffset += 10*ydirflag*shiftflag;
  if(shiftflag==0){xoffset=0;yoffset=0;} //reset offset values to zero if shifting is off
  //reverse directions of shifting when limits reached
  if (xoffset >width/3){xdirflag=-1;}
  if (xoffset < -width/3){xdirflag=1;}
  if (yoffset > height/3){ydirflag=-1;}
  if (yoffset < -height/3){ydirflag=1;}
  //transform to new shift settings
  translate(width/2+xoffset, height/2+yoffset); //half of screen width and height (ie centre) plus shift values

  //optional rotate set by OSC call
  rotate(float(rotval)*(2*PI)/360);
  
  //get limits for stroke values and audioThresh from OSC data received
  STROKE_MIN=fvalues[0];
  STROKE_MAX=fvalues[1];
  audioThresh=fvalues[2]; 
  //println("fvalues: ",STROKE_MIN,STROKE_MAX,audioThresh); //for debugging

  // Push new audio samples to the FFT
  fftLog.forward(input.left);

  // Loop through frequencies and compute width for current shape stroke widths, and amplitude for size
  for (int i = 0; i < 29; i++) {

    // What is the average height in relation to the screen height?
    float amplitude = fftLog.getAvg(i);

    // If we hit a threshold, then set the "circle" radius to new value (originally circles, but applies to other shapes used)
    if (amplitude < audioThresh) {
      circles[i] = amplitude*(height/2);
    } else {
      // Otherwise, decay slowly
      circles[i] = max(0, min(height, circles[i]-DECAY_RATE));
    }
    pushStyle();
    // Set colour and opacity for this shape (opacity depends on amplitude)
    if (1>random(2)) {
      stroke(cols[0], cols[1], cols[2], amplitude*255);
    } else {
      stroke(cols[1], cols[2], cols[0], amplitude*255);
    }
    strokeWeight(map(amplitude, 0, 1, STROKE_MIN, STROKE_MAX)); //weight stroke according to amplitude value

    if (shapes.length()>1) { //if more than one shape being drawn, set two to 0 to draw them concentrically
      two = 0;
    } else {
      two = 1;
    }
    // draw current shapes
    if (shapes.contains("e")) {
      // Draw an ellipse for this frequency
      ellipse(random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, 1.4*circles[i], circles[i]);
    }
    if (shapes.contains("r")) {
      rectMode(RADIUS); 
      rect( random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, 1.4*circles[i], circles[i]);
    }
    if (shapes.contains("s")) {
      strokeWeight(3); //use fixed stroke weight when drawing stars
      //star data Xcentre,Ycentre,radius1,radius2,number of points
      star(random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, circles[i]*stardata[0]*0.25, circles[i]*stardata[1]*0.25, int(stardata[2]+random(stardata[3])));
    }
    popStyle();

    //System.out.println( i+" "+circles[i]); //for debugging
  } //end of for loop
  popMatrix();
}

void oscEvent(OscMessage msg) { //function to receive and parse OSC messages
  System.out.println("### got a message " + msg);
  System.out.println( msg);
  System.out.println( msg.typetag().length());

  if (msg.checkAddrPattern("/viz/float")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      fvalues[i] = msg.get(i).floatValue();
      System.out.print("float number " + i + ": " + msg.get(i).floatValue() + "\n");
    }
  }

  if (msg.checkAddrPattern("/viz/pos")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      positions[i] = msg.get(i).intValue();
      System.out.print("pos number " + i + ": " + msg.get(i).intValue() + "\n");
    }
  }

  if (msg.checkAddrPattern("/viz/col")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      cols[i] = msg.get(i).intValue();
      System.out.print("col number " + i + ": " + msg.get(i).intValue() + "\n");
    }
  }
  if (msg.checkAddrPattern("/viz/shapes")==true) {
    shapes=msg.get(0).stringValue();
    //for(int i =0; i<msg.typetag().length(); i++) {
    // shapes += msg.get(i).stringValue().toLowercase();      
    //}
    System.out.print("shapes code "+ shapes + "\n");
  }
  if (msg.checkAddrPattern("/viz/stardata")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      stardata[i] = msg.get(i).intValue();
      System.out.print("stardata number " + i + ": " + msg.get(i).intValue() + "\n");
    }
  }
  if (msg.checkAddrPattern("/viz/rotval")==true) {
    rotval =msg.get(0).intValue();
    System.out.print("rotval code "+ rotval + "\n");
  }
  if (msg.checkAddrPattern("/viz/shift")==true) {
    shiftflag =msg.get(0).intValue();
    System.out.print("shiftflag code "+ shiftflag + "\n");
  }
  if (msg.checkAddrPattern("/viz/stop")==true) {
    kill(); //stop Sonic Pi from running
  }
}

//function to draw a star (and polygons)
void star(float x, float y, float radius1, float radius2, int npoints) {
  float angle = TWO_PI / npoints;
  float halfAngle = angle/2.0;
  beginShape();
  for (float a = 0; a < TWO_PI; a += angle) {
    float sx = x + cos(a) * radius2;
    float sy = y + sin(a) * radius2;
    vertex(sx, sy);
    sx = x + cos(a+halfAngle) * radius1;
    sy = y + sin(a+halfAngle) * radius1;
    vertex(sx, sy);
  }
  endShape(CLOSE);
}

void kill(){ //function to send stop message to Sonic Pi on local machine
  OscMessage myMessage = new OscMessage("/stop-all-jobs");
  myMessage.add("RBN_GUID"); //any value here. Need a guid to make Sonic Pi accept the command
  oscP5.send(myMessage, myRemoteLocation); 
}

A modified Sonic Pi driver program is shown below.

#Program to drive Sonic Pi 3 visualiser written in "processing"
#by Robin Newman, September 2017
#see article at https://rbnrpi.wordpress.com
#This program for resizeable version of visualiser
#set up OSC address of processing sketch
use_osc '127.0.0.1',5000
#select shapes to show
osc "/viz/shapes","se"  #"s" "e" "r" Star,Ellipse, Rectangle or combination
sleep 0.1

live_loop :c do
  #choose starting colour for shapes
  osc "/viz/col",rrand_i(0,64),rrand_i(128,255),rrand_i(0,255)
  sleep 0.1
end

live_loop :f do
  #set Stroke max min widths and audioThreshold
  osc "/viz/float",([8.0,5.0,3.0].choose),[1.0,2.0].choose,(0.4+rand(0.3))
  sleep 2
end

#set range of random positional offset (can be 0,0)
#automatically disabled when showing more than one shape
osc "/viz/pos",10,0

#control "bouncing" shapes around the screen 1 for on 0 for off
osc "/viz/shift",0

live_loop :s do
  #setup star data inner/outer circle radius, number of points
  #and random variation of number of points
  osc "/viz/stardata",[1,2,3].choose,[1,2,4].choose,5,1
  sleep 2
end

rv=0 #variable for current rotation
live_loop :r do
  rv+=5*[1,1].choose # choose rotation increment
  rv=rv%360
  osc "/viz/rotval",rv #change rv to 0 to disable rotation
  sleep 0.1
end

#Now setup the sounds to play which will trigger the visualiser
use_bpm 60
set_volume! 5
use_random_seed 999

with_fx :level do |v|
  control v,amp: 0 #control the volume using fx :level
  sleep 0.1
  
  in_thread do #this loop does the volume control
    control v,amp: 1,amp_slide: 10 #fade in
    sleep 140
    control v,amp: 0,amp_slide: 10 #fade out
    sleep 10
    osc "/viz/stop" #send /viz/stop OSC message to sketch
    #sketch sends back a /stop-all-jobs command to port 4557
  end
  
  #  This drum loop is written by Eli see https://groups.google.com/forum/#!topic/sonic-pi/u71MnHnmkVY
  #  used with his permission. I liked it, and it has good percussive output
  #  to drive a visualiser
  live_loop :drums do
    this_sample = [:loop_compus, :loop_tabla, :loop_safari].ring
    start = [ 0.0 , 0.125 , 0.25 , 0.375 , 0.5 , 0.625 , 0.75 , 0.875 ].ring
    sample this_sample.look , beat_stretch: 4, start: start.look, rate: 0.5
    sleep 1
    tick
  end
end

If you are using the original version, I suggest you alter the line that calls the star-drawing routine to match the resizable version, and use the new Sonic Pi driver program with it.

You can download the new versions here

The original article shows how to set the system up.

A visualiser for Sonic Pi 3

Ever since Sonic Pi had a transparency mode added (not available on the Raspberry Pi versions) I have been interested in adding a visualiser graphics display as a backdrop. The built-in Scope can give attractive displays, but I wanted something with a bit more colour, and which covered the whole screen. I did some experiments with the iTunes visualiser, which added quite a nice backdrop, but only with a significant delay between the audio and the fluctuations of the display. The recent arrival of Sonic Pi 3 allowed further possibilities, because it enabled the use of OSC messaging. I did a trawl of the internet and came across a promising-looking Processing sketch which produced a pattern that reacted to incoming audio. It was written several years ago and was monochrome only. I played around with it, upgrading it to work on the latest version of Processing, and added some colour. I experimented with adding further basic shapes to the display (it originally used an ellipse primitive, set up to give concentric circles, which could also be driven to produce a starburst effect). I added rectangles and star shapes and experimented with offsetting these as the program ran, and also with using more than one basic shape at the same time. I then added some code so that the sketch could receive incoming OSC messages sent from Sonic Pi 3, which could be used to control various parameters of the shapes, such as stroke width, colour, and offsets on the screen. I added further flexibility such as the ability to rotate the shapes, and to shift the whole display vertically and horizontally across the screen. The final setup works well with Sonic Pi 3, which controls the patterns both with the audio signal it produces and with OSC messages which can be sent to the sketch display code. These can be timed appropriately with the musical content.

The code for the sketch is shown below

//Visualiser for use with Sonic Pi 3 written by Robin Newman, September 2017
// based on an original sketch https://github.com/andrele/Starburst-Music-Viz
//Changes: changed to full screen, updated for Processing 3, added colour, added rectangle and star shapes
//added OSC input (from Sonic Pi) to alter parameters as it runs, removed slider inputs.
//input OSC:   /viz/float       updates STROKE_MAX, STROKE_MIN and audioThresh
//             /viz/pos         updates XY random offset values (can be zero)
//             /viz/col         updates RGB values
//             /viz/shapes      sets shapes to be used from S, E and R (or combinations thereof)
//             /viz/stardata    sets data for star shapes
//             /viz/rotval      turns rotation of shapes on/off
//             /viz/shift       turns XY shift across screen on/off
//             /viz/stop        initiates sending stop all signal back to Sonic Pi port 4557

import ddf.minim.analysis.FFT;
import ddf.minim.*;
import oscP5.*; //to support OSC server
import netP5.*;

Minim minim;
AudioInput input;
FFT fftLog;

int recvPort = 5000; //can change to whatever is convenient. Match with the use_osc command in Sonic Pi
OscP5 oscP5;
NetAddress myRemoteLocation; //used to send stop command back to Sonic Pi

// Setup params
color bgColor = color(0, 0, 0);

// Modifiable parameters
float STROKE_MAX = 10;
float STROKE_MIN = 2;
float audioThresh = .9;
float[] circles = new float[29];
float DECAY_RATE = 2;
//variables for OSC input
float [] fvalues = new float[5]; //STROKE_MAX, STROKE_MIN,audioThresh values
int [] cols = new int[3]; //r,g,b colours
int [] positions = new int[2];// random offset scales for X,Y
int [] stardata = new int[4];// data for star shape, number of points, random variation
int shiftflag = 0; //flag to control xy drift across the screen set by OSC message
int two = 0; //variable to force concentric shapes when more than one is displayed
String shapes = "E"; //shapes to be displayed, including multiples from S,E,R
int rotval =0;
int xoffset = 0,yoffset = 0;
int xdirflag = 1,ydirflag = 1;

void settings() {
  fullScreen(P3D);
}

void setup() {  
  frameRate(60);
   myRemoteLocation = new NetAddress("127.0.0.1",4557); //address to send commands to Sonic Pi
  minim = new Minim(this);
  input = minim.getLineIn(Minim.MONO, 2048); //nb static field MONO referenced from class not instance hence Minim not minim

  fftLog = new FFT( input.bufferSize(), input.sampleRate()); //setup logarithmic fast fourier transform
  fftLog.logAverages( 22, 3); // see http://code.compartmental.net/minim/fft_method_logaverages.html

  noFill();
  ellipseMode(RADIUS); //first two coords centre,3&4 width/2 and height/2
  fvalues[0]=1.0;
  fvalues[1]=0.0;
  fvalues[2]=0.32;
  cols[0] = 255;
  cols[1]=0;
  cols[2]=150;
  positions[0] = 50;
  positions[1]=40;
  stardata[0]=2;
  stardata[1]=4;
  stardata[2]=3;
  stardata[3]=5;
  /* start oscP5, listening for incoming messages at recvPort */
  oscP5 = new OscP5(this, recvPort);
  background(0);
}

void draw() {
  background(0);
  pushMatrix();
  //calculate changing xy offsets: shiftflag set to 0 to switch this off
  xoffset += 10*xdirflag*shiftflag;
  yoffset += 10*ydirflag*shiftflag;
  if(shiftflag==0){xoffset=0;yoffset=0;} //reset offset values to zero if shifting is off
  //reverse directions of shifting when limits reached
  if (xoffset >displayWidth/3){xdirflag=-1;}
  if (xoffset < -displayWidth/3){xdirflag=1;}
  if (yoffset > displayHeight/3){ydirflag=-1;}
  if (yoffset < -displayHeight/3){ydirflag=1;}
  //transform to new shift settings
  translate(displayWidth/2+xoffset, displayHeight/2+yoffset); //half of screen width and height (ie centre) plus shift values

  //optional rotate set by OSC call
  rotate(float(rotval)*(2*PI)/360);
  
  //get limits for stroke values and audioThreshold from OSC data received
  STROKE_MIN=fvalues[0];
  STROKE_MAX=fvalues[1];
  audioThresh=fvalues[2]; 
  //println("fvalues: ",STROKE_MIN,STROKE_MAX,audioThresh); //for debugging

  // Push new audio samples to the FFT
  fftLog.forward(input.left);

  // Loop through frequencies and compute width for current shape stroke widths, and amplitude for size
  for (int i = 0; i < 29; i++) {

    // What is the average height in relation to the screen height?
    float amplitude = fftLog.getAvg(i);

    // If we hit a threshold, then set the "circle" radius to new value (originally circles, but applies to other shapes used)
    if (amplitude < audioThresh) {
      circles[i] = amplitude*(displayHeight/2);
    } else { // Otherwise, decay slowly
      circles[i] = max(0, min(displayHeight, circles[i]-DECAY_RATE));
    }
    pushStyle();
    // Set colour and opacity for this shape (opacity depends on amplitude)
    if (1>random(2)) {
      stroke(cols[0], cols[1], cols[2], amplitude*255);
    } else {
      stroke(cols[1], cols[2], cols[0], amplitude*255);
    }
    strokeWeight(map(amplitude, 0, 1, STROKE_MIN, STROKE_MAX)); //weight stroke according to amplitude value

    if (shapes.length()>1) { //if more than one shape being drawn, set two to 0 to draw them concentrically
      two = 0;
    } else {
      two = 1;
    }
    // draw current shapes
    if (shapes.contains("e")) {
      // Draw an ellipse for this frequency
      ellipse(random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, 1.4*circles[i], circles[i]);
    }
    if (shapes.contains("r")) {
      rectMode(RADIUS); 
      rect( random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, 1.4*circles[i], circles[i]);
    }
    if (shapes.contains("s")) {
      strokeWeight(3); //use fixed stroke weight when drawing stars
      //star data Xcentre,Ycentre,radius1,radius2,number of points
      star(random(-1, 1)*amplitude*positions[0]*two, random(-1, 1)*amplitude*positions[1]*two, circles[i]*stardata[0], circles[i]/stardata[1], int(stardata[2]+random(stardata[3])));
    }
    popStyle();

    //System.out.println( i+" "+circles[i]); //for debugging
  } //end of for loop
  popMatrix();
}

void oscEvent(OscMessage msg) { //function to receive and parse OSC messages
  System.out.println("### got a message " + msg);
  System.out.println( msg);
  System.out.println( msg.typetag().length());

  if (msg.checkAddrPattern("/viz/float")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      fvalues[i] = msg.get(i).floatValue();
      System.out.print("float number " + i + ": " + msg.get(i).floatValue() + "\n");
    }
  }

  if (msg.checkAddrPattern("/viz/pos")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      positions[i] = msg.get(i).intValue();
      System.out.print("pos number " + i + ": " + msg.get(i).intValue() + "\n");
    }
  }

  if (msg.checkAddrPattern("/viz/col")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      cols[i] = msg.get(i).intValue();
      System.out.print("col number " + i + ": " + msg.get(i).intValue() + "\n");
    }
  }
  if (msg.checkAddrPattern("/viz/shapes")==true) {
    shapes=msg.get(0).stringValue();
    //for(int i =0; i<msg.typetag().length(); i++) {
    // shapes += msg.get(i).stringValue().toLowerCase();
    //}
    System.out.print("shapes code "+ shapes + "\n");
  }
  if (msg.checkAddrPattern("/viz/stardata")==true) {
    for (int i =0; i<msg.typetag().length(); i++) {
      stardata[i] = msg.get(i).intValue();
      System.out.print("stardata number " + i + ": " + msg.get(i).intValue() + "\n");
    }
  }
  if (msg.checkAddrPattern("/viz/rotval")==true) {
    rotval =msg.get(0).intValue();
    System.out.print("rotval code "+ rotval + "\n");
  }
  if (msg.checkAddrPattern("/viz/shift")==true) {
    shiftflag =msg.get(0).intValue();
    System.out.print("shiftflag code "+ shiftflag + "\n");
  }
  if (msg.checkAddrPattern("/viz/stop")==true) {
    kill(); //stop Sonic Pi from running
  }
}

//function to draw a star (and polygons)
void star(float x, float y, float radius1, float radius2, int npoints) {
  float angle = TWO_PI / npoints;
  float halfAngle = angle/2.0;
  beginShape();
  for (float a = 0; a < TWO_PI; a += angle) {
    float sx = x + cos(a) * radius2;
    float sy = y + sin(a) * radius2;
    vertex(sx, sy);
    sx = x + cos(a+halfAngle) * radius1;
    sy = y + sin(a+halfAngle) * radius1;
    vertex(sx, sy);
  }
  endShape(CLOSE);
}

void kill(){ //function to send stop message to Sonic Pi on local machine
  OscMessage myMessage = new OscMessage("/stop-all-jobs");
  myMessage.add("RBN_GUID"); //any value here. Need a guid to make Sonic Pi accept the command
  oscP5.send(myMessage, myRemoteLocation); 
}

The basis of a visualiser is a fast Fourier transform analysis of the incoming audio, calculating amplitudes for the different frequency components of the signal, which is monitored continuously, buffer by buffer. I don’t intend to go anywhere near the complex mathematics involved, but there are a lot of useful articles at different levels which you can read on the subject. I quite liked this one on the Fourier Transform: https://betterexplained.com/articles/an-interactive-guide-to-the-fourier-transform/. It is also useful to look at the analysis section of the minim library documentation http://code.compartmental.net/minim/index_analysis.html if you want more detailed information on the calls employed.
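To give a flavour of what the FFT is doing (without the clever fast algorithm), here is a toy plain-Ruby sketch of a single-bin discrete Fourier transform. The helper name dft_magnitude is mine, and this is purely an illustration, not what the minim library actually runs:

```ruby
# Toy single-bin DFT magnitude - an illustration of what an FFT
# computes for every frequency band, not production analysis code.
def dft_magnitude(samples, bin)
  n = samples.length
  re = 0.0
  im = 0.0
  samples.each_with_index do |s, t|
    angle = 2 * Math::PI * bin * t / n
    re += s * Math.cos(angle)
    im -= s * Math.sin(angle)
  end
  Math.sqrt(re * re + im * im) / n
end

# A pure sine at 4 cycles per buffer shows up almost entirely in bin 4
n = 64
sine = (0...n).map { |t| Math.sin(2 * Math::PI * 4 * t / n) }
puts dft_magnitude(sine, 4) # close to 0.5
puts dft_magnitude(sine, 7) # close to 0.0
```

The sketch does this for every band of a logarithmically spaced set, which is why low and high frequencies in the music drive different shapes.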
You may find it easier to look at the original sketch from which I started https://github.com/andrele/Starburst-Music-Viz before taking on board the additions I have added to make the sketch more flexible and wide ranging.
I have added various transformations. Starting at the beginning of the main draw() function there are x and y offsets which increase each time the loop iterates, until a maximum offset is reached, when they reverse direction. This causes the shapes to move smoothly left and right and up and down the screen. A shiftflag enables this to be switched on and off by one of the control OSC messages.
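The bounce logic is easier to see in isolation. This is a plain-Ruby sketch of the same idea (the sketch itself does it in Java inside draw(); step_offset is a hypothetical helper name):

```ruby
# Toy version of the offset logic at the top of draw(): the offset
# walks in one direction until the limit is passed, then the
# direction flag reverses, giving a smooth bounce between the limits.
def step_offset(offset, dirflag, limit, step = 10)
  offset += step * dirflag
  dirflag = -1 if offset > limit
  dirflag = 1 if offset < -limit
  [offset, dirflag]
end

offset, dirflag = 0, 1
11.times { offset, dirflag = step_offset(offset, dirflag, 100) }
# offset has just passed the limit (110) and the direction has flipped
```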
There follows an optional rotate command which can rotate the axis of the shapes being drawn, again controlled by an incoming OSC message.
Next values for setting the limits on the stroke size being used to render the shapes are read from data received by an OSC message, together with an audioThreshold setting.
A buffer from the input audio is now processed, and amplitude values for different frequency ranges are stored in an array of “circle” settings. NB the name circles is used for the variable as this was the only shape used in the original; perhaps it might be better named shapes now, as there are three different classes of shape used. A new value for the “radius” is stored if the amplitude is less than the audioThreshold setting; otherwise a “decayed” version of the value stored on the previous iteration is kept (suitable minimum values are set).
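The threshold/decay update can be sketched as a small pure function. This is plain Ruby mirroring the Java logic in draw(), with update_radius as a name of my own invention:

```ruby
# Mirror of the per-band radius update in draw(): below the audio
# threshold the radius tracks the amplitude, otherwise the stored
# value decays by DECAY_RATE per frame, clamped to the screen height.
def update_radius(current, amplitude, thresh, height, decay = 2.0)
  if amplitude < thresh
    amplitude * (height / 2.0)
  else
    [0.0, [height.to_f, current - decay].min].max
  end
end

update_radius(0.0, 0.5, 0.9, 800)    # below threshold: tracks amplitude
update_radius(300.0, 0.95, 0.9, 800) # above threshold: decays slowly
```

The decay is what makes loud transients leave shapes that shrink gracefully rather than vanishing between frames.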
RGB colour values are set using values received from an OSC message, and these are then swapped at random before setting the stroke colour attributes to be used.
Next a flag two is set according to whether one shape or more than one has been selected. In the latter case the shapes are forced to be drawn concentrically, by nullifying the offset values (setting two = 0). The selected shapes are then drawn, before the loop starts a further iteration.

The oscEvent(OscMessage msg) function is triggered when an OSC message is received. It contains sections to parse the various OSC messages which can be sent from Sonic Pi to the Processing sketch. The parameters of each command are used to update the various arrays containing information used by the draw() function, e.g. cols[] holds RGB values, stardata[] holds the parameters for the star shapes, and fvalues[] holds the floating point values for the STROKE_MAX, STROKE_MIN and audioThresh settings. These and other similar settings are updated when the relevant OSC messages are received, so Sonic Pi can control the operation of the sketch as it runs.
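The dispatch pattern boils down to matching on the address and copying the arguments into the relevant piece of state. A minimal plain-Ruby analogue (handle_osc and the STATE hash are illustrative names, not part of the sketch):

```ruby
# Toy dispatcher mirroring oscEvent(): each address pattern routes
# its arguments into the piece of state that draw() reads next frame.
STATE = { cols: [255, 0, 150], positions: [50, 40], rotval: 0 }

def handle_osc(address, args)
  case address
  when "/viz/col"    then STATE[:cols] = args
  when "/viz/pos"    then STATE[:positions] = args
  when "/viz/rotval" then STATE[:rotval] = args[0]
  end
end

handle_osc("/viz/col", [0, 200, 64]) # STATE[:cols] is now [0, 200, 64]
```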
The star function draws the star shape. It is lifted straight from the star example at https://processing.org/examples/star.html.
The final function kill() is used to send a /stop-all-jobs OSC message back to Sonic Pi to stop all programs running on that machine. It is triggered by a /viz/stop OSC message being sent from Sonic Pi to the sketch.
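For anyone curious what actually travels over the wire, an OSC message such as /stop-all-jobs is just the address, a type tag and the arguments, each NUL-padded to a 4-byte boundary. A plain-Ruby sketch of the encoding (the sketch itself relies on oscP5 for this; osc_string and stop_message are my own helper names):

```ruby
require 'socket'

# Pad an OSC string with NULs to a 4-byte boundary (OSC 1.0 encoding)
def osc_string(s)
  s + "\x00" * (4 - s.length % 4)
end

# Assemble a /stop-all-jobs message with one string argument,
# like the one the sketch's kill() function sends via oscP5
def stop_message(guid)
  osc_string("/stop-all-jobs") + osc_string(",s") + osc_string(guid)
end

packet = stop_message("RBN_GUID")
# UDPSocket.new.send(packet, 0, "127.0.0.1", 4557) # fire at Sonic Pi
```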

As far as Sonic Pi is concerned, it produces two sorts of input. First, the audio it produces is fed back to the sketch, which listens to the default audio input. On my Mac I used the program Loopback ( https://rogueamoeba.com/loopback/ ) to do this. This is a paid-for program, but you can use it for free for 10 minutes or so at a time. It is based on the previous free SoundFlower utility, but that has not been fully updated for recent macOS versions and you may find it difficult to get working instead. The setup I used is shown below. Note that Sonic Pi is added as a Source and that its output is monitored through the speakers, so that the audio is fed both there and to the Processing sketch. This loopback device will appear in the list of devices in the Audio MIDI Setup program, and should be selected as the default input device.

Secondly, Sonic Pi is used to send OSC messages to the Processing sketch to control its various parameters. This can be done by sending “one off” messages, or by putting the message sender into a live loop, sending messages at regular intervals. An example is shown below, where the OSC messages are combined with the program producing the music, but they can be run as a separate program in a different buffer, which is an advantage if you are visualising a linear piece and do not want to restart it every time you press run to update the OSC messages sent.

#Program to drive Sonic Pi 3 visualiser written in "processing"
#by Robin Newman, September 2017
#see article at https://rbnrpi.wordpress.com
#set up OSC address of processing sketch
use_osc '127.0.0.1',5000
#select shapes to show
osc "/viz/shapes","e"  #"s" "e" "r" Star,Ellipse, Rectangle or combination
sleep 0.1

live_loop :c do
  #choose starting colour for shapes
  osc "/viz/col",rrand_i(0,64),rrand_i(128,255),rrand_i(0,255)
  sleep 0.1
end

live_loop :f do
  #set Stroke max min widths and audioThreshold
  osc "/viz/float",([8.0,5.0,3.0].choose),[1.0,2.0].choose,(0.4+rand(0.3))
  sleep 2
end

#set range of random positional offset (can be 0,0)
#automatically disabled when showing more than one shape
osc "/viz/pos",20,0

#control "bouncing" shapes around the screen 1 for on 0 for off
osc "/viz/shift",0

live_loop :s do
  #setup star data inner/outer circle radius, number of points
  #and random variation of number of points
  osc "/viz/stardata",[1,2].choose,[1,2,3].choose,5,2
  sleep 4
end

rv=0 #variable for current rotation
live_loop :r do
  rv+=5*[-8,1].choose # choose rotation increment
  rv=rv%360
  osc "/viz/rotval",rv #change rv to 0 to disable rotation
  sleep 0.1
end

#Now setup the sounds to play which will trigger the visualiser
use_bpm 60
set_volume! 5
use_random_seed 999

with_fx :level do |v|
  control v,amp: 0 #control the volume using fx :level
  sleep 0.1
  
  in_thread do #this loop does the volume control
    control v,amp: 1,amp_slide: 10 #fade in
    sleep 140
    control v,amp: 0,amp_slide: 10 #fade out
    sleep 10
    osc "/viz/stop" #send /viz/stop OSC message to sketch
    #sketch sends back a /stop-all-jobs command to port 4557
  end
  
  #This drum loop is written by Eli see https://groups.google.com/forum/#!topic/sonic-pi/u71MnHnmkVY
  #used with his permission. I liked it, and it has good percussive output
  #to drive a visualiser
  live_loop :drums do
    this_sample = [:loop_compus, :loop_tabla, :loop_safari].ring
    start = [ 0.0 , 0.125 , 0.25 , 0.375 , 0.5 , 0.625 , 0.75 , 0.875 ].ring
    sample this_sample.look , beat_stretch: 4, start: start.look, rate: 0.5
    sleep 1
    tick
  end
end

As this program runs, you can alter the parameters, e.g. the shape(s) chosen, the shift parameter etc., and rerun, effectively live-coding the visualiser. The program uses a great percussion loop written by Eli, which he said I might use: https://groups.google.com/forum/#!topic/sonic-pi/u71MnHnmkVY

There is a video of the program in operation on YouTube here

and you can download the code for the sketch and Sonic Pi programs here

You can download and install Processing from https://processing.org/download/

You will also have to install the libraries Minim and oscP5 from the Sketch=>Import Library… menu.
To use, set up the loopback audio and select it as the default input device. Load and play the sketch in Processing, then run the Sonic Pi program on the same computer. You can adjust the transparency of the Sonic Pi screen on the Preferences Visuals tab to make it semi-transparent, or minimise Sonic Pi once it is running, if you are not going to do live coding of the control OSC messages.

Sonic Pi 3 drives the LEDs on pi-topPULSE module

The new pi-topPULSE module contains a 7×7 LED array, a microphone and an audio amplifier. The amplifier can be used by Sonic Pi as its output audio path, and Sonic Pi 3 can also control the LEDs on the PULSE module using OSC messages, which are received by a Python program that decodes them and uses the commands to control the LEDs.

I have written a full article giving a detailed explanation of how this is done, which you can read here.

There is also a link in the article to the code which is used, and to a video of the program in action on youtube.