A TouchOSC JukeBox for SonicPi

I have been working on a project to produce a TouchOSC-driven jukebox program for Sonic Pi, so that I can run it headless.

I have made a video of the project which you can see here

There is a full article including links to the code which is here


Sonic Pi 3 drives the LEDs on the pi-topPULSE module

The new pi-topPULSE module contains a 7×7 LED array, a microphone and an audio amplifier. The amplifier can be used by Sonic Pi as its output audio path, and Sonic Pi version 3 can also control the LEDs on the PULSE module using OSC messages. These are received by a Python program, which decodes them and uses the commands to drive the LEDs.
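To illustrate the approach, here is a minimal Python sketch of the receiving side, assuming the python-osc package and the pi-top "ptpulse" LED library; the /led OSC address, its argument layout and the port number are my own illustration, not necessarily those used in the project.

from pythonosc import dispatcher, osc_server
from ptpulse import ledmatrix  # pi-topPULSE LED library (assumed installed)

def set_led(address, x, y, r, g, b):
    # Decode one OSC message into a single pixel update on the 7x7 array
    ledmatrix.set_pixel(x, y, r, g, b)
    ledmatrix.show()

d = dispatcher.Dispatcher()
d.map("/led", set_led)  # Sonic Pi sends e.g. osc "/led", 3, 3, 255, 0, 0

# Listen for OSC messages from Sonic Pi 3
server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 8000), d)
server.serve_forever()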

I have written a full article giving a detailed explanation of how this is done, which you can read here.

There is also a link in the article to the code which is used, and to a video of the program in action on youtube.

Sonic Pi 3 Synth Driver

Following on from the PS3 project, I expanded and modified some of the same code to produce a project I have entitled Sonic Pi 3 Synth Driver. It incorporates the “Long Note” synth introduced in the PS3 project, which produces a continuously sounding note whose characteristics, such as synth, pitch, cutoff, volume and reverb, can be altered while it plays. The project drives Sonic Pi 3 using OSC messages from TouchOSC running on an iPad. Full details are in this article, which includes links to a video and to the code used.
TouchOSC is available on the App Store, and is also available for Android.
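Under the hood, the TouchOSC controls just emit ordinary OSC messages. As a rough illustration (the addresses below are mine, not the project's actual layout), this Python snippet using python-osc sends the same kind of messages to Sonic Pi 3, which listens for OSC on port 4559 and picks them up in a live_loop with sync "/osc...":

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("192.168.1.60", 4559)  # IP of the Sonic Pi machine (example)

# Each fader or button on the TouchOSC layout maps to one OSC address
client.send_message("/control/cutoff", 85)    # e.g. a cutoff fader
client.send_message("/control/volume", 0.8)   # e.g. a volume fader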

Control Sonic Pi 3 with a PS3 wireless controller

Recently I published a video on YouTube showing Sonic Pi 3 being controlled by a PS3 wireless games controller. This post contains a link to the software required to do this, should you wish to try it out.

Two files are included. The first, ps3.py, should be run on a Raspberry Pi into which the wireless dongle of the PS3 controller is plugged. Two pieces of supporting software need to be installed first.

sudo apt-get update
sudo apt-get install joystick
sudo pip3 install python-osc

The second software file is a Sonic Pi 3 script, which should be loaded into a buffer in Sonic Pi 3; this can be running either on the Raspberry Pi or on an external computer.

NB: if you want to run Sonic Pi 3 on the Raspberry Pi, you need to wait for Raspbian Stretch to be released, or upgrade a copy of Raspbian Jessie to Stretch yourself. The released version of Sonic Pi 3 will NOT run on Raspbian Jessie.
If Sonic Pi is running on a remote machine, its address is specified on the command line when running the ps3.py script by adding the argument --sp xxx.xxx.xxx.xxx, where xxx.xxx.xxx.xxx is the IP address of the computer running Sonic Pi.
In addition, in the IO preferences in Sonic Pi you must make sure that the option to receive remote OSC messages is ticked.

To run the ps3.py file, make sure it is executable by typing:
chmod 755 ps3.py
then type:
./ps3.py
or, putting in the appropriate IP address of your remote Sonic Pi 3 computer:
./ps3.py --sp xxx.xxx.xxx.xxx
Alternatively you can type:
python3 ps3.py
or
python3 ps3.py --sp xxx.xxx.xxx.xxx
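If you are curious about what the script does internally, here is a minimal sketch of the idea behind ps3.py: read events from the Linux joystick device and forward them to Sonic Pi 3 as OSC messages. The OSC addresses here are illustrative; see the downloadable ps3.py for the real mapping.

#!/usr/bin/env python3
import argparse
import struct
from pythonosc.udp_client import SimpleUDPClient

parser = argparse.ArgumentParser()
parser.add_argument("--sp", default="127.0.0.1",
                    help="IP address of the computer running Sonic Pi 3")
args = parser.parse_args()

client = SimpleUDPClient(args.sp, 4559)  # Sonic Pi 3 listens for OSC on port 4559

# Each event from /dev/input/js0 is 8 bytes: time, value, type, number
with open("/dev/input/js0", "rb") as js:
    while True:
        t, value, ev_type, number = struct.unpack("IhBB", js.read(8))
        if ev_type & 0x01:    # button pressed or released
            client.send_message("/ps3/button", [number, value])
        elif ev_type & 0x02:  # analogue stick or trigger moved
            client.send_message("/ps3/axis", [number, value])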

The software can be downloaded from here

WiFi to ethernet adapter for an ethernet-ready TV: new version published

Today I have published a new version of one of the most popular projects on this blog: a WiFi to ethernet adapter for an ethernet-ready TV. The new version is written for the latest Raspbian distribution, Lite or “PIXEL”, published on 2016-09-23, and reflects changes in the way IP addresses are now handled using systemd.

Full details can be found here

Sonic-Pi controlled conversation between two McRoboFaces


Following on from my previous project with a single McRoboFace, 4Tronix have kindly supplied me with a second face, enabling me to develop the idea and control two McRoboFaces with Sonic Pi. I have amended the previous project to feed the outputs of the left and right audio channels to two separate ADC inputs on the Picon Zero board, and daisy-chained the two McRoboFaces (you merely connect the Dout pin of the first to the Din pin of the second), addressing the LEDs on the second McRoboFace with an offset of 17. I have developed routines in the Python driver program to control each face separately. Each mouth can be set to a fixed position (closed, open, smile or sad), or can be fed from the audio input via the ADC, so that it is triggered to open when the signal exceeds a preset threshold.
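The offset trick is simple to express in code. Below is a minimal sketch, assuming the 4Tronix piconzero library (pz.setPixel writes to the WS2812 chain, pz.readInput reads an input channel); the channel numbers and threshold value are illustrative, not the ones in my driver program:

import piconzero as pz

FACE_OFFSET = 17   # each McRoboFace has 17 LEDs, so the second face starts at 17
THRESHOLD = 60     # open the mouth when the audio reading exceeds this

pz.init()

def set_face_pixel(face, led, r, g, b):
    # face 0 addresses LEDs 0-16, face 1 addresses LEDs 17-33
    pz.setPixel(led + face * FACE_OFFSET, r, g, b)

def mouth_should_open(adc_channel):
    # Trigger the mouth from the audio level on one ADC input
    return pz.readInput(adc_channel) > THRESHOLD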

In order to provide greater control, and to synchronise it to the audio feed from Sonic Pi, I have added Ruby routines to the Sonic Pi program which send text strings to the Python program via a text file. These strings can set the mouth state for each face, and also alter the colours of the LEDs. Because there is only a common brightness setting for both faces (using PWM), if only one face is receiving audio I use that output to control the brightness of both faces; if both faces are set to receive audio then I set the brightness at a fixed value.

The conversation is entirely controlled from Sonic Pi. It plays the audio for each face via a series of pre-recorded samples, giving each face a separate audio channel by setting the pan: value to either -1 or 1. Before each sample is played, control signals are sent via the text file to set up the required state for each face. At the end of the presentation both faces receive audio input together as they “sing” along to a round of Frère Jacques. Finally a control signal is sent to reduce the brightness to zero, effectively switching off all the LEDs.

Writing and reading the data via a text file is perhaps not the most elegant way to do things, but it does seem to work reliably. I used a technique I developed previously when reading in large numbers of sample files: the Sonic Pi program is “held up” using a cue and sync while the writing completes; otherwise you can run into “too far behind” errors. On the receiving side, at the start of its main loop the Python program polls for the existence of the text file and, if it finds one, reads the data, deletes the file, and then alters its parameters according to the received data. It took quite a lot of experimentation to get the timings and consistent operation of the two programs correct, but having done so, the final system is quite stable. I boost the audio level to amp: 4 in Sonic Pi, which gives a good signal for the ADC inputs to latch on to.
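The polling loop on the Python side is straightforward; here is a minimal sketch of the pattern (the filename and message format are illustrative, not those used in the project):

import os
import time

CONTROL_FILE = "/tmp/face_control.txt"

while True:
    if os.path.exists(CONTROL_FILE):
        with open(CONTROL_FILE) as f:
            commands = f.read().split()
        os.remove(CONTROL_FILE)         # consume the message
        for cmd in commands:
            print("applying:", cmd)     # e.g. set a mouth state or LED colour
    time.sleep(0.05)                    # poll at 20 Hz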

Setup is fairly straightforward. The calibrate button used in the single-face project is utilised again, and sets separate offsets for each channel, and the code used to modulate the mouths is very similar to that used in the previous project. Once set up, the Sonic Pi program can be run several times, leaving the Python program running continuously.

I have enjoyed this project, which has brought together Sonic Pi, Ruby and Python in an interesting way, not to mention recording and processing the samples with Audacity, and I hope you enjoy the video of the final system. I hope it may be possible to write up the system more fully in the future, but it will be quite a big job to do so.

You can see the video here