Sonic-Pi controlled conversation between two McRoboFaces


Following on from my previous project with a single McRoboFace, 4Tronix have kindly supplied me with a second face so that I could develop the idea and control two McRoboFaces from Sonic Pi. I have amended the previous project to feed the left and right audio channels to two separate ADC inputs on the Picon Zero board, and daisy-chained the two McRoboFaces (you merely connect the Dout pin of the first to the Din pin of the second); the LEDs on the second McRoboFace are then addressed with an offset of 17. I have added routines to the Python driver program to control each face separately. Each mouth can be set to a fixed position (closed, open, smile or sad), or fed from the audio input via the ADC so that it opens whenever the signal exceeds a preset threshold.
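To make the daisy-chain addressing and the threshold idea concrete, here is a minimal Python sketch. The pixel groups for the mouth shapes, the ADC scale and the functions set_led() and read_adc() are all illustrative stand-ins, not the real McRoboFace or Picon Zero API:

    # Illustrative sketch only: pixel groups, ADC scale and the functions
    # set_led() / read_adc() are stand-ins, not the real library calls.

    FACE2_OFFSET = 17        # the second face's LEDs follow the first in the chain
    MOUTH_THRESHOLD = 100    # ADC level above which the mouth opens (assumed scale)

    def set_led(index, r, g, b):
        # Stand-in for the McRoboFace pixel-write call.
        print("pixel %d -> (%d, %d, %d)" % (index, r, g, b))

    def read_adc(channel):
        # Stand-in for the Picon Zero analogue read (one channel per face).
        return 0

    def set_mouth(face, shape, colour=(255, 0, 0)):
        # Hypothetical pixel groups for the four mouth shapes on one face.
        shapes = {
            "closed": [4, 5, 6],
            "open":   [3, 4, 5, 6, 7, 8],
            "smile":  [2, 4, 5, 6, 8],
            "sad":    [1, 4, 5, 6, 9],
        }
        offset = FACE2_OFFSET if face == 2 else 0
        for pixel in shapes[shape]:
            set_led(pixel + offset, *colour)

    def update_face(face, adc_channel):
        # Open the mouth only while the audio signal exceeds the threshold.
        level = read_adc(adc_channel)
        set_mouth(face, "open" if level > MOUTH_THRESHOLD else "closed")

    update_face(1, 0)    # face 1 driven by the left channel on ADC 0
    update_face(2, 1)    # face 2 driven by the right channel on ADC 1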

In order to provide greater control, and to synchronise it with the audio from Sonic Pi, I have added Ruby routines to the Sonic Pi program which send text strings to the Python program via a text file. These strings can set the mouth state for each face and alter the colours of the LEDs. Because there is only a common brightness setting for both faces (using PWM), if only one face is receiving audio I use that output to control the brightness of both faces; if both faces are set to receive audio, the brightness is set to a fixed value.
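The shared-brightness rule can be expressed as a small Python sketch. The mode name "audio" and the fixed PWM level are assumptions made for illustration; the real control-string format is not shown here:

    # Sketch of the shared-brightness rule: one PWM brightness is common
    # to both faces. Mode names and FIXED_BRIGHTNESS are assumed values.

    FIXED_BRIGHTNESS = 40    # assumed PWM level used when both faces take audio

    def shared_brightness(mode1, mode2, audio_level):
        face1_audio = (mode1 == "audio")
        face2_audio = (mode2 == "audio")
        if face1_audio and face2_audio:
            return FIXED_BRIGHTNESS          # both on audio: fixed value
        if face1_audio or face2_audio:
            return min(255, audio_level)     # one on audio: follow that signal
        return FIXED_BRIGHTNESS              # neither on audio: fixed (assumed)

    print(shared_brightness("audio", "smile", 180))   # -> 180
    print(shared_brightness("audio", "audio", 180))   # -> 40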

The conversation is entirely controlled from Sonic Pi. It plays the audio for each face via a series of pre-recorded samples, giving each face its own audio channel by setting the pan: value to -1 or 1. Before each sample is played, control signals are sent via the text file to set up the required state for each face. At the end of the presentation both faces receive audio input together as they “sing” along to a round of Frère Jacques. Finally, a control signal reduces the brightness to zero, effectively switching off all the LEDs.

Writing and reading the data via a text file is perhaps not the most elegant way to do things, but it works reliably. I used a technique I developed previously when reading in large numbers of sample files: the Sonic Pi program is “held up” with a cue and sync while the write completes; otherwise you can run into “too far behind” errors. On the receiving side, at the start of its main loop the Python program polls for the existence of the text file; if it finds one, it reads the data, deletes the file, and then alters its parameters according to the received data. It took quite a lot of experimentation to get the timing and consistent operation of the two programs right, but having done so, the final system is quite stable. I boost the audio levels to amp: 4 in Sonic Pi, which gives a good signal for the ADC inputs to latch on to.
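The receiving side might look something like the following sketch, assuming a control file called control.txt (the real file name, string format and polling interval may differ):

    # Minimal sketch of the receiving side: poll for the control file,
    # consume it, and hand the contents on for parsing.

    import os
    import time

    CONTROL_FILE = "control.txt"

    def poll_for_command():
        # Consume the file if Sonic Pi has written one since the last pass.
        if not os.path.exists(CONTROL_FILE):
            return None
        with open(CONTROL_FILE) as f:
            data = f.read().strip()
        os.remove(CONTROL_FILE)    # delete so the next write is seen as new
        return data

    while True:
        command = poll_for_command()
        if command:
            print("received:", command)   # the real program alters its parameters here
        time.sleep(0.05)                  # polling interval is an assumption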

Setup is fairly straightforward. The calibrate button from the single-face project is used again, and sets separate offsets for each channel, and the code that modulates the mouths is very similar to that used in the previous project. Once calibrated, the Sonic Pi program can be run several times while the Python program is left running continuously.
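For the calibration step, something along these lines would compute a zero offset per channel by averaging a run of quiet readings; read_adc() is again a hypothetical stand-in for the Picon Zero analogue read:

    # Sketch of per-channel calibration: average quiet ADC readings and
    # store the result as that channel's zero offset.

    def calibrate(read_adc, samples=50):
        offsets = []
        for channel in (0, 1):    # left and right audio channels
            total = sum(read_adc(channel) for _ in range(samples))
            offsets.append(total / samples)
        return offsets

    offsets = calibrate(lambda channel: 0)    # dummy reader for illustration
    print(offsets)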

I have enjoyed this project, which has brought together Sonic Pi, Ruby and Python in an interesting way, not to mention recording and processing the samples with Audacity, and I hope you enjoy the video of the final system. I hope it may be possible to write up the system more fully in the future, but it will be quite a big job to do so.

You can see the video here

 
