LightSeq is a stand-alone instrument and performance system for live electronic music. It was inspired by Lightefface, a project that translates light into a control signal for sound. LightSeq goes further: it also generates sound and has a built-in light sequencer, which influences the resulting sound (and looks cool) and is controlled by the same software that generates the audio.


What have you made?

I have made a stand-alone instrument and sequencer for live electronic music performance. 
LightSeq consists of three main parts.

First, there is the main board with a Raspberry Pi computer running Linux; this is the 'brain' of the whole system. The Raspberry Pi runs Pure Data, which is responsible for communicating with the Arduinos and for generating and manipulating sound. On the main board there are 26 potentiometers, 18 buttons, 3 rotary switches and 3 regular switches. Their values are read by an Arduino and sent to Pure Data. The controls are divided into three sections: audio mixer and master, sequencer control, and assignable controllers for sound manipulation. You can think of this main box as a computer + sound card + MIDI controller (made with Arduino) in one box, dedicated to live music performance. There is one big difference though: there is no screen, which forces the user to memorize which parameter each control affects and makes it more of an instrument than a regular laptop. This part alone could work as a stand-alone electronic music instrument, but there are two other parts that make it more unique.

Apart from the regular controls (knobs, buttons and switches) on the main box, there is a second box that contains 32 light sensors. Their values are read by an Arduino board and also sent to Pure Data, where they are used for generating and manipulating the sound.

Finally, we have a sequencer, but it is not your regular sequencer: it is a light sequencer. It consists of 16 RGB LEDs that are controlled by Pure Data (via Arduino), which in turn influence the light sensor values and thus the resulting music. The sequencer can run very fast (the update time is less than 10 ms), each LED can generate any color, and there are countless ways in which it can be programmed. At the moment there are five different sequencer modes (regular, random, polyphonic, manual and out-of-phase) but new ones can be added easily. Each light is controlled individually, so any number of LEDs can be on at any given time.
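As a rough illustration of how the sequencer's regular and random modes could advance through their steps, here is a toy model in Python. The actual logic lives in the Pure Data patches, so the class and method names below are my own hypothetical sketch, not the real implementation:

```python
import random

class LightSeq:
    """Toy model of the 16-step light sequencer (names are hypothetical)."""

    NUM_LEDS = 16

    def __init__(self, start=0, end=15, mode="regular"):
        # start/end define the active range of steps, mirroring the
        # "starting and ending position" controls on the main box
        self.start, self.end, self.mode = start, end, mode
        self.step = start

    def advance(self):
        """Return the LED index to light on the next step."""
        if self.mode == "random":
            self.step = random.randint(self.start, self.end)
        else:  # regular mode: walk from start to end, then wrap around
            self.step = self.start if self.step >= self.end else self.step + 1
        return self.step

seq = LightSeq(start=0, end=3)
print([seq.advance() for _ in range(6)])  # → [1, 2, 3, 0, 1, 2]
```

A polyphonic mode would simply track several such step positions at once, since each LED is addressed individually.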

What gave you the initial inspiration?

LightSeq was strongly influenced and inspired by my previous project, Lightefface: an Arduino-based interface that uses light sensors to translate the amount of light into a control signal, which is used for sound control and manipulation in a live performance.
After developing Lightefface, I played with it a lot and noticed that all the light sources I used were irregular, out of sync, or simply on all the time. This resulted in irregular, out-of-sync or sustained sound. I don't think that's a bad thing (I like irregular rhythms), but I thought it would be nice to open up the possibility of sequencing events in a more structured way. This is how I came up with the idea and then developed a prototype of LightSeq. It was just a simple LED sequencer built with an Arduino board; it didn't generate any sound. It had 12 steps, various modes of operation, one- to six-voice polyphony, an adjustable amount of randomness and other features. I liked the result and decided to develop it further. In the meantime, I learned about the Raspberry Pi and its possibilities, and I decided to use it for the project. That also allowed me to add sound to the whole system, making it the stand-alone system for live electronic music that it is now.

What is the original idea behind this project?

There are two main ideas. First of all, I always wanted to get rid of the laptop in my live performances; I would usually hide it under the table when I used one. I am really allergic to so-called laptop performances, during which a performer sits still for 45 minutes in front of a laptop screen with his face illuminated by the bluish computer light. Even if the music is amazing, in my opinion there is so much lacking from this kind of performance. In my project the laptop is completely gone, so the performer is freed from the screen. But with this freedom comes responsibility: he has to know what he is doing and what is happening in the system. This aspect brings it closer to the traditional understanding of an instrument, which has to be practiced in order to be mastered.

The other idea is one I already explored in my previous project, Lightefface: the possibility of translating the amount of light into a control signal, which can then be used for generating and manipulating sound in a live performance. More generally, it is about exploring the relation between the sonic and visual domains.

How does it work?

First, in the audio section of the main box you choose one of the five available sound patches in order to get some sound; you switch a patch on simply by bringing up its level. Each sound patch has parameters that you can control using the assignable controls. For example, in the first patch you can control the length of the decay, the amount of the second oscillator and its detuning, as well as the threshold value for the sensors.

Once you have a sound going, you can start experimenting with the light sequencer controls. For this to work, you have to place some of the LED blocks on the sensor board. Then you can explore the various modes of the sequencer: make it run in a random or regular fashion; change the number of steps, the starting and ending positions, and the number of lights that are on simultaneously; switch to manual mode, in which you advance to the next step by hand; or switch to a mode where all the lights are out of sync by a very short time period, creating interesting patterns over time. All these actions change the sound in reaction to the lights, and this is where it gets really interesting. You can, of course, combine several sound patches and see how they work with the settings of the light sequencer. Finally, you can move the physical light blocks around. This changes the resulting sound, but the light sequence (if it is in regular mode) stays the same. For more advanced users there is the possibility of programming and uploading your own sounds or new ways of controlling the sequencer.
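To give an idea of how a sensor threshold like the one in the first patch could work, here is a small Python sketch. The function name, the 10-bit sensor range and the scaling are my illustrative assumptions; the real mapping is done inside the Pure Data patch:

```python
def sensor_to_gate(raw, threshold):
    """Map a 10-bit light sensor reading (0-1023) to a trigger (gate)
    plus a normalized control level; a hypothetical sketch of the mapping."""
    if raw < threshold:
        return 0, 0.0                      # below threshold: no sound
    span = 1023 - threshold
    level = (raw - threshold) / span       # 0.0 .. 1.0 above threshold
    return 1, round(level, 3)

print(sensor_to_gate(900, 300))  # → (1, 0.83)
print(sensor_to_gate(100, 300))  # → (0, 0.0)
```

Raising the threshold with the assignable control would then make the patch respond only to the brighter LED colors, which is one way the light sequence can shape the rhythm of the sound.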

How long did it take to make it real?

Overall it probably took about six months to complete, but the work was spread over a one-year period.

How did you build it?

The two boxes are made of wood with laser-cut plexiglass panels. The LED light holders were 3D printed and laser cut. The rest are quite standard components: potentiometers, buttons, switches, light sensors, RGB LEDs (NeoPixels) and a lot of soldering. I used a Raspberry Pi and two Arduino boards, and I needed to add some multiplexers to expand the analog inputs on the Arduinos.
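Multiplexers work by routing one of several inputs to a single analog pin, selected with binary address lines. The helper below shows the bit math for a 16-channel part (as used when reading many sensors through few pins); the exact chip and pin wiring in LightSeq are not specified here, so treat this as a generic sketch:

```python
def mux_select_bits(channel, num_select_lines=4):
    """Return the HIGH/LOW states of the select pins (S0..S3) needed
    to route the given channel of a 16-channel analog multiplexer."""
    return [(channel >> bit) & 1 for bit in range(num_select_lines)]

print(mux_select_bits(5))   # → [1, 0, 1, 0]  (S0=1, S1=0, S2=1, S3=0)
print(mux_select_bits(15))  # → [1, 1, 1, 1]
```

On the Arduino side, the same bits would be written to the select pins with digitalWrite before each analogRead, letting one analog input scan all 16 channels in turn.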