My colleague Michael Raschke and I extended the Robopix painting robot so that it can paint pictures based on a participant's thoughts.
The Brain-Computer Interface
We used the cognitive and affective suites of an Emotiv EPOC brain-computer interface. The affective suite reads a user's emotional state, e.g. excitement, engagement, meditation, and frustration. These values can be read without user-specific training, because they are detected in the same way for every user. The cognitive suite, in contrast, recognizes previously trained thoughts. Since every person's brain is folded differently, the signals read from the brain differ from user to user. This is why every participant first has to train actions before the system is able to recognize them. Each participant also has to record a neutral state first, so that the thoughts to be recognized can be distinguished from the 'noise'. The Emotiv EPOC does the signal processing needed to recognize previously trained actions and their intensity. Finally, we wrote software to train the cognitive suite outside the Emotiv demo application and to transfer the data to any other piece of software running on a machine in the same LAN as the hosting machine. To achieve that, we used the EIToolkit by Paul Holleis. The EIToolkit encodes the data coming from the Emotiv EPOC into UDP packets, which can be unpacked on any machine in the same network.
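To make the data flow concrete, here is a minimal sketch of a receiver on the painting machine. It assumes the forwarded values arrive as small text datagrams of the form "<signal_name>:<value>" on a fixed port; both the payload format and the port number are illustrative assumptions, not the EIToolkit's actual wire format.

```python
import socket

LISTEN_PORT = 5000  # hypothetical port configured on the sending side

def listen_for_emotiv_values():
    """Yield (signal_name, value) pairs forwarded over the LAN via UDP."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", LISTEN_PORT))           # accept datagrams from any host in the LAN
    while True:
        data, _addr = sock.recvfrom(1024)  # one datagram per value update
        name, value = data.decode("utf-8").split(":", 1)
        yield name, float(value)           # e.g. ("excitement", 0.42)

if __name__ == "__main__":
    for name, value in listen_for_emotiv_values():
        print(f"{name} = {value:.2f}")
```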
How the thoughts lead to a painting
Now that the program has access to the values coming from the Emotiv EPOC, our algorithm takes the strength of the cognitive signal and the participant's (short-term) excitement. The robot has 4 degrees of freedom, which can be controlled through our software, and we mapped the cognitive signal onto them: the stronger the cognitive signal, the faster the robot moves. The excitement also enters the equation for calculating the robot's movement. When the participant is excited, the robot changes the angle of its arm faster, which leads to more scattered lines in the resulting painting. In our prototype the paint has to be put into the robot manually.
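The sketch below illustrates this mapping under some simplifying assumptions: both signals are taken to be normalized to [0, 1], the scaling constants are made up, and the same speed is applied to all four axes. The actual controller and the robot's command interface are not shown.

```python
BASE_SPEED = 10.0      # assumed base joint speed in degrees per second
EXCITEMENT_GAIN = 2.0  # assumed factor by which excitement speeds up angle changes

def joint_velocities(cognitive_strength, excitement, num_joints=4):
    """Map the trained cognitive signal and short-term excitement to
    velocities for the robot's four degrees of freedom."""
    # Stronger cognitive signal -> faster overall movement.
    speed = BASE_SPEED * cognitive_strength
    # Higher excitement -> the arm angle changes faster, which produces
    # the more scattered lines in the resulting painting.
    speed *= 1.0 + EXCITEMENT_GAIN * excitement
    return [speed] * num_joints

print(joint_velocities(cognitive_strength=0.8, excitement=0.3))
```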
Demonstration of the robot at Augmented Human 2013
We presented our system during the demo session of the Augmented Human 2013 conference in Stuttgart. In total, eight participants had the chance to paint with their thoughts. The pictures included in this post were taken during the demo. They show the robot during the painting process and the result after the painting was finished.
We conducted our demo as follows: First, the participant was equipped with the Emotiv EPOC, and we made sure that every sensor had very good contact with the skull. Then, we taught the system what the participant's thoughts look like in a neutral state. For this, we asked them to relax, to act naturally, and not to think about anything in particular. The system records this neutral state in 2 chunks of 8 seconds each. After that, we asked the participants to think about shaking the robot's arm from left to right with their hands. We observed that participants can trigger cognitive actions more accurately when we tell them to think about moving a part of their body along with the robot (e.g. their arm or foot). These actions were also trained in 2 chunks of 8 seconds each. Once the robot could recognize the participant's thoughts, we were ready to start the painting process. We closed the robot's glass cage to prevent the participants from being splashed with paint. The participant then had to think about shaking the robot from left to right again in order to move the robot's arm. As soon as the participant started to move the arm, the paint was put into the robot's arm, which distributed it on the canvas.
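For clarity, the training part of this procedure can be summarized as in the sketch below. The `FakeBCI` class and its method names are purely illustrative stand-ins for the real headset interface; they are not the Emotiv SDK.

```python
import time

CHUNK_SECONDS = 8        # each recording chunk lasts 8 seconds
CHUNKS_PER_RECORDING = 2 # neutral state and each action are recorded twice

class FakeBCI:
    """Stand-in for the real headset interface, for illustration only."""
    def record_neutral(self, duration):
        print(f"recording neutral state for {duration}s")
        time.sleep(duration)
    def record_action(self, name, duration):
        print(f"recording action '{name}' for {duration}s")
        time.sleep(duration)

def train_participant(bci):
    # 1. Neutral state: the participant relaxes and acts naturally.
    for _ in range(CHUNKS_PER_RECORDING):
        bci.record_neutral(duration=CHUNK_SECONDS)
    # 2. Cognitive action: the participant imagines shaking the robot's arm
    #    from left to right, moving a part of their own body along with it.
    for _ in range(CHUNKS_PER_RECORDING):
        bci.record_action("shake_left_right", duration=CHUNK_SECONDS)

train_participant(FakeBCI())
```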
The "painting with thoughts" setup evolved from the "10 Sekunden Kunst" project, in which up to 4 participants control the robot's arm using up to 4 smartphones. The data from each smartphone's gyroscope is mapped onto the 4 axes of the robot's arm. Because the robot is connected to the smartphone app via the Internet, the participants don't have to be near the robot to interact with it; they can watch the creation of their painting via an Internet live stream.
Additional media:
- YouTube teaser: http://www.youtube.com/watch?v=ym9oHXDWW1k
- YouTube: The very first painting: http://www.youtube.com/watch?v=8moBGXcUd7M
- Article about the project in the Stuttgarter Zeitung (German version): here