A couple of weeks ago I started coding a tool in Processing for live visual performance using real-time video effects, drawing, animation and computer vision techniques. The name of this tool is ANDIAMO, a Spanish acronym for ANimador DIgital Analógico MOdular (Digital-Analog Modular Animator).
From a technical point of view, one of the goals of ANDIAMO is to use the Processing libraries that I have developed so far (GLGraphics, GSVideo, GPUKLT, etc.) to show that real-time video and GPU effects can be handled within Processing. I also intend to use it as a prototype platform to test some interface ideas in the context of live performance, and to explore ways to enhance visual improvisation using a graphics tablet as the main gestural input device.
I haven’t released any installation package yet, but the source code is available here.
After having worked with live performance tools that are operated mainly through the keyboard, as well as with GUI-heavy programs like Modul8, I'm trying to find a direction orthogonal to these extremes. The concept is to have a custom graphic interface that is invisible most of the time, so it doesn't distract from the live performance by occupying the screen with unneeded visual elements. The interface elements appear only when certain gestures are made to access specific options and parameters. This interface makes extensive use of the graphics tablet, and at this point it is not fully functional with a regular mouse. The use of the keyboard is minimized, with only the most frequent operations accessible through keyboard shortcuts. Below there are some screenshots of the current interface (version 012):
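The gesture-revealed interface idea can be sketched with a small state machine. This is a hypothetical illustration, not the actual ANDIAMO code: a widget stays hidden until a press-and-hold gesture from the stylus exceeds a time threshold, and disappears again on release. The class name and the 400 ms threshold are assumptions for the example.

```java
// Hypothetical sketch of a gesture-revealed interface element: hidden by
// default, shown only after a press-and-hold gesture, hidden again on release.
public class GestureRevealedWidget {
    private static final long HOLD_MS = 400; // assumed hold threshold
    private long pressStart = -1;            // -1 means the stylus is up
    private boolean visible = false;

    public void press(long nowMs)   { pressStart = nowMs; }

    public void release(long nowMs) { pressStart = -1; visible = false; }

    // Called every frame; reveals the widget once the hold is long enough.
    public void update(long nowMs) {
        if (pressStart >= 0 && nowMs - pressStart >= HOLD_MS) {
            visible = true;
        }
    }

    public boolean isVisible() { return visible; }
}
```

In a Processing sketch, `press`/`release` would be driven by the stylus events and `update` called from `draw()`, with the widget only rendered while `isVisible()` returns true.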
Live drawing layer
Shapes layer with shapes attached to points tracked on the video
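The shapes layer can be illustrated with a minimal sketch, again hypothetical rather than taken from the ANDIAMO source: each shape records the id of a feature point tracked on the video, and follows that point's position whenever the tracker reports an update. All names here are assumptions for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a shapes layer whose shapes are attached to
// points tracked on the video by a feature tracker.
public class ShapesLayer {
    static class Shape {
        final int trackId; // id of the tracked video feature this shape follows
        float x, y;
        Shape(int trackId) { this.trackId = trackId; }
    }

    private final List<Shape> shapes = new ArrayList<>();

    // Attach a new shape to the tracked point with the given id.
    public Shape attach(int trackId) {
        Shape s = new Shape(trackId);
        shapes.add(s);
        return s;
    }

    // Called when the tracker reports a new position for a feature point:
    // every shape attached to that point moves with it.
    public void updatePoint(int trackId, float x, float y) {
        for (Shape s : shapes) {
            if (s.trackId == trackId) {
                s.x = x;
                s.y = y;
            }
        }
    }
}
```

In the real tool the positions would come from the GPU-based KLT tracker each frame; here the tracker output is simply fed in through `updatePoint`.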