The v20 release of Andiamo, the software tool for live drawing and audiovisual performance that I have been developing over the last few months at the Design|Media Arts department at UCLA, has reached a level of usability and stability that justifies the publication of a first user reference manual. The manual will be spread over the following entries in this blog. This first part deals with general aspects of the software (concepts, architecture, etc.).
Aims of the software
Andiamo was created as a tool for performing live animation through techniques such as rotoscoping, live drawing and cel animation, using a graphics tablet as the main input device. The animations can be combined with, and synchronized to, video captured from a live camera or read from files stored on disk. Andiamo allows for live mixing, looping and montage of multiple layers of video and animation material. Furthermore, the resulting compositions can be processed in real time with FX image filters (motion blur, bloom, edge detection, cel shading, etc.). These filters are accelerated by the video card’s graphics processing unit (GPU) in order to achieve the frame rates required for live performance.
Andiamo doesn’t try to be a general tool for live audiovisual performance (like Modul8, ArKaos or Max/Jitter); its main goal is rather to focus on the close integration of the three basic elements mentioned above: animation, video and filters. It has a certain degree of modularity and extensibility, though, mainly by allowing the user to incorporate new drawing modes (inherited from the built-in gesture classes) and new image filters written in the OpenGL Shading Language (GLSL). In order to control things outside the scope of Andiamo (sound, for example), an Open Sound Control (OSC) module is available for inter-application communication.
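To give a flavor of the drawing-mode extension mechanism described above, here is a minimal Java sketch of a custom mode created by inheriting from a built-in gesture class. All class and method names here are illustrative assumptions, not Andiamo's actual API.

```java
// Hypothetical sketch of the extension mechanism: a new drawing mode
// is created by subclassing a built-in gesture class. Names are
// illustrative only, not Andiamo's real classes.
abstract class Gesture {
    protected final java.util.List<float[]> points = new java.util.ArrayList<>();

    // Record an input sample from the tablet (x, y, pressure).
    void addPoint(float x, float y, float pressure) {
        points.add(new float[] { x, y, pressure });
    }

    // Each drawing mode decides how its stroke is presented.
    abstract String describe();
}

// A custom drawing mode supplied by the user.
class DottedGesture extends Gesture {
    @Override
    String describe() {
        return "dotted stroke with " + points.size() + " samples";
    }
}

public class GestureDemo {
    public static void main(String[] args) {
        Gesture g = new DottedGesture();
        g.addPoint(0, 0, 0.5f);
        g.addPoint(10, 5, 0.8f);
        System.out.println(g.describe()); // prints: dotted stroke with 2 samples
    }
}
```

The key design point is that the base class owns the input handling (the point samples), so a new mode only has to define how those samples are rendered.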
Video and animation can be tightly combined through the use of “anchor points”. These points are simply two-dimensional elements to which hand-drawn gestures can be attached, so that motions of the anchors translate into motions of the drawings. In particular, Andiamo includes a GPU-accelerated point tracker (KLT-GPU) which follows “features” in a video source. The tracked features are then mapped onto the anchor points, which makes it possible to have gestures responding to motion in the video (either from a live camera or a file).
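The anchor-point idea can be sketched in a few lines: a gesture vertex is stored as an offset relative to an anchor, so when the anchor moves (for instance, driven by a tracked video feature), the drawing follows automatically. The names below are my own illustration, not Andiamo's code.

```java
// Hypothetical sketch of anchor points: gesture vertices are stored
// relative to an anchor, so tracker updates to the anchor move the
// drawing with it. Names are illustrative only.
class Anchor {
    float x, y;
    Anchor(float x, float y) { this.x = x; this.y = y; }
    void moveTo(float nx, float ny) { x = nx; y = ny; } // e.g. a tracker update
}

class AttachedVertex {
    final Anchor anchor;
    final float dx, dy; // offset relative to the anchor

    AttachedVertex(Anchor a, float absX, float absY) {
        anchor = a;
        dx = absX - a.x;
        dy = absY - a.y;
    }

    // The absolute position follows the anchor automatically.
    float x() { return anchor.x + dx; }
    float y() { return anchor.y + dy; }
}

public class AnchorDemo {
    public static void main(String[] args) {
        Anchor feature = new Anchor(100, 100);            // tracked video feature
        AttachedVertex v = new AttachedVertex(feature, 110, 95);
        feature.moveTo(130, 120);                         // feature moved in the video
        System.out.println(v.x() + "," + v.y());          // prints: 140.0,115.0
    }
}
```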
Another goal of Andiamo is to provide an open platform for experimenting with different animation, drawing and video processing techniques and algorithms in the context of live performance. This is the reason why Andiamo is released as Open Source under the Artistic License, and is based on standardized technologies (OpenGL, OSC) and Open Source libraries where possible (GStreamer).
The basic building block in Andiamo is the layer. A layer is an independent 2D surface that can be drawn to the screen and contains a number of dynamic graphic elements. These elements can be video, drawings, FX filters, text, images or shapes. Layers are combined sequentially in a composition pipeline, which is rendered to generate the final visual output. The composition pipeline is entirely dynamic, meaning that layers can be added or removed at runtime. Every layer has some parameters that are common to all layer types, such as transparency and tint color, which are used to blend all the layers in the composition together.
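The layer pipeline described above can be sketched as an ordered list of layer objects, each carrying the common blending parameters, rendered in sequence by the compositor. The class names and the string-based "render" stand-in are assumptions for illustration; the real software draws to OpenGL surfaces.

```java
// Minimal sketch of a dynamic composition pipeline: layers share
// common blending parameters (opacity, tint) and are rendered in
// order. Names are illustrative, not Andiamo's actual classes.
import java.util.ArrayList;
import java.util.List;

abstract class Layer {
    float opacity = 1.0f;   // common to all layer types
    int tint = 0xFFFFFF;    // RGB tint color, also common

    abstract String render();
}

class VideoLayer extends Layer {
    @Override String render() { return "video"; }
}

class DrawingLayer extends Layer {
    @Override String render() { return "drawing"; }
}

public class PipelineDemo {
    public static void main(String[] args) {
        // The pipeline is dynamic: layers can be added or removed at runtime.
        List<Layer> pipeline = new ArrayList<>();
        pipeline.add(new VideoLayer());
        pipeline.add(new DrawingLayer());

        StringBuilder frame = new StringBuilder();
        for (Layer l : pipeline) {
            frame.append(l.render()).append("(a=").append(l.opacity).append(") ");
        }
        System.out.println(frame.toString().trim());
        // prints: video(a=1.0) drawing(a=1.0)
    }
}
```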
Andiamo has a custom graphical user interface that follows three principles: minimality, dynamism and context-awareness. In the context of live performance, the responsiveness of the tool needs to be maximized and the clutter of the interface elements minimized, while keeping a logical workflow that eases live operation. This is the reasoning behind an interface that is minimal in its visual appearance and responds dynamically to the user: when the focus of the input moves to the live drawing area, the interface elements hide automatically in order to save space and reduce visual clutter on the screen. Each layer type has its own interface (menus, buttons, etc.), which is updated accordingly as the user moves between the different layers in the composition.
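The two context-aware behaviors described above (per-layer widget sets, and auto-hiding while drawing) amount to a simple rule. This is purely an illustrative sketch of that rule, with made-up widget names, not Andiamo's actual GUI code.

```java
// Illustrative sketch of the context-aware interface: the visible
// widget set depends on the selected layer type, and everything
// hides while input focus is in the drawing area.
public class UiDemo {
    static String widgetsFor(String layerType, boolean drawingFocused) {
        if (drawingFocused) return "";            // auto-hide while drawing
        switch (layerType) {
            case "video":   return "[play][loop][source]";
            case "drawing": return "[brush][color][clear]";
            default:        return "[opacity][tint]"; // parameters common to all layers
        }
    }

    public static void main(String[] args) {
        System.out.println(widgetsFor("video", false)); // prints: [play][loop][source]
        System.out.println(widgetsFor("video", true));  // prints an empty line (hidden)
    }
}
```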