Andiamo reference manual – Part 3

The composition in Andiamo consists of a stack of layers, with each layer drawn on top of the previous one. The initial configuration of layers is specified in an XML file (layers.xml) stored in the data folder of the sketch. At this point (version 021 of Andiamo) there is some flexibility in the layer arrangement, since filter layers can be added or removed while Andiamo is running.

The format of the layer configuration file is as follows:

    <video tracked="yes">layers/camera.xml</video>

where the type of each layer is specified by the name of its XML tag: video, drawing, osc or text.
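As an illustration, a layers.xml combining several layer types might look like the following (the drawing, osc and text file names are hypothetical, shown only to illustrate the format; layers/camera.xml comes from the sample above):

    <video tracked="yes">layers/camera.xml</video>
    <drawing>layers/drawing.xml</drawing>
    <osc>layers/osc.xml</osc>
    <text>layers/text.xml</text>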

Andiamo’s video layer

The video layer is basically a two-channel video mixer, which can show movies, live video captured with a camera, or the output of any GStreamer pipeline. A video layer can also be used to store new clips rendered in real time during the execution of Andiamo, which then become available like any other video file. The configuration of a specific video layer is contained in the file whose name is given as the content of the video tag in the layers.xml file. For instance, in the sample layers.xml given above, the configuration of the first video layer is stored in data/layers/camera.xml:

    <recording name="liverec" fps="0.8" resolution="320x240" layer="final" codec="theora" quality="medium"></recording>
    <camera>Sony Visual Communication Camera VGP-VCC7</camera>
    <pipeline>ksvideosrc ! decodebin ! ffmpegcolorspace ! video/x-raw-rgb, bpp=32, depth=24</pipeline>

This sample shows the main tags recognized by the video layer (with the exception of the <loop> tag, which will be discussed in the next part of the manual in the context of the drawing layer):

  • <movie> tag is used to specify a video file (the given path is assumed to be relative to the sketch data folder).
  • <camera> tag is used to specify a video capture device available on the system.
  • <pipeline> tag allows entering an entire GStreamer pipeline.
  • <recording> tag contains the parameters for the real-time movie rendering mode.
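The sample configuration above does not happen to include a <movie> entry; one would look like the following (the file name here is hypothetical):

    <movie>videos/clip1.mov</movie>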

All the different elements (movie, camera and pipeline) are loaded during initialization and shown in the top interface of the video layer:


The “media library” strip allows navigating the different video sources available and selecting specific ones to use in the two channels of the layer. If the number of items exceeds the width of the menu, the list can be scrolled left or right by dragging the pen (just pressing it against the tablet, or dragging with the left button pressed when using a regular mouse). The selected video sources are highlighted with a red border. There is also a mixer on the left edge of the menu, where the numbers 1 and 2 are drawn on top of a gray gradient. Dragging the pen or mouse up or down in that area determines the amount of mixing between the two channels.

The bottom menu in the video layer contains the timelines and play/pause buttons for both channels. It also contains a mixer, which works in the same way as the mixer in the top menu. The mixing also affects the volume of the videos (if they contain audio at all), so that channel 2 is muted when only channel 1 is visible, the volume is 50% for each channel when the mixer is right at the middle, and channel 1 is muted when the mixer is all the way down:
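The volume behavior described above amounts to a linear crossfade between the two channels. A minimal sketch of the idea (not Andiamo's actual code; the function name and the 0–1 mixer range are assumptions):

```python
def channel_volumes(mix):
    """Linear crossfade between two channels.

    mix is assumed to run from 0.0 (only channel 1 visible and audible)
    to 1.0 (only channel 2 visible and audible).
    Returns (volume of channel 1, volume of channel 2).
    """
    mix = max(0.0, min(1.0, mix))  # clamp to the assumed 0-1 range
    return (1.0 - mix, mix)

# Channel 2 muted when only channel 1 is shown:
print(channel_volumes(0.0))  # (1.0, 0.0)
# 50% each when the mixer sits right at the middle:
print(channel_volumes(0.5))  # (0.5, 0.5)
```

The clamp simply guards against mixer positions outside the assumed range; the two returned volumes always sum to 1.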


Real-time recording into a video layer is possible when the <recording> tag has been specified in the configuration file for that layer. Recording starts and stops by hitting the ENTER key; when it stops, a new entry for the newly created video file is added to the media library.

    <recording name="liverec" fps="0.8" resolution="320x240" layer="final" codec="theora" quality="medium"></recording>

The parameters in the <recording> tag control the resolution, codec and quality of the resulting file, among other things. In the example above the parameters are:

  • name: the prefix given to the file name of the recorded video, in this case “liverec”. All the recorded video files are saved into data/videos.
  • fps: this factor is used to compute the target frame rate of the recorded file from the current frame rate of Andiamo. For instance, if Andiamo is running at 40 fps, a value of 0.8 means that the fps of the recorded file will be 40 * 0.8 = 32.
  • resolution: the width and height of the recorded file.
  • layer: indicates which layer in the stack is saved into the recording. The index is zero-based, so “0” represents the first layer, “1” the second, and so on. If the goal is to record the entire composition, then “final” should be used.
  • codec: the video codec for the recorded file. Theora, x264 and xvid are available, but only theora is functional at this point.
  • quality: the quality of the video file, which can take the following values: WORST, LOW, MEDIUM, HIGH or BEST.
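The fps entry above is just a multiplier on Andiamo's current frame rate; the computation can be sketched as follows (the function name is illustrative, not Andiamo's code):

```python
def recording_fps(current_fps, factor):
    """Target frame rate of the recorded file, as described above:
    Andiamo's current frame rate times the fps factor."""
    return current_fps * factor

# The example from the manual: Andiamo running at 40 fps with fps="0.8"
print(recording_fps(40, 0.8))  # 32.0
```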

I have noticed that some pipelines can fail to restart playing after they are paused, so I added a “continuous” parameter, which can be used with any video source (movie, camera or pipeline):

    <pipeline continuous="yes">taa_dsvideosrc ! decodebin ! ffmpegcolorspace ! video/x-raw-rgb, bpp=32, depth=24</pipeline>

This parameter makes the video source play continuously, irrespective of the status of the play/pause button.
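The same attribute can be set on the other source tags as well; for instance, reusing the capture device from the earlier sample:

    <camera continuous="yes">Sony Visual Communication Camera VGP-VCC7</camera>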


Posted June 3, 2009 by ac in Programming
