GLTexture library for Processing

Here is a new version of the opengl texture library for processing (v0.6.5). In fact, I renamed it to gltexture, and also changed the prefix P* to GL* to avoid conflicts with the core classes of processing. I also fixed a minor bug and added an initial version of the documentation (generated from the code with javadoc).

Click on the links below to download the library and source code and to see the documentation (also included in the library download):




As mentioned in previous posts, the idea of this library is to integrate opengl textures and glsl shaders into processing as seamlessly as possible. For this reason the GLTexture object is a descendant of PImage, so all the basic image manipulation functionality is still available in the new object. GLTexture encapsulates an opengl texture, and you can think of it in the same way as the pixels property of PImage: you copy the image to the texture by calling the loadTexture() method. Once the texture is loaded into the GPU, you can apply a GLTextureFilter to it, and do any other opengl-accelerated operation. Then you copy the texture back to the pixels array by calling updateTexture().

This example shows the basic things you can do with the new library:
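A minimal sketch of that basic workflow might look like the following. The import line for the library, the filter xml name and the image filename are assumptions here (check the bundled javadoc for the exact package name); the filter() call matches the snippet shown further down.

```java
// Hypothetical basic-use sketch: load an image into a texture,
// run a GPU filter on it, and draw the result.
import processing.opengl.*;
import processing.xml.*;   // required to parse the filter xml files
import gltexture.*;        // assumed package name; check the javadoc

GLTexture srcTex, destTex;
GLTextureFilter emboss;

void setup() {
  size(640, 480, OPENGL);
  srcTex = new GLTexture(this, "landscape.jpg");    // assumed image file
  destTex = new GLTexture(this, width, height);
  emboss = new GLTextureFilter(this, "emboss.xml"); // assumed filter file
}

void draw() {
  srcTex.filter(emboss, destTex);      // runs entirely on the GPU
  image(destTex, 0, 0, width, height); // works because GLTexture descends from PImage
}
```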


It is also available as an online applet:

BasicUse applet

Note that in this applet the xml package is not imported (I copied xml.jar manually into the applet directory), but in general you have to explicitly import processing.xml into the main program in order to load the filters. This is actually the same issue that affects the candy library.

The filter object, GLTextureFilter, wraps a shader program that is used specifically to apply 2D filters to the texture objects. It also defines a 2D point grid which the source texture is mapped onto. This grid can be manipulated at the vertex stage of the shader program, in order to generate arbitrary transformations on the mesh.

Format of the xml files

The filters are initialized from an xml file that contains three things: the filenames of the vertex and fragment shaders, and the parameters of the grid (resolution and spacing). Below is a sample xml configuration file for a filter (the shader filenames are illustrative):

<filter name="pulsating emboss">
  <description>Emboss with pulsating grid</description>
  <vertex>emboss.vert</vertex>
  <fragment>emboss.frag</fragment>
  <grid>
    <resolution nx="10" ny="10"></resolution>
    <spacing dx="5" dy="5"></spacing>
  </grid>
</filter>

The three important tags are <vertex>, <fragment> and <grid>. In <vertex> and <fragment> you just specify the filename of the vertex and fragment glsl shaders. If no vertex shader is needed, just don’t add a <vertex> tag.

The <grid> tag is used to define a rectangular grid onto which the input texture is applied. The advantage of having a grid instead of just a rectangle is that you can modify the vertices of the grid inside the vertex shader, allowing for interesting distortions of the texture. However, if there is no need for this type of distortion, you can simply skip the <grid> tag.

With <resolution nx="10" ny="10"></resolution> you specify the number of points on the grid, along the x and y directions. In this case, the grid has 10×10 points.

So, if you need neither a custom vertex stage nor a grid, the xml file reduces to something like this:

<filter name="gaussian blur">
  <description>3×3 Gaussian blur filter</description>
  <fragment>gaussianBlur.frag</fragment>
</filter>

Uniform parameters in the shaders of the filter

The glsl shaders themselves have to follow certain conventions in order to be used inside a filter. These conventions are basically naming rules for the uniform parameters inside the shaders, so that the filter can recognize those uniforms and link them to the parameters passed from processing.

The naming conventions are the following:

1) The name of the source/input texture units should be "src_tex_unit" + n, where n = 0, 1, 2, etc. That is, the first texture unit should be named src_tex_unit0, the second src_tex_unit1, and so on.

2) The offset of the source textures is identified by "src_tex_offset" + n, with n = 0, 1, 2, etc. The offset is the inverse of the width and height of the texture, and represents the step size to move from one texel to the next.

At least one src_tex_unit0 uniform should be defined in the fragment shader, since a filter needs at least one source texture to operate on.
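For instance, a fragment shader that follows these naming conventions might look like this. It is a hedged sketch of a 1×3 box blur, written in the GLSL 1.x style in use at the time, not one of the shaders shipped with the library:

```glsl
uniform sampler2D src_tex_unit0; // required: the first (and only) source texture
uniform vec2 src_tex_offset0;    // (1/width, 1/height) of the source texture

void main(void) {
  vec2 tc = gl_TexCoord[0].st;
  // Average the texel with its two horizontal neighbors: a tiny 1x3 box blur.
  vec4 sum = texture2D(src_tex_unit0, tc - vec2(src_tex_offset0.x, 0.0)) +
             texture2D(src_tex_unit0, tc) +
             texture2D(src_tex_unit0, tc + vec2(src_tex_offset0.x, 0.0));
  gl_FragColor = sum / 3.0;
}
```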

Then you can define the following optional uniforms to pass general parameters into the shader:

3) timing_data (of type vec2): the first component of this vector is frameCount and the second is millis() at the time the shader program is executed. This allows timing information to be passed into the shader.

4) par_flt1, par_flt2 and par_flt3 (of type float): these float uniforms can be used to pass float numbers into the shader.

5) par_mat2 (of type mat2), to pass a 2×2 matrix into the shader.

6) par_mat3 (of type mat3), to pass a 3×3 matrix into the shader.

7) par_mat4 (of type mat4), to pass a 4×4 matrix into the shader.

These uniforms don’t need to be defined inside the shader, and the shader can also have other uniforms with different names. However, the advantage of the uniforms listed here is that they are handled automatically by the filter class. To set their values from processing, you use the GLTextureFilterParams object, which contains the float variables parFlt1, parFlt2 and parFlt3 and the float arrays parMat2, parMat3 and parMat4. So, if the glsl shader that defines the filter has the float uniform par_flt1, you can set its value with the following code:

GLTextureFilterParams params = new GLTextureFilterParams();
params.parFlt1 = map(mouseX, 20, 640, 1, 30); // becomes the par_flt1 uniform
tex0.filter(pixelate, tex1, params);          // apply the filter from tex0 into tex1
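To give an idea of how the optional uniforms reach the vertex stage, here is a sketch of a vertex shader that uses timing_data to displace the grid vertices over time. The displacement formula is purely illustrative; it is not the library’s pulsating emboss shader:

```glsl
uniform vec2 timing_data; // x = frameCount, y = millis(), filled in by the filter

void main(void) {
  gl_TexCoord[0] = gl_MultiTexCoord0;
  vec4 v = gl_Vertex;
  // Displace each grid vertex sinusoidally over time: a "pulsating" mesh.
  float t = timing_data.y * 0.005;
  v.xy += 5.0 * sin(t + v.xy * 0.05);
  gl_Position = gl_ModelViewProjectionMatrix * v;
}
```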


Posted March 31, 2008 by ac in Programming


13 responses to “GLTexture library for Processing”


  1. Pingback: realmatik: a little demo of realtime video filters « codeanticode

  2. Ah, just what I was looking for today!

    Your xml format and strict definitions of uniforms are why handling CgFX (and COLLADA-FX, whatever) is a bit problematic for Processing. FX formats tend to be highly general and all over the map of their definition space, while specific applications (like a Processing sketch, or Photoshop, or a game) usually want a very narrow set of predictable inputs for quick performance (e.g. in the game I’m doing now, we use HLSL-FX, but only within a narrow, optimized part of what HLSL allows). That “complexity barrier” between artist intent and fast GPU streaming has yet to be really well breached.

    • (GLTexture is now just included completely in GLGraphics, yes?)

    • Thanks for your take on this problem. On one hand I like some aspects of the xml format I came up with for defining texture filters in glgraphics, but on the other hand I still think it would be better to use some standard format such as Collada-FX or CgFX. Maybe I could do in glgraphics what you are currently doing with HLSL-FX, i.e., use only a subset of the functionality available in these meta formats. Another thing we need in order to make GPU effects more approachable for artists is a visual “shader” editor that could be added as a tool in the Processing interface.

      • I think the XML format is fine, though I’m glad I found this blog entry — I didn’t find any other doc on it, and it’s not directly-linked from the current-GLGraphics blog entry either (maybe it’s there.. I’m just dense that way at times).

        There have been many visual shader editors over the years. They too have (so far) only worked in narrow scopes, like the common Maya/Max “shader balls” which work well within those programs but don’t give you much in the way of freedom in composition of shader types that are not explicitly those kinds of 3D material surfaces.

      • Yes, documentation has always been a big problem with glgraphics. Many features in GLGraphics probably go unnoticed because of the lack of proper docs. The examples provided with the lib try to take care of this issue, but proper tutorials, starting guides, etc. would be very helpful.

      • I’m a bit stumped on how to SAVE the result of a filtered GLTexture to file… .save doesn’t work, .get() to PImage doesn’t work… even after .loadPixels()… I must be missing something?

      • You need to copy the texture to the pixels array of the image, which you do with the updateTexture() method (yes, I know, the name is not very intuitive). So in order to save a textured image you do:
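        Something along these lines (assuming the filtered result is in a GLTexture called tex1; save() is inherited from PImage):

```java
tex1.updateTexture();      // copy the texture on the GPU back into pixels[]
tex1.save("filtered.png"); // PImage.save() then writes the pixels to disk
```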


      • perfect! Now I have a simple little paint program :)

      • Now, how do I set address (wrap) modes for tiling textures?

      • You mean the GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T parameters? These are hard coded to GL_CLAMP at this time. I could add an additional field in GLTextureParameters to allow for other wrapping modes.

      • Yes please! (Surprised I didn’t run into that one when checking Cg samples..)

        I will send you a copy of what I’ve been doing with GLTexture, kind of fun (for me anyway).
