Shaders in Processing 2.0 – Part 3

This is the last part of a series of posts about the new shader architecture in Processing 2.0. This post focuses on how to integrate low-level OpenGL calls with the standard Processing API. This kind of integration has been possible since the early releases of the 1.0 branch, and it allowed users to implement advanced rendering functionality not available in Processing by calling OpenGL functions directly. The main drawback of the GL integration in Processing 1.x is that it makes sketches incompatible with regular Processing code (other 3D renderers, for example) and harder for many users to understand. Although the latter will continue to be a problem as long as OpenGL calls are explicitly included in Processing sketches, the compatibility issue is addressed in Processing 2.0 now that OpenGL is much more deeply integrated with the P2D and P3D renderers.
Update: With the release of Processing 2.0 final, some of the contents of this post are outdated; please check this tutorial for a detailed description of the finalized shader API.

In Processing 2.0, a subset of the OpenGL API is available through an object of type PGL. At the time of this writing, this subset consists of the functions from the OpenGL ES 2.0 spec that are used internally by the P2D and P3D renderers, plus a few additional functions, like glBlitFramebuffer and glReadBuffer, that are part of the OpenGL 2 API on the desktop. The idea at this point is to progressively expand the OpenGL API exposed by PGL, at least until the entire GL ES 2.0 API is covered.

As is the case in Processing 1.x, the OpenGL calls in 2.0 should happen between beginGL() and endGL(). beginGL() returns the PGL object that can then be used to access the OpenGL functions:

void draw() {
  // Cast the current graphics object to PGraphicsOpenGL to access beginGL()/endGL()
  PGraphicsOpenGL pg = (PGraphicsOpenGL)g;
  PGL pgl = pg.beginGL();
  pgl.glClear(PGL.GL_COLOR_BUFFER_BIT);
  ...
  pg.endGL();
}

The GL constants, such as GL_COLOR_BUFFER_BIT in this example, are static fields of the PGL class, so they are accessed statically by referencing the class name, as shown above.
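Other GL state can be set in the same way. The following lines are just a minimal illustration (assuming they run between beginGL() and endGL(), as in the example above) of enabling depth testing and alpha blending through PGL:

pgl.glEnable(PGL.GL_DEPTH_TEST);
pgl.glEnable(PGL.GL_BLEND);
pgl.glBlendFunc(PGL.GL_SRC_ALPHA, PGL.GL_ONE_MINUS_SRC_ALPHA);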

As mentioned in the two previous posts, Processing 2.0 uses GLSL shaders to handle all rendering operations. Although we can use the GL API exposed by PGL to create, compile and bind our own shaders if we need to, in many situations the default shaders included with Processing are enough. If this is the case, we can just retrieve the PShader object we need and use it to render our custom geometry. Since there are several types of shaders (FLAT, LIT, TEXTURED, FULL, LINE and POINT), as explained in the first post, we need to indicate which type we want to retrieve from the renderer. This is done with the getShader(type) function. Once we get a hold of the shader we want, we can bind it and then draw our geometry through the low-level GL calls:

void draw() {
  PShader flatShader = getShader(PShader.FLAT);
  PGraphicsOpenGL pg = (PGraphicsOpenGL)g;
  PGL pgl = pg.beginGL();
  flatShader.bind();
  // render flat (unlit, untextured) geometry only
  ...
  flatShader.unbind();
  pg.endGL();
}

The PShader object has a public field called glProgram, which is the ID that OpenGL uses to identify the shader program encapsulated by the object. This ID is needed to retrieve information from the shader, such as the location of its uniform and attribute variables.
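For instance, a uniform defined in a custom shader could be queried and set directly through PGL, as in the following minimal sketch. The uniform name "myCustomValue" is just a placeholder, not a uniform of the default shaders; substitute the actual name from your shader source:

int loc = pgl.glGetUniformLocation(flatShader.glProgram, "myCustomValue");
if (loc != -1) {
  // glGetUniformLocation only needs the program ID, but setting the value
  // requires the shader to be currently bound (i.e. after flatShader.bind())
  pgl.glUniform1f(loc, 0.5f);
}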

Given the higher level of integration between OpenGL and the new P2D and P3D renderers, information such as the geometric transformations and lighting setup applied in the drawing before calling beginGL() is passed to the shader and used for rendering, unless it is explicitly overridden by the user. The example sketch OpenGL/Shaders/LowLevelGL shows how to use the GL integration in Processing 2.0 to draw a simple primitive:

import java.nio.FloatBuffer;

PGraphicsOpenGL pg;
PGL pgl;

PShader flatShader;

int vertLoc;
int colorLoc;

float[] vertices;
float[] colors;

FloatBuffer vertData;
FloatBuffer colorData;

void setup() {
  size(400, 400, P3D);

  pg = (PGraphicsOpenGL)g;

  // Get the default shader that Processing uses to
  // render flat geometry (w/out textures and lights).
  flatShader = getShader(PShader.FLAT);

  vertices = new float[12];
  vertData = PGL.allocateDirectFloatBuffer(12);

  colors = new float[12];
  colorData = PGL.allocateDirectFloatBuffer(12);
}

void draw() {
  background(0);

  // The geometric transformations will be automatically passed 
  // to the shader.
  rotate(frameCount * 0.01f, width, height, 0);

  // Update the geometry held in the buffer objects
  ...  

  pgl = pg.beginGL(); 
  flatShader.bind();

  vertLoc = pgl.glGetAttribLocation(flatShader.glProgram, "inVertex");
  colorLoc = pgl.glGetAttribLocation(flatShader.glProgram, "inColor");

  pgl.glEnableVertexAttribArray(vertLoc);
  pgl.glEnableVertexAttribArray(colorLoc);

  pgl.glVertexAttribPointer(vertLoc, 4, PGL.GL_FLOAT, false, 0, vertData);
  pgl.glVertexAttribPointer(colorLoc, 4, PGL.GL_FLOAT, false, 0, colorData);

  pgl.glDrawArrays(PGL.GL_TRIANGLES, 0, 3);

  pgl.glDisableVertexAttribArray(vertLoc);
  pgl.glDisableVertexAttribArray(colorLoc);

  flatShader.unbind();  
  pg.endGL();
}

allocateDirectFloatBuffer is just a convenience function in PGL to create the direct buffers needed to pass vertex data to the shader.
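The geometry-update step elided in draw() above could look something like the sketch below. This is a hypothetical example, not the exact code from the LowLevelGL sketch: it defines the positions and colors of a single triangle in the float arrays and then copies them into the direct buffers.

void updateGeometry() {
  // Vertex 1: position (x, y, z, w) and color (r, g, b, a)
  vertices[0] = -100; vertices[1] = -100; vertices[2] = 0; vertices[3] = 1;
  colors[0] = 1; colors[1] = 0; colors[2] = 0; colors[3] = 1;
  // Vertex 2
  vertices[4] = 100; vertices[5] = -100; vertices[6] = 0; vertices[7] = 1;
  colors[4] = 0; colors[5] = 1; colors[6] = 0; colors[7] = 1;
  // Vertex 3
  vertices[8] = 0; vertices[9] = 100; vertices[10] = 0; vertices[11] = 1;
  colors[8] = 0; colors[9] = 0; colors[10] = 1; colors[11] = 1;

  // Copy the arrays into the direct buffers and reset their positions so
  // that glVertexAttribPointer reads them from the beginning
  vertData.rewind();
  vertData.put(vertices);
  vertData.position(0);

  colorData.rewind();
  colorData.put(colors);
  colorData.position(0);
}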

Posted August 3, 2012 by ac in Programming
