GLGraphics 0.9.2: better integration with Processing camera (and more)

This new release of GLGraphics (0.9.2; update: use this other package if you need a Java5-compatible version) includes several exciting new features and improvements. First, code using GLModels (the class that stores 3D models directly in GPU memory for fast rendering) can now be safely mixed with the default Processing methods for camera and viewport handling. Second, the API of the GLModel class has been expanded with many utility methods for loading data into a model (including the possibility of loading an entire model from an XML file). And finally, a new class called GLModelEffect encapsulates shaders that are applied during the rendering of a GLModel, which allows for effects such as bump mapping, toon shading, fur rendering, etc.

bumpmapscreen

Using a bump-mapping example from Aaron Koblin as the starting point, I encapsulated all the shader functionality needed to implement this type of effect in the GLModelEffect class. This class, combined with the improved compatibility between the GLGraphics renderer and the built-in camera transformation routines in Processing, makes it possible to apply an effect such as bump mapping without any explicit reference to OpenGL in the sketch:

import processing.opengl.*;
import codeanticode.glgraphics.*;

GLModel cube;
GLModelEffect bump;

float angle;

void setup() {
  size(640, 480, GLConstants.GLGRAPHICS);

  cube = new GLModel(this, "cube.xml");
  bump = new GLModelEffect(this, "bump.xml");
}

void draw() {
  float mx = (mouseX - width / 2.0) / (width / 2.0);
  float my = (mouseY - height / 2.0) / (height / 2.0);

  angle += 0.003;

  background(50, 0, 0);

  // When drawing GLModels, the drawing calls need to be encapsulated
  // between beginGL()/endGL() to ensure that the camera configuration
  // is properly set.
  GLGraphics renderer = (GLGraphics)g;
  renderer.beginGL();

  // Once inside beginGL()/endGL(), lights can be used…
  lights();
  ambient(250, 250, 250);

  // …as well as camera transformations.
  camera(400 * sin(mx), 0, -400 * cos(mx), 0, 0, 0, 0, 1, 0);
  perspective(PI / 3.0, 4.0 / 3.0, 1, 1000);

  pointLight(255, 255, 255, 0, 0, -100);

  pushMatrix();
  rotateY(angle);

  // A model effect can be applied in different ways:
  // effect.apply(), model.render(effect), GLGraphics.model(model, effect):
  //bump.apply(cube);
  //cube.render(bump);
  renderer.model(cube, bump);
  popMatrix();

  renderer.endGL();
}

Of course, a lot of nasty details are hidden inside the GLModelEffect class, and creating the XML file for a model effect requires a good understanding of GLSL, vertex attributes and so on. I don’t have enough time right now to write a tutorial on how to use the GLModelEffect class, but hopefully the BumpMap example included with the library will provide some more information to those interested.
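
Although the actual shader lives in the effect’s XML file, the core computation behind bump mapping is easy to state: sample a tangent-space normal from a normal map, rotate it into the surface’s tangent basis, and light the surface with the perturbed normal. Here is a minimal CPU-side sketch of that math in plain Java; the class and method names are mine, purely illustrative, and not part of GLGraphics:

```java
// Illustrative sketch of what a bump-mapping shader computes per fragment.
// Vectors are length-3 arrays and assumed normalized.
public class BumpSketch {
    // Lambertian diffuse term using a normal perturbed by a tangent-space
    // normal-map sample. tbn holds the tangent, bitangent and normal rows.
    static double diffuse(double[] mapSample, double[][] tbn, double[] lightDir) {
        // Rotate the tangent-space normal into world space via the TBN basis:
        // n = s.x * T + s.y * B + s.z * N
        double[] n = new double[3];
        for (int i = 0; i < 3; i++)
            n[i] = tbn[0][i] * mapSample[0]
                 + tbn[1][i] * mapSample[1]
                 + tbn[2][i] * mapSample[2];
        // N . L, clamped to zero for surfaces facing away from the light.
        double d = n[0] * lightDir[0] + n[1] * lightDir[1] + n[2] * lightDir[2];
        return Math.max(0.0, d);
    }

    public static void main(String[] args) {
        // Flat tangent basis: tangent = +X, bitangent = +Y, normal = +Z.
        double[][] tbn = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
        double[] light = {0, 0, 1}; // light shining straight along the normal
        double flat = diffuse(new double[]{0, 0, 1}, tbn, light);      // unperturbed
        double bumped = diffuse(new double[]{0.6, 0, 0.8}, tbn, light); // perturbed
        System.out.println(flat + " " + bumped); // 1.0 0.8
    }
}
```

The shader version does the same thing per pixel on the GPU, with the normal map bound as a texture and the TBN basis passed in as vertex attributes.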

The GLModel class has new methods for loading data into a model; for instance, ArrayLists can be used to load vertices, normals, colors and texture coordinates:

ArrayList<PVector> vertices;
ArrayList<PVector> texCoords;
ArrayList<PVector> normals;
GLModel earth;
GLTexture tex;

earth = new GLModel(this, vertices.size(), TRIANGLE_STRIP, GLModel.STATIC);

// Sets the vertex coordinates.
earth.updateVertices(vertices);

// Sets the texture map.
tex = new GLTexture(this, globeMapName);
earth.initTextures(1);
earth.setTexture(0, tex);
earth.updateTexCoords(0, texCoords);

// Sets the normals.
earth.initNormals();
earth.updateNormals(normals);

An XML file holding the entire geometry can also be passed to the constructor of a GLModel object. This is exemplified in the BumpMap example, where the vertices, textures, texture coordinates, normals, colors and vertex attributes are all set inside the XML file.

Posted October 26, 2009 by ac in Programming

13 responses to “GLGraphics 0.9.2: better integration with Processing camera (and more)”

  1. Hi Andres,

    is there a way to render the “Render to model” example as a displaced surface of quads or patches? Or would it be easier to create a texture grid and displace that along the z axis?

    Any help would be greatly appreciated.

    Miha

  2. You can render the vertices of the model as quads if you do:

    destModel = new GLModel(this, numPoints, QUADS, GLModel.STREAM);

    Other rendering modes you can use, besides QUADS and POINTS, are the following:
    LINE_STRIP
    LINE_LOOP
    LINES
    TRIANGLE_STRIP
    TRIANGLE_FAN
    TRIANGLES
    QUAD_STRIP
    POLYGON

  3. I tried all of them, but they don’t produce a continuous surface. I’m looking for something like:
    http://www.ozone3d.net/tutorials/vertex_displacement_mapping_p03.php

  4. Yes, I see the problem: the vertices of the model are rendered in the same order as the pixels in the texture. This ordering is not the same as the one required to draw a continuous surface of quads… The solution to this is not so immediate.
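
[The reordering described in the comment above can be sketched in plain Java: given a row-major W x H grid of points (the pixel/texture order), a continuous surface of QUADS needs four corner indices per grid cell, so each interior point ends up referenced by up to four quads. The helper below is hypothetical, not part of the GLGraphics API:]

```java
// Turning a row-major w x h grid of points (the order in which the texture
// pixels are read) into the vertex order needed for a continuous QUADS surface.
public class GridQuads {
    // Returns four corner indices per grid cell, 4 * (w-1) * (h-1) entries,
    // referencing points stored row by row (index = y * w + x).
    static int[] quadIndices(int w, int h) {
        int[] idx = new int[4 * (w - 1) * (h - 1)];
        int k = 0;
        for (int y = 0; y < h - 1; y++) {
            for (int x = 0; x < w - 1; x++) {
                idx[k++] = y * w + x;           // top-left
                idx[k++] = y * w + x + 1;       // top-right
                idx[k++] = (y + 1) * w + x + 1; // bottom-right
                idx[k++] = (y + 1) * w + x;     // bottom-left
            }
        }
        return idx;
    }

    public static void main(String[] args) {
        // A 3 x 2 grid has 2 quads, hence 8 corner indices.
        int[] idx = quadIndices(3, 2);
        System.out.println(java.util.Arrays.toString(idx));
        // [0, 1, 4, 3, 1, 2, 5, 4]
    }
}
```

[When filling a QUADS GLModel from a grid, each point would then be written once per index in this list, rather than once per pixel.]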

  5. Yup. I noticed :D

  6. Very nice library.

    It seems, though, that screenX() etc. do not return correct values in offscreen mode.
    GLGraphicsOffScreen applies its transformations directly through OpenGL, but it does not override screenX(), so the implementation inherited from PGraphics3D is used instead. That implementation relies on an internal PMatrix3D that the offscreen renderer never updates, which means screenX() cannot compute the transformed x position of a given point.

    Any idea how to solve this?

    • I see… However, I think this problem happens only if you do something like this:

      offscreen.beginDraw();
      float sx = screenX(200, 100, 300);
      offscreen.endDraw();

      because when you call screenX() directly, you are ultimately invoking screenX() from the main renderer. But if you call screenX() from the offscreen renderer, then the result should be different:

      offscreen.beginDraw();
      float sx = offscreen.screenX(200, 100, 300);
      offscreen.endDraw();

      Does this make sense?

  7. Mmh, maybe I misunderstood, but I just rechecked: I did use the GLGraphicsOffScreen methods (and not the ones from the main renderer).

    Also, as I tried to explain above, the transformations seem to be done directly in OpenGL / on the graphics processor, while Processing’s screenX() implementation relies on the internal transformation matrices.

    • A tiny Processing example to show what I am talking about: using GLGraphicsOffScreen it results in weird numbers, while using PGraphics it is correct.

      //PGraphics pg = this.g;
      GLGraphicsOffScreen pg = new GLGraphicsOffScreen(this, 200, 200);

      println("pre=" + pg.screenX(100, 100));
      pg.pushMatrix();
      pg.scale(2);
      println("screenX=" + pg.screenX(100, 100));
      pg.popMatrix();
      println("post=" + pg.screenX(100, 100));

      • Bracket the pg calls between beginDraw()/endDraw():

        pg.beginDraw();
        println("pre=" + pg.screenX(100, 100));
        pg.pushMatrix();
        pg.scale(2);
        println("screenX=" + pg.screenX(100, 100));
        pg.popMatrix();
        println("post=" + pg.screenX(100, 100));
        pg.endDraw();

        (I know you are not doing any actual drawing calls here, but beginDraw() is still needed because it sets up a number of things required for the offscreen renderer to work correctly.)

  8. Appreciate your support! Hope you stay with me…

    And yep, beginDraw() and endDraw() are needed; sorry I missed that in my test app. (Though in my real program I was using them.) Now the numbers look better, but the inner “screenX=” still returns a wrong, i.e. non-scaled, value.

    Result should be 100, 200, 100. But with the current example it’s 100, 100, 100.
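
[For reference, the values the commenter expects follow from plain matrix math, independent of any renderer: screenX() applies the current model transform to the point before projecting, so a uniform scale(2) should double the coordinate. A minimal standalone check, using java.awt.geom purely for illustration:]

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.Point2D;

// Mimics what screenX() should report for a 2D point under the current
// model transform (projection omitted, since it is the same in all three calls).
public class ScreenXCheck {
    static double screenX(AffineTransform m, double x, double y) {
        Point2D p = m.transform(new Point2D.Double(x, y), null);
        return p.getX();
    }

    public static void main(String[] args) {
        AffineTransform m = new AffineTransform(); // identity: the "pre" call
        System.out.println(screenX(m, 100, 100));  // 100.0
        m.scale(2, 2);                             // inside pushMatrix()/scale(2)
        System.out.println(screenX(m, 100, 100));  // 200.0
        m.setToIdentity();                         // after popMatrix(): "post"
        System.out.println(screenX(m, 100, 100));  // 100.0
    }
}
```

[This reproduces the expected 100, 200, 100 sequence; the offscreen renderer returning 100, 100, 100 indicates the scale is never reaching the matrix that its screenX() reads.]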
