Shaders in Processing 2.0 – Part 2

The new capability of loading user-provided GLSL shaders into Processing's P2D and P3D renderers opens up the possibility of customizing all the rendering operations in Processing, as well as of creating interactive graphics that would be very hard or impossible to generate otherwise. On the web, WebGL supports only the programmable pipeline, through GLSL shaders, and this has motivated the creation of online repositories of shader effects that can be run directly inside the browser, as long as it supports WebGL. Sites like the GLSL sandbox or Shader Toy hold large collections of shader effects that can be edited and controlled interactively through the browser. This post explains how to integrate GLSL shaders from the GLSL sandbox and Shader Toy websites into a Processing sketch.
Update: With the release of Processing 2.0 final, some of the content in this post is outdated; please check this tutorial for a detailed description of the finalized shader API.

Post-processing filters

The 2.0a7 release includes several shader examples under the OpenGL/Shaders category. A typical application of shaders is image post-processing filters, such as blur, edge detection, emboss, etc. The example OpenGL/Shaders/EdgeDetect shows how to apply a simple edge detection filter to an image. The sketch code is fairly simple: a PShader object is created in setup() by loading the fragment shader file containing the filter implementation. The shader type is TEXTURED, since this shader will be applied to the rendering of textured geometry (see the previous post about the shader types). Also note that no vertex shader is specified, since we don't need to modify the default vertex stage. The new shader is set with the shader() function, and can be disabled at any time by calling the resetShader(type) function. Since all the draw() function contains is an image() call (which is just a wrapper for beginShape(QUADS)/endShape()), the only geometry this sketch sends down to the GPU is a simple textured rectangle, which will then be handled by the TEXTURED shader we loaded in setup():

PImage img;
PShader edges;  
boolean customShader;
  
void setup() {
  size(400, 400, P2D);
  img = loadImage("berlin-1.jpg");
    
  edges = loadShader(PShader.TEXTURED, "edges.glsl");
  shader(edges);
  customShader = true;
}

void draw() {
  image(img, 0, 0, width, height);
}
  
void mousePressed() {
  if (customShader) {
    resetShader(PShader.TEXTURED);
    customShader = false;
  } else {
    shader(edges);
    customShader = true;
  }
}
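
The example's edges.glsl file contains the actual filter implementation. As a rough sketch of what such a filter looks like, here is a minimal edge-detection fragment shader written against the same conventions (the textureSampler uniform and vertTexcoord varying expected by TEXTURED shaders, plus the texcoordOffset uniform holding the size of one texel, which Processing fills in automatically); the file shipped with the release may differ in its details:

uniform sampler2D textureSampler;
// Size of one texel in uv units, set automatically by the renderer
uniform vec2 texcoordOffset;

varying vec4 vertColor;
varying vec4 vertTexcoord;

void main(void) {
  vec2 tc = vertTexcoord.st;
  float dx = texcoordOffset.x;
  float dy = texcoordOffset.y;

  // 3x3 Laplacian kernel: 8 at the center, -1 everywhere else.
  // Flat regions cancel out to black, edges remain bright.
  vec4 sum = 8.0 * texture2D(textureSampler, tc);
  sum -= texture2D(textureSampler, tc + vec2(-dx, -dy));
  sum -= texture2D(textureSampler, tc + vec2(0.0, -dy));
  sum -= texture2D(textureSampler, tc + vec2( dx, -dy));
  sum -= texture2D(textureSampler, tc + vec2(-dx, 0.0));
  sum -= texture2D(textureSampler, tc + vec2( dx, 0.0));
  sum -= texture2D(textureSampler, tc + vec2(-dx,  dy));
  sum -= texture2D(textureSampler, tc + vec2(0.0,  dy));
  sum -= texture2D(textureSampler, tc + vec2( dx,  dy));

  gl_FragColor = vec4(sum.rgb, 1.0);
}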

In some other situations, the entire rendering output of the sketch needs to be post-processed by a shader. Consider the OpenGL/Shaders/FishEye example. The shader in this sketch applies the inverse of the angular fish-eye transformation, so that when its output is projected onto a spherical surface, such as a planetarium dome, it looks undistorted. In this case, the fish-eye shader is also a post-processing filter that operates on textures. So one way to get a TEXTURED shader like this one applied to the sketch output is to render into an offscreen PGraphics surface, which stores the output as a texture that we can then pass through the shader, as we did with the PImage in the edge detection filter before:

PGraphics canvas;
PShader fisheye;
PImage img;

void setup() {
  size(400, 400, P3D);  
  canvas = createGraphics(400, 400, P3D);

  fisheye = loadShader(PShader.TEXTURED, "FishEye.glsl");
  fisheye.set("aperture", 180.0);
  shader(fisheye);
}

void draw() {
  canvas.beginDraw();
  canvas.background(0);
  canvas.stroke(255, 0, 0);
  for (int i = 0; i < width; i += 10) {
    canvas.line(i, 0, i, height);
  }
  for (int i = 0; i < height; i += 10) {
    canvas.line(0, i, width, i);
  }
  canvas.lights();
  canvas.noStroke();
  canvas.translate(mouseX, mouseY, 100);
  canvas.rotateX(frameCount * 0.01f);
  canvas.rotateY(frameCount * 0.01f);  
  canvas.box(50);  
  canvas.endDraw(); 
  
  image(canvas, 0, 0, width, height);
}
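
For reference, the angular fish-eye mapping can be implemented entirely in the fragment stage. The following is a minimal sketch of such a shader, adapted from a commonly circulated GLSL fish-eye snippet (the FishEye.glsl file included with the example may differ in its details); the aperture uniform, in degrees, is the one set from the sketch above:

uniform sampler2D textureSampler;
uniform float aperture;

varying vec4 vertColor;
varying vec4 vertTexcoord;

const float PI = 3.14159265;

void main(void) {
  float apertureHalf = 0.5 * aperture * (PI / 180.0);
  float maxFactor = sin(apertureHalf);

  // Fragment position in the [-1, 1] range
  vec2 xy = 2.0 * vertTexcoord.st - 1.0;
  float d = length(xy);

  vec2 uv;
  if (d < (2.0 - maxFactor)) {
    // Inverse angular fish-eye: bend the sampling radius so the
    // image looks undistorted after projection onto a dome
    d = length(xy * maxFactor);
    float z = sqrt(1.0 - d * d);
    float r = atan(d, z) / PI;
    float phi = atan(xy.y, xy.x);
    uv = vec2(0.5 + r * cos(phi), 0.5 + r * sin(phi));
  } else {
    uv = vertTexcoord.st;
  }

  gl_FragColor = texture2D(textureSampler, uv);
}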

This method involves doing the rendering into an offscreen PGraphics object. It is in fact the most efficient way to apply post-processing filters to the sketch output, but Processing also includes a filter(PShader flt) function that accepts a PShader of type TEXTURED and automatically applies it to everything that has been drawn up to that point, without the need to create an extra PGraphics object or to change the default TEXTURED shader. This method might be a bit slower than the first approach since, under the hood, it grabs the contents of the screen, copies them into a temporary PGraphics object, and so on, but its advantage is the simplicity of the resulting code:

PShader blur;

void setup() {
  size(400, 400, P2D);
  blur = loadShader("blur.glsl"); 
}

void draw() {
  rect(mouseX, mouseY, 50, 50);    
  filter(blur);    
}
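
A blur.glsl file is included with the examples as well; as a stand-in, a minimal 3x3 box blur written with the same TEXTURED conventions could look like this (the actual file in the release may differ):

uniform sampler2D textureSampler;
// Size of one texel in uv units, set automatically by the renderer
uniform vec2 texcoordOffset;

varying vec4 vertColor;
varying vec4 vertTexcoord;

void main(void) {
  // Average the 3x3 neighborhood around the current fragment
  vec4 sum = vec4(0.0);
  for (int i = -1; i <= 1; i++) {
    for (int j = -1; j <= 1; j++) {
      vec2 offset = vec2(float(i), float(j)) * texcoordOffset;
      sum += texture2D(textureSampler, vertTexcoord.st + offset);
    }
  }
  gl_FragColor = vec4(sum.rgb / 9.0, 1.0);
}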

Note that loadShader(filename) is equivalent to loadShader(PShader.TEXTURED, filename).

Using shaders from GLSL sandbox and Shader Toy

GLSL sandbox from Mr.doob and others, and Inigo Quilez's Shader Toy are both web-based interfaces to edit and run GLSL shaders directly in browsers that support WebGL. Most of these effects can be run in a Processing sketch with minor modifications to the GLSL shader code.

Let's start with the Deform effect from Shader Toy. It applies a dynamic transformation to the uv coordinates of the input texture in order to achieve a classic "demoscene" look. The sketch code to run this effect in Processing is very similar to the one from the edge detection example discussed before, but with a couple of additions. First, the input texture needs to repeat itself when the uv coordinates run outside the [0, 1] range, which is achieved, for the moment, by calling textureWrap(Texture.REPEAT) on the main renderer object before loading the texture. Since this function is not part of the PApplet API, it requires the additional cast of the g object. Secondly, the shader uses a few uniform variables to control the parameters of the visualization: the resolution of the output screen, the mouse position (which makes the effect interactive), and the time expressed in seconds.

PImage tex;
PShader deform;

void setup() {
  size(512, 384, P2D);
  
  ((PGraphics2D)g).textureWrap(Texture.REPEAT);   
  tex = loadImage("tex1.jpg");
 
  deform = loadShader("deform.glsl");
  deform.set("resolution", float(width), float(height));
  shader(deform);
}

void draw() {
  deform.set("time", millis() / 1000.0);
  deform.set("mouse", float(mouseX), float(mouseY));
  
  image(tex, 0, 0, width, height);
}

The fragment shader needs some minor modifications, though. The texture sampler is called tex0 in the original GLSL code from Shader Toy but, as mentioned in Part 1 of this series, Processing expects shaders used as the TEXTURED type to have a sampler called textureSampler. So the modified fragment shader should look as follows:

uniform sampler2D textureSampler;

uniform float time;
uniform vec2 resolution;
uniform vec2 mouse;

void main(void) {
  vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
  vec2 m = -1.0 + 2.0 * mouse.xy / resolution.xy;

  float a1 = atan(p.y - m.y, p.x - m.x);
  float r1 = sqrt(dot(p - m, p - m));
  float a2 = atan(p.y + m.y, p.x + m.x);
  float r2 = sqrt(dot(p + m, p + m));

  vec2 uv;
  uv.x = 0.2 * time + (r1 - r2) * 0.25;
  uv.y = sin(2.0 * (a1 - a2));

  float w = r1 * r2 * 0.8;
  vec3 col = texture2D(textureSampler, 0.5 - 0.495 * uv).xyz;

  gl_FragColor = vec4(col / (0.1 + w), 1.0);
}

Another effect from Shader Toy is Monjori. This one is entirely procedural and doesn't use any texture as input. It basically goes through each pixel on the screen and does some clever math to create a psychedelic tunnel-like structure. Even though it doesn't require any input vertex data to operate on, it still needs a stream of pixels covering the screen so that the fragment shader is executed once for each pixel. This stream can be triggered just by drawing a rectangle from (0, 0) to (width, height):

PShader monjori;

void setup() {
  size(512, 384, P2D);
 
  monjori = loadShader(PShader.FLAT, "monjori.glsl");
  monjori.set("resolution", float(width), float(height));
  shader(monjori); 
}

void draw() {
  monjori.set("time", millis() / 1000.0);
  
  noStroke();
  fill(0);
  rect(0, 0, width, height);  
}

Since the rectangle is an unlit, non-textured piece of geometry, the shader type to specify in this case is FLAT. The GLSL shader code can be used without any modifications from the original in Shader Toy.
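
The Monjori code itself is best taken directly from Shader Toy, but the structure of a FLAT shader like this one is very simple: it reads only gl_FragCoord and the uniforms set from the sketch. A trivial stand-in with the same interface (an animated procedural color pattern, not the actual Monjori effect) would be:

uniform float time;
uniform vec2 resolution;

void main(void) {
  // Normalized pixel coordinates in the [0, 1] range
  vec2 p = gl_FragCoord.xy / resolution.xy;
  // Procedural color computed from the pixel position and the time
  vec3 col = 0.5 + 0.5 * cos(time + p.xyx * 10.0 + vec3(0.0, 2.0, 4.0));
  gl_FragColor = vec4(col, 1.0);
}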

Many effects from the GLSL sandbox also work in this way, using a well-known demoscene technique called ray marching distance fields (presentation, tutorial, list of distance functions), for example this one. The fragment shader code can be used unmodified in the Processing sketch since it doesn't define any uniform that is part of the interface with the renderer. The shader expects a few additional uniform variables to control the animation and interaction:

PShader landscape;

void setup() {
  size(320, 240, P2D);
 
  landscape = loadShader(PShader.FLAT, "landscape.glsl");
  landscape.set("resolution", float(width), float(height));
  shader(landscape); 
}

void draw() {
  landscape.set("time", millis() / 1000.0);
  landscape.set("mouse", float(mouseX), float(mouseY));
  
  noStroke();
  fill(0);
  rect(0, 0, width, height);  
}

The renderer in this case can be either P2D or P3D. Since the entire scene is generated by the fragment shader, including camera movements and lights, the camera and any other transformations handled by the Processing renderer are ignored. The only requirement for this kind of effect to work is to draw a quad covering the entire output area.
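
To make the ray-marching idea concrete, here is a minimal, self-contained fragment shader (not the landscape shader linked above) that marches a single sphere distance field, using the same time, mouse and resolution uniforms set by the sketch. It would be loaded as a FLAT shader exactly like landscape.glsl:

uniform vec2 resolution;
uniform float time;
uniform vec2 mouse;

// Signed distance from point p to a unit sphere at the origin
float sphereDist(vec3 p) {
  return length(p) - 1.0;
}

void main(void) {
  // Screen coordinates in [-1, 1], corrected for aspect ratio
  vec2 uv = (2.0 * gl_FragCoord.xy - resolution) / resolution.y;

  // Camera orbiting the sphere, driven by the time and the mouse
  float ang = 0.5 * time + 3.0 * mouse.x / resolution.x;
  vec3 ro = vec3(3.0 * sin(ang), 0.0, 3.0 * cos(ang)); // ray origin
  vec3 ww = normalize(-ro);                            // forward, looking at origin
  vec3 uu = normalize(cross(vec3(0.0, 1.0, 0.0), ww)); // right
  vec3 vv = cross(ww, uu);                             // up
  vec3 rd = normalize(uv.x * uu + uv.y * vv + 1.5 * ww);

  // March along the ray, stepping by the distance to the nearest surface
  float t = 0.0;
  vec3 col = vec3(0.0);
  for (int i = 0; i < 64; i++) {
    vec3 p = ro + t * rd;
    float d = sphereDist(p);
    if (d < 0.001) {
      // Normal from the gradient of the distance field, simple diffuse shading
      vec2 e = vec2(0.001, 0.0);
      vec3 n = normalize(vec3(sphereDist(p + e.xyy) - sphereDist(p - e.xyy),
                              sphereDist(p + e.yxy) - sphereDist(p - e.yxy),
                              sphereDist(p + e.yyx) - sphereDist(p - e.yyx)));
      col = vec3(max(dot(n, normalize(vec3(1.0, 1.0, 1.0))), 0.0));
      break;
    }
    t += d;
    if (t > 10.0) break;
  }

  gl_FragColor = vec4(col, 1.0);
}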

Posted August 3, 2012 by ac in Programming


9 responses to “Shaders in Processing 2.0 – Part 2”


  1. Is there a suggested shader editor for OSX?

    Thanks for all this!

    • Not really; there have been several shader editor projects over the past few years, but all the ones I know of have gone unsupported or abandoned… FX Composer from NVIDIA is still around, but it is Windows-only. One option is just to use Xcode, which has GLSL syntax highlighting. I’ve also seen this Shader Studio app, but apparently it only runs on iOS (I haven’t tried it myself). After doing some searching, I found a couple of references to this Kick.JS shader editor.

  2. In the latest version of Processing (v 2.0b3), I continuously get errors when using the PShader.FLAT constant, as in the line:

    monjori = loadShader(PShader.FLAT, "monjori.glsl");

    Also, when I remove it and run the program, I get the following error:

    Your shader cannot be used to render colored geometry, using default shader instead.

    My shaders that worked great in 1.5.1 seem to run terribly in 2.0b3. I have a feeling that it is trying to run the ‘default shader’ on top of the loadShader…

    • Hello, there were a few changes in the shader API between the alphas on which these posts were based and the beta. I will write a brief post describing the differences.

      Most importantly, the PShader.FLAT and PShader.TEXTURED constants are no longer needed; Processing will try to determine the type of shader by analyzing the code. Take a look at the shader examples included in 2.0b3 to see how PShader objects work in the beta.

  3. Hi Andres, very nice work on the GLSL shader integration.
    I’m trying to play around with it now on Processing 2.0b7, but I have a few questions: when I try to set variables of the shaders within Processing, it does not seem to change anything. To be more precise, in the landscape example (where you set the "time", "resolution" and "mouse" variables in Processing), it does not seem to react to the mouse at all, and if I comment out those lines the shader works exactly the same. Any idea why?

    • Hello, what happens is that in 2.0b7 Processing automatically sets the values of the uniforms resolution, mouse and time, so you don’t need to do it explicitly from the sketch. If you do, your values will be overwritten by Processing. You can always add your own mouse, resolution and time uniforms; just give them different names from "mouse", "resolution", etc.
      However, don’t take this as set in stone. We are still discussing the details of the shader API with the rest of the Processing team, so this behavior and these naming conventions might change in upcoming releases.

  4. Hello, your tutorials have helped me a lot until now, but I have no idea how to solve the following problem:
    The goal is to create a fragment shader which fades out the whole window to a specific color, with a specific speed for each color channel.
    (see: http://wiki.processing.org/w/Fading_the_screen_to_black/any_color)
    The problem with the above source is that the time to calculate the values is way too high, growing quickly with the window size.
    Now I have used the following simple Processing code (2.0b6):

    PShader testShader;

    void setup() {
      size(640, 360, OPENGL);
      colorMode(HSB);
      testShader = loadShader("blur.glsl");
      testShader.set("sp", 0.005, 0.005, 0.005); // the speed for each color channel, excluding the alpha channel
      testShader.set("goal", 0, 0, 0); // the color which we want to reach with the fade process
      shader(testShader);
    }

    void draw() {
      image(get(), 0, 0); // redraw the current image to push everything through the shader
      shader(testShader);
    }

    void mousePressed() {
      shader(testShader); // don't know if I have to call it here…
      background(0); // draw something
      noStroke();
      fill(random(255), 255, 255);
      rect(10, 10, width - 20, height - 20);
    }

    And now the shader code:

    #ifdef GL_ES
    precision highp float;
    precision highp int;
    #endif

    // using highp in front of sampler2D results in an error
    uniform sampler2D textureSampler;
    uniform highp vec2 texcoordOffset;

    uniform highp vec3 sp;
    uniform highp vec3 goal;

    varying highp vec4 vertColor;
    varying highp vec4 vertTexcoord;

    void main(void) {
      vec3 col = texture2D(textureSampler, vertTexcoord.st).rgb;
      gl_FragColor = vec4(col + ((goal - col) * sp), 1.0);
    }

    At the beginning it fades out perfectly, but it gets stuck somewhere near the specified goal for low values of sp.
    The problem is also described in the snippet mentioned above.
    To me it seems to be a problem with the precision of the color values stored in the sampler2D, but I have no idea how to increase its precision.

    Do you have any idea how to get it working properly?

    • I think the problem is due to the fact that even though color values are handled as floats inside the shader, they are then clamped to the RGBA type where each component is in fact an unsigned byte (with 256 possible values). So, if the change in color is smaller than the 1/255 step, it won’t register as a visible difference in color. One way to get around this is to use an auxiliary float variable on the CPU side to accumulate the exponential decrease, and pass it as a uniform to the shader. Something like this:


      PShader fadeout;
      float f;

      void setup() {
        size(400, 400, P2D);
        fadeout = loadShader("fadeout.glsl");
        f = 1;
      }

      void draw() {
        background(0);
        ellipse(width/2, height/2, 200, 200);

        f *= 0.995;
        fadeout.set("factor", f);
        filter(fadeout);
      }


      uniform sampler2D textureSampler;
      uniform float factor;

      varying vec4 vertColor;
      varying vec4 vertTexcoord;

      void main(void) {
        vec3 col = texture2D(textureSampler, vertTexcoord.st).rgb * factor;
        gl_FragColor = vec4(col, 1.0);
      }

      Notice that I’m using filter(fadeout) instead of image(get(), 0, 0) followed by shader(fadeout). Both approaches are basically identical, but using filter(PShader) is more efficient. This example fades to black, but you can easily adapt it to fade to a chosen color. I hope this helps.
