Dome projection

I have been interested in projection on spherical domes for a while, but never had the chance to experiment on an actual dome. This changed after I met Dave Pentecost a couple of years ago. Dave is an advocate and practitioner of the use of digital domes in education and art, and has been documenting his progress on a low-cost dome authoring and projection system on this website. A dome system following those specifications is being installed at an amazing place in New York, the Lower Eastside Girls Club, a Center for Community for girls and young women on the Lower East Side. The Girls Club has been running since 1996, but recently moved to a brand-new building that includes a 30-foot hemispherical planetarium, among many other facilities. A recent visit to the Girls Club’s planetarium allowed me to test the code I had written earlier for dome projection (and to realize that it was wrong), and to discuss with Dave how we could use Processing and other software tools to let people easily create visual content for domes and carry out artistic projects specifically tailored to the context of the planetarium. These tests and discussions led to some recent technical developments that I describe in more detail below.

First of all, I should note that the dome is now almost finished, and in fact the Girls Club will officially inaugurate the new building in the fall of 2013. From a post by Dave, this is how the planetarium looks right now:

dome at the Girls Club

There are several tools to project 3D scenes, images and movies on domes (some lists here and here), and even full-blown VJ applications for real-time audiovisual performance. What seems to be less common are simple approaches that let artists/coders try out ideas and concepts for hemispherical dome projection without having to worry about the technical details of the projection. Fortunately, here we were lucky for two reasons:

  • The projection setup chosen by Dave for the Club’s planetarium is very straightforward: a single projector located at the geometric center of the dome. This setup greatly simplifies the underlying math by removing the complications that arise in multi-projector configurations, such as the edge blending needed where the images from two projectors overlap.
  • As I mentioned in the introduction, my first attempts at projection (using a fish-eye shader effect adapted from Paul Bourke’s article) proved to be erroneous upon testing in the Girls Club’s planetarium. But then we discovered Christopher Warnow’s full-dome projection template project, which uses cube mapping in Processing to properly implement dome projection in single-projector setups.

Cube mapping is a well-known OpenGL technique that has been used to achieve effects like environment mapping, where the scene surrounding an object is mapped onto the object in order to simulate mirror-like surfaces and specular highlights. The key property of this technique that makes it useful in dome projection is that the cubemap texture of a 3D scene, mapped onto a sphere, exactly represents how the scene would look to a viewer located at the center of the sphere. This article about real-time dome projection summarizes this and other techniques. Paul Bourke’s materials on dome projection are also a very good reference, and describe more complex configurations (for example spherical mirrors).
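The correspondence can be made concrete with a few lines of code: any view direction from the center of the sphere hits exactly one face of the surrounding cube, and the standard OpenGL face-selection rule picks that face from the dominant component of the direction vector. The sketch below (a standalone Python illustration of the convention, not part of any of the Processing code discussed here) maps a direction to a face name and the (u, v) coordinates within that face:

```python
def cubemap_lookup(x, y, z):
    """Map a 3D direction from the cube's center to the face it hits
    and the (u, v) coordinates on that face, both in [0, 1]."""
    ax, ay, az = abs(x), abs(y), abs(z)
    # The dominant axis selects the face; the other two components,
    # divided by it, give the in-face coordinates in [-1, 1]
    # (signs follow the OpenGL cube map face layout).
    if ax >= ay and ax >= az:
        face, sc, tc, ma = ("+x" if x > 0 else "-x"), (-z if x > 0 else z), -y, ax
    elif ay >= az:
        face, sc, tc, ma = ("+y" if y > 0 else "-y"), x, (z if y > 0 else -z), ay
    else:
        face, sc, tc, ma = ("+z" if z > 0 else "-z"), (x if z > 0 else -x), -y, az
    return face, (sc / ma + 1) / 2, (tc / ma + 1) / 2

# Looking straight "up" along +y from the center samples the middle
# of the +y face:
print(cubemap_lookup(0.0, 1.0, 0.0))  # ('+y', 0.5, 0.5)
```

This is exactly the lookup that `textureCube` performs in the fragment shader further below; the cubemap renderer's job is simply to fill the six (here, five) faces with views of the scene so that the lookup returns the right pixels.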

One drawback of Christopher’s template project is that it uses Processing 1.5 (and GLGraphics!) and the fixed function pipeline in OpenGL. So the challenge was to adapt his code to Processing 2.0 and the new shader API that is now part of the P3D renderer.

Dome Projection example in Processing 2.0.3

A quick Internet search on “dome projection glsl” leads to several forum discussions, code snippets and articles. One of the best complete implementations of a cubemap sample program using the programmable pipeline is this one (full source posted here), from the author of the also excellent “Learning Modern 3D Graphics Programming” tutorial series. As is usual with code based on OpenGL 3.0+, the programmer is in charge of all the matrix math and the handling of vertex buffers, which can be quite tedious without a set of utilities to take care of this repetitive work. Fortunately, Processing 2.0 provides methods to load and configure shaders, in addition to handling matrix math and buffer operations transparently.

However, not all the GL operations needed to render cubemap textures are exposed by the PApplet API. In order to access the low-level GL functions required for cubemapping, one can use the PGL class. PGL contains all the functions from the OpenGL ES 2.0 specification, which supports cubemaps. The reason for sticking to GLES2 was to ensure compatibility between desktop and mobile platforms; however, if one needs access to more advanced GL profiles, the underlying JOGL objects are also available, as described in an earlier post.

The resulting sketch, which ports Christopher’s original full-dome projection code, was included as a Shader example in the recent 2.0.3 release of Processing. This sketch is interesting because it shows how low-level PGL function calls can be combined seamlessly with normal Processing calls for setting the camera and perspective, lights, geometric transformations, etc. For instance, the initialization of a cubemap requires loading a custom shader for cubemap rendering, building the geometry for the dome sphere, and setting the parameters of the cubemap texture. Of these three steps, only the last one requires low-level PGL calls, while the first two can be implemented with standard Processing functions:

void initCubeMap() {
  domeSphere = createShape(SPHERE, height/2.0f);

  PGL pgl = beginPGL();

  envMapTextureID = IntBuffer.allocate(1);
  pgl.genTextures(1, envMapTextureID);
  pgl.activeTexture(PGL.TEXTURE1);
  pgl.bindTexture(PGL.TEXTURE_CUBE_MAP, envMapTextureID.get(0));
  pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_S, PGL.CLAMP_TO_EDGE);
  pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_T, PGL.CLAMP_TO_EDGE);
  pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_WRAP_R, PGL.CLAMP_TO_EDGE);
  pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_MIN_FILTER, PGL.NEAREST);
  pgl.texParameteri(PGL.TEXTURE_CUBE_MAP, PGL.TEXTURE_MAG_FILTER, PGL.NEAREST);

  // Allocate storage for the six faces of the cubemap
  for (int i = PGL.TEXTURE_CUBE_MAP_POSITIVE_X; i < PGL.TEXTURE_CUBE_MAP_POSITIVE_X + 6; i++) {
    pgl.texImage2D(i, 0, PGL.RGBA8, envMapSize, envMapSize, 0, PGL.RGBA, PGL.UNSIGNED_BYTE, null);
  }

  // Init fbo, rbo
  fbo = IntBuffer.allocate(1);
  rbo = IntBuffer.allocate(1);
  pgl.genFramebuffers(1, fbo);
  pgl.bindFramebuffer(PGL.FRAMEBUFFER, fbo.get(0));

  pgl.genRenderbuffers(1, rbo);
  pgl.bindRenderbuffer(PGL.RENDERBUFFER, rbo.get(0));
  pgl.renderbufferStorage(PGL.RENDERBUFFER, PGL.DEPTH_COMPONENT24, envMapSize, envMapSize);

  // Attach depth buffer to FBO
  pgl.framebufferRenderbuffer(PGL.FRAMEBUFFER, PGL.DEPTH_ATTACHMENT, PGL.RENDERBUFFER, rbo.get(0));

  // Keep the cubemap bound to texture unit 1, matching the sampler below
  pgl.bindTexture(PGL.TEXTURE_CUBE_MAP, envMapTextureID.get(0));

  endPGL();

  // Load cubemap shader.
  cubemapShader = loadShader("cubemapfrag.glsl", "cubemapvert.glsl");
  cubemapShader.set("cubemap", 1);
}

Note that PGL calls should always be enclosed by beginPGL()/endPGL(). An even more interesting situation arises when rendering the faces of the cubemap. The method requires rendering the scene 5 times, once for each direction: right, left, top, bottom, and up (down is not needed because it is not visible on the dome), into the corresponding faces of the cubemap. Each time, the perspective and camera need to be properly set in order to capture the scene from the right direction and with the right viewport and field of view. The cubemap is bound as the color buffer of a Framebuffer Object (FBO), which again requires calling low-level PGL functions. The result is a mixture of PGL and standard Processing functions inside a beginPGL()/endPGL() block:

void regenerateEnvMap() {
  PGL pgl = beginPGL();

  // bind fbo
  pgl.bindFramebuffer(PGL.FRAMEBUFFER, fbo.get(0));

  // Generate 5 views from the origin (0, 0, 0); the sixth face
  // (negative Z) points down and is not visible on the dome.
  pgl.viewport(0, 0, envMapSize, envMapSize);
  perspective(90.0f * DEG_TO_RAD, 1.0f, 1.0f, 1025.0f);
  for (int face = PGL.TEXTURE_CUBE_MAP_POSITIVE_X; face <
                  PGL.TEXTURE_CUBE_MAP_NEGATIVE_Z; face++) {
    if (face == PGL.TEXTURE_CUBE_MAP_POSITIVE_X) {
      camera(0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f);
    } else if (face == PGL.TEXTURE_CUBE_MAP_NEGATIVE_X) {
      camera(0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f);
    } else if (face == PGL.TEXTURE_CUBE_MAP_POSITIVE_Y) {
      camera(0.0f, 0.0f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, -1.0f);
    } else if (face == PGL.TEXTURE_CUBE_MAP_NEGATIVE_Y) {
      camera(0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f);
    } else if (face == PGL.TEXTURE_CUBE_MAP_POSITIVE_Z) {
      camera(0.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, -1.0f, 0.0f);
    }

    scale(-1, 1, -1);
    translate(-width * 0.5f, -height * 0.5f, -500);

    // Attach the current cubemap face as the color buffer of the FBO
    pgl.framebufferTexture2D(PGL.FRAMEBUFFER, PGL.COLOR_ATTACHMENT0, face, envMapTextureID.get(0), 0);

    drawScene(); // Draw objects in the scene
    flush();     // Make sure that the geometry in the scene is pushed to the GPU
    noLights();  // Disable lights to avoid adding them many times
    pgl.framebufferTexture2D(PGL.FRAMEBUFFER, PGL.COLOR_ATTACHMENT0, face, 0, 0);
  }

  endPGL();
}


This approach seems to work well, and it’s a nice example of the recent improvements in the P3D renderer geared towards simplifying GL coding inside Processing (as I was writing this post, I found that the DomeProjection sketch has already been ported to ruby-processing :-) ).
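As a side note, each of the camera() calls in regenerateEnvMap() is just a look-at specification: an eye at the origin, a center point one unit along the face’s axis, and an up vector. A quick standalone check (plain Python, independent of the sketch) confirms that each (forward, up) pair is orthonormal, which is what the 90-degree perspective needs to cover exactly one cube face:

```python
# The five (center, up) pairs from the camera() calls, in face order:
# +x, -x, +y, -y, +z (the eye is at the origin in every case).
faces = [((1, 0, 0), (0, -1, 0)), ((-1, 0, 0), (0, -1, 0)),
         ((0, -1, 0), (0, 0, -1)), ((0, 1, 0), (0, 0, 1)),
         ((0, 0, 1), (0, -1, 0))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

for forward, up in faces:
    # Forward and up must be unit length and mutually perpendicular,
    # so the square 90-degree frustum captures exactly one cube face.
    assert dot(forward, forward) == 1 and dot(up, up) == 1
    assert dot(forward, up) == 0
```

(The up vectors point along -y for the side faces because Processing’s y axis grows downward on screen.)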

Another thing that should be noted about this example is that the cubemap vertex:

uniform mat4 transform;
uniform mat4 modelview;
uniform mat3 normalMatrix;

attribute vec4 vertex;
attribute vec3 normal;

varying vec3 reflectDir;

void main() {
  gl_Position = transform * vertex;
  vec3 ecNormal = normalize(normalMatrix * normal);
  vec3 ecVertex = vec3(modelview * vertex);
  // The eye sits at the origin in eye space, so the incident
  // direction is the normalized eye-space vertex position.
  vec3 eyeDir = normalize(ecVertex);
  reflectDir = reflect(eyeDir, ecNormal);
}

and fragment shaders:

uniform samplerCube cubemap;

varying vec3 reflectDir;

void main() {
  vec3 color = vec3(textureCube(cubemap, reflectDir));
  gl_FragColor = vec4(color, 1.0);
}

don’t contain any preprocessor constant indicating the shader type (COLOR, TEXTURE, etc). This is due to an internal change in the OpenGL renderer in 2.0.3: when the shader type is not specified, the renderer assumes that the shader is of COLOR type. The shader does sample the cubemap texture; however, since this is not a sampler2D, the shader doesn’t need to be considered of TEXTURE type, which is the type required to render regular texture images. It is also worth noting that the vertex shader accesses the uniform normalMatrix and the attribute normal. These variables weren’t available in COLOR shaders prior to 2.0.3, but now they are, together with the corresponding texture coordinates (texCoord), which makes it possible to define more complex effects when no lights or images from Processing are needed in the shader.


Planetarium library for Processing 2.0.3+

Although the DomeProjection sketch is a faithful port of the original template, it requires putting the code that renders the scene not inside the draw() method, but inside drawScene(), which is consequently called 5 times per frame. In addition, every dome projection sketch needs to carry along the CubeMapUtils functions and associated variables. The obvious solution to make dome projection more accessible is to encapsulate all the cubemap logic in a library. This is precisely what the planetarium library (GitHub repo) does, with the added benefit that the scene-rendering code can be placed inside the draw() function as in any other Processing sketch. The library internally triggers the draw() function 5 times per frame, with the appropriate camera parameters each time, but the user doesn’t need to worry about that. So now, a simple dome projection demo can be reduced to just the following:

import codeanticode.planetarium.*;

float cubeX, cubeY, cubeZ;

void setup() {
  size(600, 600, Dome.RENDERER);
}

void pre() {
  cubeX += ((mouseX - width * 0.5) - cubeX) * 0.2;
  cubeY += ((mouseY - height * 0.5) - cubeY) * 0.2;
}

void draw() {
  background(0);
  translate(width/2, height/2, 300);
  // Draw a box that follows the mouse position
  translate(cubeX, cubeY, cubeZ);
  box(100);
}
Notice the use of the pre() function, which is called only once per frame, before the execution of draw(). When using the planetarium library, this method comes in handy for code that we don’t want to execute several times per frame, such as updating the coordinates of the geometry or other variables in the program. Similarly, the post() function can be used to run code just once per frame, after all the drawing has concluded. This library is of course an initial experiment in dome projection, and as such it is limited (no multi-projector setups are supported, for instance) and unfinished in terms of the API. Contributions and suggestions are welcome!


The image above is a screen capture of the GravitationalAttractionDome example included in the library, which adapts Daniel Shiffman’s Gravitational attraction example for dome projection.
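The per-frame call pattern described above can be summarized with a small standalone sketch (plain Python; run_frame is a hypothetical dispatcher illustrating the call order, not the library’s actual internals):

```python
def run_frame(sketch, faces=("+x", "-x", "+y", "-y", "+z")):
    """One frame of a hypothetical dome renderer: pre() runs once,
    draw() runs once per visible cubemap face, post() runs once."""
    calls = []
    sketch.get("pre", lambda: None)()
    calls.append("pre")
    for face in faces:
        # In the real library, the camera and viewport for this
        # face would be configured here before invoking draw().
        sketch["draw"]()
        calls.append("draw:" + face)
    sketch.get("post", lambda: None)()
    calls.append("post")
    return calls

# A sketch's draw() is invoked 5 times; pre() and post() once each:
trace = run_frame({"pre": lambda: None, "draw": lambda: None})
assert trace[0] == "pre" and trace[-1] == "post"
assert sum(c.startswith("draw") for c in trace) == 5
```

This is why per-frame state updates belong in pre() or post(): anything placed in draw() is repeated for every cubemap face.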


Posted September 6, 2013 by ac in Art projects, Programming


5 responses to “Dome projection”


  1. Just thought you might like this variation on the basic sketch running in ruby-processing (no need for reflection!)

  2. Rad! I just went to this event last month and there were tons of inspiring works, examples, frameworks, etc. Gonna check this out!

    • Hi Jesse, thanks for stopping by and leaving your comment! The Girls Club’s planetarium is a wonderful space, you should contact Dave Pentecost about the possibility of doing art projects in it.

  3. Thanks a lot for your work and that great summary of information and links! This helped me a lot!
