Not much time to write on the blog these days, so things are piling up quickly. This post in particular is about a project from two months ago :-), when I participated in a summer course at the Communication Design program at Konkuk University, Seoul. The class was taught by Jihyun Kim and focused on making interactive applications with Android devices. I helped the students use the Open Sound Control (OSC) protocol and sensors in Processing.Android. For most of them, this was their first experience developing on a smartphone, and the outcome was very engaging and fun.
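To give a sense of what the students were working with, here is a minimal sketch of how an OSC message is laid out on the wire (address pattern and type tag strings padded with NULs to 4-byte boundaries, followed by big-endian arguments). In practice a Processing sketch would use a library such as oscP5 rather than encoding by hand; the `/accel` address and the three-float payload are just illustrative choices for sensor data.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class OscSketch {
    // Pad an ASCII string with NULs to a multiple of 4 bytes, as the OSC
    // spec requires (always at least one terminating NUL).
    static byte[] oscString(String s) {
        byte[] raw = s.getBytes(StandardCharsets.US_ASCII);
        int padded = ((raw.length / 4) + 1) * 4;
        byte[] out = new byte[padded];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    // Encode an OSC message carrying three floats, e.g. accelerometer x, y, z.
    static byte[] encode(String address, float x, float y, float z) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        buf.write(oscString(address));   // padded address pattern
        buf.write(oscString(",fff"));    // type tags: three 32-bit floats
        ByteBuffer args = ByteBuffer.allocate(12); // big-endian by default
        args.putFloat(x).putFloat(y).putFloat(z);
        buf.write(args.array());
        return buf.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] msg = encode("/accel", 0.1f, 9.8f, 0.0f);
        // address (8 bytes) + type tags (8 bytes) + arguments (12 bytes) = 28
        System.out.println(msg.length);
    }
}
```

The resulting byte array is what gets sent in a single UDP datagram; the receiving side parses the type tag string to know how to decode the arguments.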
Archive for the ‘Processing’ Tag
Thomas Diewald (who is also the author of the excellent Kinect library dLibs_freenect, among many other Processing pieces) recently created another library for generating real-time fluid simulations in Processing, using either the CPU or the GPU: diewald_fluid. The results are quite amazing, and it is also very fast, especially when using the GPU.
Hello! A few days ago I started working at Fathom, the information visualization studio run by Ben Fry. While here I will work on various data visualization projects, especially those requiring real-time, interactive graphics with OpenGL, and continue my involvement in the development of the Processing language and environment. This job also encompasses an exciting collaboration with the lab of Pardis Sabeti at Harvard University, focused on the creation of new tools for visualizing epidemiological data and helping to understand the factors that determine the origin and spread of various diseases. During my first week at Fathom I got started by working on an Android port of the “Stats of the Union” visualization, originally available for iPad tablets.
This new release of GLGraphics comes hand in hand with GSVideo 0.9, also available today. These new versions of the two libraries introduce a combined mode of operation that greatly improves video playback performance, which I will describe in the next post. Here I will focus exclusively on the new rendering and shading features of GLGraphics.
One of my motivations for developing Software Libre is the possibility of sharing knowledge and creating potentially useful tools, not only for myself but also for other artists and coders. So it is very encouraging to see work being done with some of the tools I have been putting together over the last couple of years. Check the rest of this post to see some great projects made with Processing, GLGraphics, GSVideo, and Proscene.
In the previous post I discussed the integration of OpenCL into Processing by means of its library mechanism. An early version of a new OpenCL physics library, based on traer.physics, was used to simulate a couple of particle systems in real time (N-Body and Springs). An interesting application of these new methodologies is GPU acceleration of compute operations in Cytoscape. Networks of protein-protein interactions handled in Cytoscape usually involve thousands of nodes and edges. The visualization and analysis of such networks is computationally demanding, and parallel processing on GPUs provides a way to cope with this complexity. So, as a new experiment with OpenCL and OpenGL, I used CLPhysics to simulate a force-directed network layout in Cytoscape.
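To make the idea concrete, below is a minimal CPU sketch of one step of a force-directed layout: pairwise repulsion pushes all nodes apart, Hooke-style springs pull connected nodes toward a rest length, and an Euler step integrates the result. This is the kind of per-node computation a GPU physics library parallelizes across thousands of nodes; all names and constants here are illustrative, not the CLPhysics API.

```java
import java.util.Arrays;

public class ForceLayout {
    // One layout step over node positions (x, y) and an edge list.
    static void step(float[] x, float[] y, int[][] edges,
                     float repulsion, float springK, float restLen, float dt) {
        int n = x.length;
        float[] fx = new float[n], fy = new float[n];
        // Coulomb-like repulsion between every pair of nodes.
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                float dx = x[i] - x[j], dy = y[i] - y[j];
                float d2 = dx * dx + dy * dy + 1e-6f;
                float d = (float) Math.sqrt(d2);
                float f = repulsion / d2;
                fx[i] += f * dx / d; fy[i] += f * dy / d;
                fx[j] -= f * dx / d; fy[j] -= f * dy / d;
            }
        }
        // Spring attraction along edges, toward the rest length.
        for (int[] e : edges) {
            int a = e[0], b = e[1];
            float dx = x[b] - x[a], dy = y[b] - y[a];
            float d = (float) Math.sqrt(dx * dx + dy * dy) + 1e-6f;
            float f = springK * (d - restLen);
            fx[a] += f * dx / d; fy[a] += f * dy / d;
            fx[b] -= f * dx / d; fy[b] -= f * dy / d;
        }
        // Euler integration of the accumulated forces.
        for (int i = 0; i < n; i++) { x[i] += dt * fx[i]; y[i] += dt * fy[i]; }
    }

    public static void main(String[] args) {
        float[] x = {0f, 1f, 0f}, y = {0f, 0f, 1f};
        int[][] edges = {{0, 1}, {1, 2}};
        for (int it = 0; it < 100; it++) step(x, y, edges, 0.01f, 0.5f, 1f, 0.05f);
        System.out.println(Arrays.toString(x) + " " + Arrays.toString(y));
    }
}
```

The repulsion loop is O(n²), which is exactly why moving it to the GPU pays off for networks with thousands of nodes.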
This is an exciting time to work with real-time graphics, as the capabilities of GPUs keep increasing in raw performance and functionality, and as the “GL family” of programming APIs (OpenGL, GLSL, OpenGL ES, JOGL) has been evolving rapidly in the past few years to give access to new hardware features and devices. One area I have been interested in for some time is General-Purpose computing on GPUs (GPGPU). GPUs can allow for major speed-ups in computational problems that are suitable for data parallelization. Originally, GPGPU calculations (such as the simulation of a particle system) were carried out with the graphics API itself, writing the computation “kernels” as Cg or GLSL shaders. The major disadvantage of this approach was the need to cast a general computation algorithm in graphics terminology: an array of particle positions became a texture, the output of a calculation was stored in a color variable, and so on. Despite these complications, many early GPGPU projects were carried out this way, and I also implemented some simple particle systems for non-photorealistic rendering using OpenGL and GLSL shaders. Today there are several APIs specifically designed to program GPUs as general parallel processors; among the most mature and widely used are CUDA and OpenCL. I recently chose to learn OpenCL, since it is a hardware-agnostic API aimed at supporting GPUs from different vendors as well as CPUs and other compute devices, and there are already several Java bindings for OpenCL in an advanced stage of development (Jogamp’s JOCL, JavaCL, and JOCL.org). This opened up the possibility of combining OpenCL and OpenGL in Processing in order to simulate and render large particle systems with full GPU acceleration. Continue reading for the details (and also for some video renderings made possible by Syphon).
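To illustrate what a compute “kernel” amounts to, here is a CPU sketch of the per-particle update an OpenCL particle simulation would parallelize: on the GPU, the body of this loop runs as one work-item per particle instead of a sequential loop. The gravity constant and time step are arbitrary illustrative values, not from any particular library.

```java
public class ParticleKernel {
    // Sequential version of the per-particle kernel: each loop iteration
    // corresponds to one GPU work-item operating on particle i.
    static void integrate(float[] px, float[] py, float[] vx, float[] vy,
                          float gravity, float dt) {
        for (int i = 0; i < px.length; i++) {
            vy[i] += gravity * dt;   // accumulate acceleration into velocity
            px[i] += vx[i] * dt;     // Euler position update
            py[i] += vy[i] * dt;
        }
    }

    public static void main(String[] args) {
        // One particle launched horizontally, falling under gravity.
        float[] px = {0f}, py = {0f}, vx = {1f}, vy = {0f};
        for (int step = 0; step < 10; step++)
            integrate(px, py, vx, vy, -9.8f, 0.1f);
        System.out.println(px[0] + ", " + py[0]);
    }
}
```

Because each particle's update depends only on its own state, the loop is trivially data-parallel, which is precisely what makes particle systems such a natural fit for GPGPU.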