The Computer Graphics Forum 2006 Cover Image has been selected by the CGF editorial board. We thank everyone who submitted this year, and we hope they will participate in next year's contest.

Winner of the Computer Graphics Forum 2006 Cover Image Contest

Mario Sormann and Konrad Karner

VRVis Research Center, Vienna

The 3D reconstruction group at the VRVis Research Center, Austria, developed a method to generate 3D models from a set of images with high redundancy. The 3D reconstruction is based on four consecutive steps, namely automatic orientation, multiple image segmentation, dense matching, and multiview texturing.

Second place (ex aequo):

Chris Chiu

CG Club, Institute for Computer Graphics and Algorithms, Vienna University of Technology

The image depicts the fields of engineering/computer science (the gear), art (the brush and palette), physics (the atom), and mathematics (the math symbols); in the center, as an "igniting spark" for this multidisciplinary universe, an eye symbolizes computer graphics.
The image started with the idea of pointillism: many little dots creating a whole image. For this purpose, I wrote a small real-time rendering program in OpenGL that outputs a particle field based on an image. The base eye image was drawn traditionally, with pencil and paper (and digital inking/coloring).
The particles are spread randomly using a randomized displacement term, creating the appearance of a vortex. A radial glow effect was layered over it as a post-processing step. The background as well as the 3D models were created in common 3D content creation applications and then blended into the image.
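The pointillism idea described above can be sketched in a few lines: sample particle positions from an image with probability proportional to brightness, then swirl and jitter them. This is a minimal illustrative sketch, not the author's OpenGL program; the function and parameter names (`swirl`, `jitter`, etc.) are assumptions for illustration.

```python
import numpy as np

def particle_field(image, n_particles=5000, swirl=0.4, jitter=2.0, seed=0):
    """Sample pixels of a grayscale image and displace them into a
    vortex-like particle cloud (a sketch of the pointillism idea)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # sample pixel positions with probability proportional to brightness
    probs = image.ravel() / image.sum()
    idx = rng.choice(h * w, size=n_particles, p=probs)
    ys, xs = np.divmod(idx, w)
    # centered coordinates; rotate each particle by an angle that grows
    # with its radius (the "vortex"), then add random jitter
    cx, cy = w / 2.0, h / 2.0
    dx, dy = xs - cx, ys - cy
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) + swirl * r / max(w, h)
    px = cx + r * np.cos(theta) + rng.normal(0.0, jitter, n_particles)
    py = cy + r * np.sin(theta) + rng.normal(0.0, jitter, n_particles)
    brightness = image[ys, xs]
    return np.stack([px, py, brightness], axis=1)

# tiny synthetic "image": a bright ring on a dark background
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((np.hypot(xx - 32, yy - 32) - 20) ** 2) / 20.0) + 1e-6
pts = particle_field(img, n_particles=1000)
print(pts.shape)  # (1000, 3): x, y, brightness per particle
```

In a real renderer each row would become one point sprite, drawn with the sampled brightness/color.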

Carsten Dachsbacher

Computer Graphics Group, University of Erlangen-Nuremberg

Some details on how these images were generated:
- they are rendered in real time on a commodity PC with a GeForce 6800 graphics board
- the elevation data is taken from the well-known Puget Sound data set, showing Mount Rainier
- the terrain is textured procedurally at runtime, i.e. the user can interactively modify the texturing (e.g. the distribution of grass, rock, etc.).
The method used for texturing and rendering (developed by me and Marc Stamminger) will be published in the ShaderX4 book. It uses a dynamically built quad-tree for storing textures, which are generated and updated on the fly depending on viewing parameters, lighting conditions, etc. Water properties (scattering coefficients, water surface movement, etc.) as well as atmospheric conditions can be controlled by the user.
- the rendering is performed in full high dynamic range; afterwards, a global tone-mapping operator and (optionally) a bloom filter are applied to produce the final image for display.
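The final step above (global tone mapping plus optional bloom) can be sketched as follows. The exact operator used is not stated, so this sketch uses a Reinhard-style global operator as an illustrative choice; the bloom pass and all parameter names (`key`, `bloom_strength`, `bloom_threshold`) are assumptions, not details of the published method.

```python
import numpy as np

def tone_map(hdr, key=0.18, bloom_strength=0.0, bloom_threshold=1.0):
    """Map an HDR luminance image to [0, 1] with a Reinhard-style global
    operator (illustrative choice), optionally adding a crude bloom."""
    eps = 1e-6
    # log-average luminance characterizes overall scene brightness
    log_avg = np.exp(np.mean(np.log(hdr + eps)))
    scaled = key / log_avg * hdr
    ldr = scaled / (1.0 + scaled)  # compress to [0, 1)
    if bloom_strength > 0.0:
        # crude bloom: box-blur the over-threshold highlights and add back
        bright = np.maximum(hdr - bloom_threshold, 0.0)
        kernel = np.ones(9) / 9.0
        blurred = bright
        for axis in (0, 1):
            blurred = np.apply_along_axis(
                lambda row: np.convolve(row, kernel, mode="same"),
                axis, blurred)
        ldr = np.clip(ldr + bloom_strength * blurred, 0.0, 1.0)
    return ldr

# fake HDR frame with a heavy-tailed luminance distribution
hdr = np.random.default_rng(1).lognormal(0.0, 1.5, (32, 32))
ldr = tone_map(hdr, bloom_strength=0.1)
print(ldr.min() >= 0.0 and ldr.max() <= 1.0)  # True
```

On the GPU the same mapping runs per pixel in a fragment shader, with the blur done as separable passes over a downsampled highlight buffer.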