How to Perform
Real Time BumpMapping
using OpenGL


This page presents a new, simple, but effective way to perform bumpmapping.
The technique is described step by step, and a transcription of all the "unusual" OpenGL code used is also given.
This page is a shorter web version of a Eurographics 2000 article by the same authors.

You will find examples of bumpmapped images in this page, but fast dynamic bumpmapping cannot really be appreciated through static images: for a running example, you can also download a sample program here.


Contents:

Why bumpmapping

How to

Adding multiple materials

Final Notes

 


Why Bumpmapping

Texture mapping is a standard way to introduce lots of pictorial detail on a surface mesh without adding any time-consuming geometry.

Bumpmapping has a similar purpose. This time, the shape detail itself is enhanced. Each texel of the bumpmap contains some information about the physical shape of the object at the corresponding point.

Bumpmapping can give the visual illusion of the presence on the surface of small bumps, holes, irregularities, carvings, engravings, scales, and so on; if managed efficiently, this can be achieved at a very small fraction of the rendering time that would be necessary for an object with similar characteristics modeled as geometry (that is, in the standard way, with thousands of faces).

Bumpmapping can also be used more drastically, to replace on a simplified mesh the detail that was present before simplification:

Figure 1: this St. Matthew head model is first simplified, reducing its geometry from 4 million faces (left) to just 500 (middle). The lost detail is then reproduced with an ad-hoc bumpmap (right). The resulting model is dynamically shaded and very similar to the original, but rendered incomparably faster.

Note: this page does not describe how to compute a bump-map intended for this particular, or any other, purpose, but merely how to render it. A description of the BumpTexture synthesis approach can be found in this paper.


How to

The technique consists of a preprocessing phase and a rendering phase: in the preprocessing phase, the bumpmap is transformed in various ways and then saved to disk; from then on, the renderer loads the preprocessed bump-map rather than the original one.


[Figure 2 diagram: DISPLACEMENT MAP → NORMAL MAP → INDEX MAP + TABLE OF NORMALS (preprocessing); TABLE OF NORMALS → TABLE OF SHADES; INDEX MAP + TABLE OF SHADES → SHADED COLOR MAP → RENDERED IMAGE (rendering)]
Figure 2: overview of the proposed technique. Blue arrows represent preprocessing stages, red ones rendering stages. Orange squares are final preprocessed data, saved to disk at the end of preprocessing and loaded by the renderer.

Deflecting Normals

[Diagram: DISPLACEMENT MAP → NORMAL MAP]
First of all, if the original bump-map is provided in the usual way, that is, as a displacement map (each texel is the signed distance from the polygon to the "bumped surface"), then it is converted into a normal map.
That is, for each texel, the normal of the face on which it lies is deflected (and normalized). There are many ways to do this, and since this is a preprocessing stage, it is worth doing accurately.
At the end of this process, each texel consists of the (x,y,z) coordinates of the normal vector of the "bumped surface", expressed in the same coordinate system as the object.
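As a concrete illustration, here is a minimal sketch of the displacement-to-normal conversion using central differences on the height field; the function name, the `bump_scale` parameter, and the finite-difference scheme are illustrative choices, not the authors' exact preprocessing code:

```python
import math

def displacement_to_normals(disp, bump_scale=1.0):
    """Convert a displacement map (2-D list of heights) into a normal map.

    The face normal (0, 0, 1) is deflected by the local height gradient,
    estimated with central differences, and then normalized."""
    h, w = len(disp), len(disp[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences, clamped at the borders of the map.
            dx = (disp[y][min(x + 1, w - 1)] - disp[y][max(x - 1, 0)]) * 0.5
            dy = (disp[min(y + 1, h - 1)][x] - disp[max(y - 1, 0)][x]) * 0.5
            nx, ny, nz = -bump_scale * dx, -bump_scale * dy, 1.0
            inv_len = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
            normals[y][x] = (nx * inv_len, ny * inv_len, nz * inv_len)
    return normals
```

Since this runs once, at preprocessing time, a more accurate (and slower) gradient estimate is worth the effort.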


Normal Map quantization

[Diagram: NORMAL MAP → INDEX MAP + TABLE OF NORMALS; or, as an implementation shortcut: NORMAL MAP → TRUE COLOR IMAGE → INDEX MAP + PALETTE (TABLE OF NORMALS)]

At this point, the normal map is quantized. Generally, one does not have to write a quantizer: it is enough to encode the normal map as a true-color image (the x,y,z components mapped to R,G,B) and feed it to any standard color quantization software; the resulting palette is the table of normals, and the indexed image is the index map.

This shortcut also lets us inherit some useful effects, such as dithering. Surprisingly, dithering works as well on normal quantization as it does on color quantization, enhancing the quality of the final result.

Possible sizes of the Normal Table, which affect both visual quality and rendering time, range from 32 to 16K entries. Values around 2048 are usually more than enough (the images in Figure 1 use a palette of just 256 entries!).

Now the preprocessing phase is over: save the final quantized normal map in any format you like, together with the palette. The texture coordinates of the object, and everything else, remain unchanged throughout the preprocessing phases.
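If one does write a quantizer instead of using the color-reduction shortcut, the core operation is a nearest-normal search; this brute-force sketch (function and table layout are illustrative) shows the idea:

```python
def quantize_normals(normal_map, table):
    """Replace each unit normal with the index of the nearest entry of
    `table`, maximizing the dot product (i.e. minimizing the angle)."""
    def nearest(n):
        return max(range(len(table)),
                   key=lambda i: sum(a * b for a, b in zip(n, table[i])))
    return [[nearest(n) for n in row] for row in normal_map]
```

The output index map, together with `table` as its palette, is exactly what the rendering phase consumes.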

Light Model Computation

[Diagram: TABLE OF NORMALS → TABLE OF SHADES]

During rendering, at each frame, the light model is computed: all the needed vectors (directions of lights at infinity, the view direction, and so on) are first rotated into the coordinate system of the normals, and then used to evaluate the light model on each normal of the normal table, filling the corresponding entry of the shade table.

Depending on the type of rendering required, the values of the shade table can be of different types: a plain RGB value to be rendered directly, or an alpha channel used to darken an underlying color, with or without RGB values for the specular reflection component.

The light model is applied explicitly, by the CPU, and the relatively low number of normals keeps the computation time low; therefore we have almost complete freedom in the choice of the light model or of the special effect to be used. Here are some examples:



Figure 3: some examples of lighting model choices, including Lambertian, classical Phong, environment mapping, discretized Lambertian, chrome, Phong variants, etc.
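The per-frame shade-table computation can be sketched as follows, here with an illustrative Lambertian-plus-Blinn-Phong model producing a single gray shade per table entry (the vector names and the shininess value are assumptions, not the paper's exact model):

```python
import math

def compute_shade_table(normal_table, light_dir, view_dir, shininess=32.0):
    """Fill the table of shades: the light model is evaluated once per
    table entry (a few thousand at most) instead of once per pixel."""
    # Halfway vector for the Blinn-Phong specular term.
    hx, hy, hz = (l + v for l, v in zip(light_dir, view_dir))
    hlen = math.sqrt(hx * hx + hy * hy + hz * hz)
    half = (hx / hlen, hy / hlen, hz / hlen)
    shades = []
    for n in normal_table:
        diffuse = max(0.0, sum(a * b for a, b in zip(n, light_dir)))
        specular = max(0.0, sum(a * b for a, b in zip(n, half))) ** shininess
        shades.append(min(1.0, diffuse + specular))  # clamp to a gray shade
    return shades
```

Swapping this function for a chrome, environment-map, or non-photorealistic variant changes nothing else in the pipeline, which is why the method offers so much freedom here.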

The Final Rendering

[Diagram: INDEX MAP + TABLE OF SHADES → SHADED COLOR MAP → RENDERED IMAGE; actually implemented as: INDEX MAP + TABLE OF SHADES → RENDERED IMAGE]

As you can see in the above schema, existing hardware features can be usefully reused. In fact, explicitly expanding the index map and then using the result to render the object is expensive, especially considering that the texture must be reloaded into texture memory after being expanded (at each frame!).

A much more efficient rendering is achieved by using the automatic indexing hardware of paletted texture support: all there is to do is use the shade table as the texture palette. This way, the overhead added to obtain real-time bump-mapping is almost zero (click here for a quick guide on how to implement this with OpenGL).

Unfortunately, not every graphics board supports paletted textures (click here for a quick guide on how to detect at run time whether, and to which extent, paletted textures are supported). Note that paletted texture support is a standard feature in most graphics hardware, but it is typically used to - and was originally intended to - save texture RAM, which is a less and less precious resource. Therefore, some recent graphics hardware does not support this feature. This could prove a short-sighted policy, since paletted textures can be very useful, not only in the way described in this article, but also in many other real-time procedural texture mapping techniques.

When a given graphics board lacks this support, performance may slow down considerably, because the left option in the above schema must be used. Still, in that case, the best way to do the indexing is to keep a copy of the index map in RAM, and to reload it into texture RAM, indexing it automatically during the loading process (OpenGL code to do this here).
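The software fallback amounts to a single shade-table lookup per texel; this hypothetical helper shows the idea (the resulting array would then be uploaded with glTexImage2D or glTexSubImage2D):

```python
def expand_index_map(index_map, shade_table):
    """Per-frame software indexing: one shade-table lookup per texel.

    Fallback for hardware without paletted-texture support; the expanded
    result must be re-uploaded to texture memory every frame."""
    return [[shade_table[i] for i in row] for row in index_map]
```

As a hedged note on the "indexing it automatically in the loading process" remark: legacy OpenGL can perform this lookup during the upload itself, via the GL_PIXEL_MAP_I_TO_R/G/B/A tables (glPixelMapfv) applied to GL_COLOR_INDEX-format pixel data, though whether that path is faster depends on the driver.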


Adding Multiple Materials

[Diagram: NORMAL MAP + MATERIAL MAP → INDEX MAP + TABLE OF (NORMAL & MATERIAL); instead of: NORMAL MAP → INDEX MAP + TABLE OF NORMALS]

As we have seen, there is a wide gamut of properties that can be assigned to a bumpmapped object: specular brightness and color, multiple lights, diffuse color, and other material characteristics. Still, with the technique described so far, those attributes have to be shared uniformly over the whole surface of the object (this does not mean, however, that the object has to be uniformly colored: it is possible to simply apply the shade-map over a color-textured object, with a double-pass rendering, in order to have both color and bump).

Sometimes we would like a non-uniform distribution of material characteristics: for example, certain parts of the object should reflect light like metal, others like plastic. While this is very difficult to achieve with standard bumpmapping techniques, it is easy to extend the present one to handle such situations. Suppose we have the required distribution of materials encoded in a "material map" (actually, it may well be computed on the fly during bump-map construction, as a procedural texture). Rather than using the aforesaid normal quantization, we can quantize material and normal together, in one step, into a single index map.

The rest of the method proceeds as usual, except that the light model computation now builds the color table by processing each entry of the new normal & material table rather than the old normal table; this simply means that the normal-to-color computation is parameterized over an extra, discrete parameter, i.e. the material.
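The material-parameterized light model can be sketched like this, with an illustrative Lambertian-only model and a made-up `materials` mapping from material index to diffuse color:

```python
def shade_normal_material_table(table, light_dir, materials):
    """Shade table for combined (normal, material) entries: the same
    per-normal lighting, parameterized by the discrete material index."""
    shades = []
    for normal, mat in table:
        lambert = max(0.0, sum(a * b for a, b in zip(normal, light_dir)))
        r, g, b = materials[mat]  # per-material diffuse color
        shades.append((r * lambert, g * lambert, b * lambert))
    return shades
```

In a fuller model, `materials` would also carry per-material shininess, specular color, or even a different lighting formula per material.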

As a result, we can use this either to have different material properties over the same object in a single extra pass (see Figure 4, left), or to have color and bump in a single pass (see Figure 4, middle), or both (see Figure 4, right, where the black parts have both a different color and a different shininess).

What about the implementation shortcut (see here) consisting in the use of standard color-reduction software? It is still possible, provided there is a small number of materials: you just have to encode the material in the alpha channel of the RGB image. If you cannot use the alpha channel, have a look at the paper for an alternative solution.
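A possible encoding for this alpha-channel shortcut (the byte mapping of normal components to RGB is an illustrative convention, not specified on this page):

```python
def pack_normal_and_material(normal, material_index):
    """Encode a unit normal in RGB and the material index in alpha, so a
    standard color quantizer can quantize both at once."""
    r, g, b = (int(round((c + 1.0) * 0.5 * 255)) for c in normal)
    return (r, g, b, material_index)
```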

Figure 4: these models use two different materials encoded in their index map together with the normals; each uses the material field for a different purpose:

  • To the left, where a color map is rendered first and the shade map is then applied over it, different shininess values are assigned to the two materials, so that the wooden bunny seems half painted with a transparent varnish.
  • In the middle, each material represents a different uniform color, so that any object with a small number of uniform colors (like most plastic objects) can be rendered with bump and color in a single pass.
  • To the right, each material is assigned both a different light model AND a different color.



Final Notes

The bumpmapping technique described here was proposed in the article below.

It has several advantages, and also some drawbacks, compared to other approaches with similar purposes. For more information about the subject, including a brief overview of the state of the art, numerical results, and a more detailed description of the technique, please see the companion paper.

Acknowledgements
The St. Matthew mesh is courtesy of the Digital Michelangelo Project, led by Prof. Marc Levoy, Stanford University.


Papers

Real Time, Accurate, Multi-Featured Rendering of Bump Mapped Surfaces
Marco Tarini, Paolo Cignoni, Claudio Rocchini and Roberto Scopigno
Computer Graphics Forum (Eurographics Conference Issue), Blackwell Publishers, vol. 19(3), 2000, pp 119-130

Abstract
We present a new technique to render in real time objects which have part of their high frequency geometric detail encoded in bump maps. It is based on the quantization of normal maps, and achieves excellent results both in rendering time and rendering quality, with respect to alternative methods. The method proposed also allows one to add many interesting visual effects, even for objects with large bump maps, including non-realistic rendering, chrome effects, shading under multiple lights, rendering of different materials within a single object, specular reflections and others. Moreover, the implementation of the method is not complex and can be eased by software reuse.

 

Preserving attribute values on simplified meshes by re-sampling detail textures
P. Cignoni, C. Montani, C. Rocchini, R. Scopigno, M. Tarini
The Visual Computer, Springer International, Vol. 15 (10), 1999, 519-539.
(see also:  CNUCE - C.N.R. Tech. Rep. CNUCE-B4-1998-017, Nov. 1998, pp.24.)