Tutorials


These tutorials are aimed at people with a reasonable knowledge of web programming (HTML and JavaScript) and a basic knowledge of 3D web programming (the WebGL API).


Tutorial 9 : Model, Technique and Renderer

This tutorial gives an overview of the SpiderGL component called Model and of its related classes Model, Technique and ModelRenderer, three useful tools designed to make working with multiple meshes and multiple shaders easier.

View Demo | Download Source

In this lesson, unlike most of the previous ones, we introduce three tools exclusive to the SpiderGL library, designed expressly to make it easier to handle multiple shader programs and to render multiple or complex meshes in a single scene.
For this purpose the library provides a dedicated high performance component, Model, aimed at WebGL programmers who want to take their computer graphics web applications a step further, and composed of three specialized classes:

  • Model
  • Technique
  • ModelRenderer

The first two of these classes (Model and Technique) take advantage of JavaScript's flexibility to define two special descriptors, from which two corresponding objects are instantiated: one containing the information needed to represent one or more geometric models, the other the information needed to handle one or more shader programs.
The last one (ModelRenderer) instead produces a SpiderGL object that, using the two entities created above (plus many useful functions of its own), drives the way the scene is drawn.

The combined use of all three of the aforementioned classes can greatly streamline and simplify the structure of a WebGL application's source code, as we will see later.

The usefulness of this system becomes evident, as said, especially when we need to manage more than one shader, or to render several models at the same time, just as in the proposed example: here we re-introduce the example of tutorial 5 (the one with a single colored cube in the scene), this time drawing five animated cubes simultaneously.
So, taking the code of that past lesson as a reference, let's take a look at how a source file can change once the Model component is introduced.

As ever, the relevant and interesting new code is placed in the "CanvasHandler.prototype" function, so as usual we can neglect the HTML structure and the other basic JavaScript functions.
The use of the methods of the Model, Technique and ModelRenderer classes, though it leaves the basic routine sequence unchanged (composed of the "onInitialize", "onKeyPress", "onAnimate" and "onDraw" stages), modifies the source code in a relevant way.
In particular, during the initialization phase the Technique and then the Model class functions appear first, in order to create the two objects that will allow us (through the last class, ModelRenderer, employed instead in the "onDraw" routine) to handle respectively the shaders and the models.

Let's therefore begin to analyse the code, starting as usual from the initialization. At the top of the "onInitialize" routine there are the vertex and fragment shaders, perfectly identical to those of the example in lesson 5.
We find the first relevant differences in the code lines just after, the ones relating to the shader program declaration:

var vShader = new SglVertexShader(gl, {source: vsSource});
var fShader = new SglFragmentShader(gl, {source: fsSource});

var shaderProgram = new SglProgram(gl, {
  autoLink: true, 
  shaders: [vShader, fShader],
  attributes: {
    aColor: 1 
  }
});

As you can see in the block of code above, compared to the shader program definitions we have been accustomed to seeing, the (optional) default value for the uniform variable of the shader has disappeared, and so has the (necessary) index specification for one of the shader attribute variables (the one called "aPosition").
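
Just as a reminder, in lesson 5 all of this information was carried by the program declaration itself; it looked roughly like the sketch below (a hedged reconstruction, not a copy of the old listing: in particular the "uniforms" field with its default value is our assumption about how that code was set up).

// Hedged sketch of the lesson 5 style declaration: both attribute
// indices and the uniform default value are specified here, so no
// descriptor is needed (field names are assumed, not copied).
var shaderProgram = new SglProgram(gl, {
  autoLink   : true,
  shaders    : [vShader, fShader],
  attributes : {
    aPosition : 0,
    aColor    : 1
  },
  uniforms   : {
    uModelViewProjectionMatrix : SglMat4.identity()
  }
});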

This is possible because part of the information attached to a shader program can now be defined directly in a new verbose JavaScript structure (informally called a descriptor), which will later be one of the input parameters of the SpiderGL "Technique" constructor (the main method of the namesake class dedicated to shader program management).
The purpose of this descriptor is (in this case) to make explicit, through certain keywords (like "name", "program", "semantic", "vertexStreams", etc.), the specific parameters that characterize a shader program.
In this simple and intuitive way it is possible to create a JavaScript object linked (by name) to a previously instantiated shader program, defining in the dedicated fields ("vertexStreams" and "globals") the needed (or optional) information about its attribute and uniform variables, like attachment indices or default values:

var techDescriptor = {
  name     : "common",
  program  : shaderProgram,
  semantic : {
    vertexStreams : {
      "aPosition" : {
        semantic : "POSITION",
        index    : 0,
        value    : [ 0.0, 0.0, 0.0, 0.0 ]
      },
      "aColor"   : [ 1.0, 0.0, 0.0, 1.0 ]
    },
    globals : {
      "uModelViewProjectionMatrix" : { 
        semantic : "MVPM", 
        value    : SglMat4.identity() 
      }
    }
  }
};

As we can note in the above block of code, in this example, for purely educational purposes, we have chosen a hybrid declaration of the attribute parameters.
In fact, while the "aPosition" attribute has a full declaration (with semantic identifier name, index and default value), the one used for the vertex colors only has the (optional) default value: its index was already specified earlier (in the shader program definition), and its semantic name is the default one, which for SpiderGL (when the attribute name is composed of the letter "a" followed by something else) corresponds to the attribute name minus its first letter (so, in this example, the default semantic name for the attribute "aColor" is "Color").
Naturally the programmer can freely choose to use a more or less homogeneous style for all the attribute definitions of his shader; a fully explicit version of the "aColor" stream is sketched below for comparison.
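
Here is how the "aColor" stream could be written out in full, mirroring the style used for "aPosition" (the explicit semantic name "COLOR" matches the one used later in the model descriptor bindings; this fragment is just an illustrative alternative to the single line in the descriptor above):

// Hedged sketch: drop-in replacement for the "aColor" entry of the
// "vertexStreams" field above, written in the fully explicit style.
"aColor" : {
  semantic : "COLOR",               // explicit semantic name instead of the implicit default
  index    : 1,                     // same index given to the attribute in the SglProgram options
  value    : [ 1.0, 0.0, 0.0, 1.0 ] // default color used when no buffer is bound
}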

Once the drafting of the descriptor is complete, it is possible to pass this object directly to the "Technique" constructor of the Model component.
The constructor shown below, taking as input the usual WebGL rendering context and the previously defined descriptor, instantiates an object that will be used in the rendering phase to set and activate the handled shader program.

this.sceneTechnique = new SglTechnique(gl, techDescriptor);

This done, we have finished the work with the shaders (and so the use of the Technique class), and we can move on to instantiating the models we will draw on the scene. This means introducing the Model class, which as said allows us to manage the various meshes, and which works in the same way as the Technique class: first it requires a descriptor object (this time, of course, set up for the geometric models), which is then passed to a constructor in order to obtain the final SpiderGL model object.

The model descriptor (which, as you can note by scrolling the code, is preceded by the usual plain JavaScript arrays for the vertex and index buffers) follows the same philosophy as the shader program descriptor, namely it specifies the parameters that characterize a mesh through selected keywords.
It can be split into several fields ("data", "access", "semantic", "logic"), each one with a task different from the others.
Let's start by observing the first one used in this example, i.e. "data":

var modelDescriptor = {
  data: {
    vertexBuffers: {
      "cubeVertexPositionBuffer": {
        type         : SGL_FLOAT32,
        glType       : gl.FLOAT,
        untypedArray : cubeVertices
      },
      "cubeVertexColorBuffer": {
        typedArray   : new Float32Array(cubeColors)
      }
    },
    indexBuffers: {
      "cubeIndexTrianglesBuffer": {
        typedArray    : new Uint16Array(cubeIndicesTriangles)
      },
      "cubeIndexLinesBuffer": {
        type          : SGL_UINT16,
        glType        : gl.UNSIGNED_SHORT,
        untypedArray  : cubeIndicesLines
      },
    }
  },

As the name suggests, this field is used to build the basic data of a mesh: the vertex and the index buffers. Through the appropriate keywords "vertexBuffers" and "indexBuffers" the needed parameters can be set, always exploiting the verbose JavaScript style: first specifying the buffer names, and then making their functional options (like the data sources) explicit.
Once again for purely educational purposes, two different kinds of buffer allocation are presented here (both for the vertex and for the index buffers): one that uses an untyped array (referring directly to the plain JavaScript list, while specifying the "type" parameter), and another that employs an already typed array (allocated in place).
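
The plain JavaScript arrays the descriptor refers to ("cubeVertices", "cubeColors", "cubeIndicesTriangles" and "cubeIndicesLines") are the usual service lists defined just above it and not reported in this excerpt; as a reminder, they look roughly like the following sketch of a unit cube (illustrative values, not the exact ones of the original listing):

// Hedged sketch: plain arrays for an 8-vertex cube, consistent with the
// sizes and counts used by the descriptor (3 floats per position and per
// color, 36 triangle indices, 24 line indices).
var cubeVertices = [
  -1.0, -1.0,  1.0,   //  0: front bottom left
   1.0, -1.0,  1.0,   //  1: front bottom right
   1.0,  1.0,  1.0,   //  2: front top    right
  -1.0,  1.0,  1.0,   //  3: front top    left
  -1.0, -1.0, -1.0,   //  4: back  bottom left
   1.0, -1.0, -1.0,   //  5: back  bottom right
   1.0,  1.0, -1.0,   //  6: back  top    right
  -1.0,  1.0, -1.0    //  7: back  top    left
];

var cubeColors = [
  1.0, 0.0, 0.0,   0.0, 1.0, 0.0,   0.0, 0.0, 1.0,   1.0, 1.0, 0.0,
  1.0, 0.0, 1.0,   0.0, 1.0, 1.0,   1.0, 1.0, 1.0,   0.0, 0.0, 0.0
];

var cubeIndicesTriangles = [          // 12 triangles, 36 indices
  0, 1, 2,   0, 2, 3,                 // front
  1, 5, 6,   1, 6, 2,                 // right
  5, 4, 7,   5, 7, 6,                 // back
  4, 0, 3,   4, 3, 7,                 // left
  3, 2, 6,   3, 6, 7,                 // top
  4, 5, 1,   4, 1, 0                  // bottom
];

var cubeIndicesLines = [              // 12 edges, 24 indices
  0, 1,   1, 2,   2, 3,   3, 0,       // front face
  4, 5,   5, 6,   6, 7,   7, 4,       // back face
  0, 4,   1, 5,   2, 6,   3, 7        // connecting edges
];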

The "data" field don't need others explanations, so we can pass to the next, the more substantial "access":

  access: {
    vertexStreams: {
      "bufferedPositions": {
        buffer    : "cubeVertexPositionBuffer",
        size      : 3,
        type      : SpiderGL.Type.FLOAT32,
        glType    : gl.FLOAT,
        normalized: false,
        stride    : 0,
        offset    : 0
      },
      "bufferedColors": {
        buffer    : "cubeVertexColorBuffer",
        size      : 3,
        type      : SpiderGL.Type.FLOAT32,
        glType    : gl.FLOAT,
        normalized: false,
        stride    : 0,
        offset    : 0
      }
    },
    primitiveStreams: {
      "triangles": {
        buffer : "cubeIndexTrianglesBuffer",
        mode   : SGL_TRIANGLES,
        first  : 0,
        count  : 36,
        type   : SGL_UINT16,
        glType : gl.UNSIGNED_SHORT,
        offset : 0
      },
      "points": {
        buffer : "cubeIndexTrianglesBuffer",
        mode   : SGL_POINTS,
        first  : 0,
        count  : 36,
        type   : SGL_UINT16,
        glType : gl.UNSIGNED_SHORT,
        offset : 0
      },
      "edges": {
        buffer : "cubeIndexLinesBuffer",
        mode   : SGL_LINES,
        first  : 0,
        count  : 24,
        type   : SGL_UINT16,
        glType : gl.UNSIGNED_SHORT,
        offset : 0
      }
    }
  },

The "access" field, that, how you can guess by the name specifies how to access to the data, can be further subdivided into two sections: "vertexStreams" and "primitiveStreams". The first one is dedicated to the informations that will be used for the attribute variables attachment, while the second one contains the parameters setting for various primitives drawing modality (all these specific in the previous lessons were placed in the "onDraw" routine).
The way followed to define these informations is the same previously seen, the one that exploiting the descriptor verbose style first indicate the name of the buffer at issue, and then set the functional parameters.
In this example we are attaching the pointers to two vertex buffers (the first for the vertex spatial coordinates and the following for the vertex colors), and to three different index buffers (specifying three different kinds of primitives rendering).

The next model descriptor field is named "semantic", and is reported here below:

  semantic: {
    bindings: {
      "mainBinding": {
        vertexStreams: {
          "POSITION" : ["bufferedPositions"],
          "COLOR"    : ["bufferedColors"]
        },
        primitiveStreams: {
          "FILL"   : ["triangles"],
          "POINTS" : ["points"],
          "LINES"  : ["edges"]
        }
      }
    },
    chunks: {
      "mainChunk": {
        techniques: {
          "common": {
            binding : "mainBinding"
          }
        }
      }
    }
  },

This field is dedicated to the bindings and, like the previous one, it too can be subdivided into two parts: the first one called "bindings" and the second one named "chunks".
In the first, as the name suggests, the binding occurs between the referenced vertex buffers and the related shader attributes, and between the attached index buffers and user selected identifier names (which will be useful later). The second one is instead used to connect the bindings just defined above with the right shader program.
All the links in this field are defined by semantic associations between two names, which is where the name of this block comes from.

Finally, the last section of our model descriptor is the one called "logic":

  logic: {
    parts: {
      "mainPart": {
        chunks: ["mainChunk"]
      }
    }
  }
};

This field has no particular utility in this example (where we have five geometric models drawn independently of each other), but it is very important in case you have on the scene a mesh composed of several different parts, because it is the one appointed to collect together the various sections of the model (a hypothetical multi-part case is sketched below).
For our purposes it is sufficient to point the "chunks" keyword to the previously defined "mainChunk" semantic binding and we are done.
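
Just to give an idea of the intended use, a hypothetical model split into two parts (the names "bodyPart", "lidPart" and their chunks are invented for illustration) could declare its "logic" field roughly like this:

  // Hedged sketch: "logic" field of a hypothetical two-part model, where
  // each part points to its own chunk defined in the "semantic" section.
  logic: {
    parts: {
      "bodyPart": {
        chunks: ["bodyChunk"]
      },
      "lidPart": {
        chunks: ["lidChunk"]
      }
    }
  }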

With the "logic" field practically ends the model descriptor, and we are so ready to pass this entities to the SpiderGL Model constructor:

this.sceneModel = new SglModel(gl, modelDescriptor);

As seen before for Technique, this constructor takes as input the WebGL rendering context and the descriptor, and outputs a SpiderGL object that will be used in the rendering phase to select the model to handle and to draw.

The previous code line also ends the use of the Model class, but not the review of the "onInitialize" routine.
Among the usual variables set out at the end of this stage there is in fact a couple of assignments that are worth a closer look:

this.primitives     = "FILL";

this.angle          = -45.0;
this.rate           =  60.0;	

this.ui.animateRate =   0.0;

this.xform    = new SglTransformationStack();
this.renderer = new SglModelRenderer(gl);

The first one defines the default value of the variable called "primitives", here set to "FILL", i.e. the keyword we chose in the semantic primitive stream binding of the model descriptor to indicate rendering with triangles as primitives (this variable will then be used to store the primitive type selected by the user via the keyboard, and to set it in the rendering phase).
The second one instead introduces the first ModelRenderer class function call. In the last line of code in the above block, using the "ModelRenderer" constructor (which belongs to the namesake class and takes as input the WebGL rendering context), the SpiderGL object "renderer" is instantiated; it will be the one that leads the rendering in the "onDraw" routine (as we will see later).

Well, the creation of this object closes the initialization stage. The following stages would be the ones dedicated to the event listeners ("onKeyPress" and "onAnimate"), but no new code lines appear here compared to lesson 5, so we can jump over them and go ahead (just note that the primitive change, related to pressing the keys "1", "2" and "3", this time only involves the "primitives" variable, and no longer the index buffers as in tutorial 5, thanks to the Model component, which performs this switch in a dynamic and automatic way; a minimal sketch is reported below as a reminder).
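
A minimal sketch of what that key handling can be reduced to is the following (the mapping of "1", "2" and "3" to the three primitive keywords, the handler signature and the redraw request reflect the usual SpiderGL idiom and our assumptions, not the original listing):

onKeyPress : function (key) {
  // Hedged sketch: the keys only switch the semantic primitive keyword;
  // the Model component will pick the matching index buffer at draw time.
  if (key == "1") this.primitives = "FILL";    // triangles
  if (key == "2") this.primitives = "LINES";   // edges
  if (key == "3") this.primitives = "POINTS";  // points
  this.ui.postDrawEvent();                     // ask SpiderGL for a redraw
},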

So we arrive at the last routine, "onDraw". The content of this stage is heavily modified compared to the usual one, due to the introduction of the ModelRenderer system, which rewrites a lot of basic functions or introduces new ones.
Scrolling down the code we immediately find the first minor changes: compared to lesson 5 the viewport setting, some transformation stack operations on the model matrix, the shader program binding and also the setting of the uniform shader variables have been removed (the latter replaced by the allocation of a plain JavaScript variable, with an appropriate property that points to the model view projection matrix).
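
For reference, the top of the new "onDraw" can then be reduced to something like this sketch, which gathers the objects used by the rendering block reported further below (the local variable names and the ui accessors for width and height are assumptions chosen to match that block; "MVPM" matches the semantic name declared in the technique descriptor):

// Hedged sketch of the onDraw set up: local shortcuts plus the plain
// "globals" object whose "MVPM" property will point, at each iteration,
// to the current model view projection matrix.
var w        = this.ui.width;
var h        = this.ui.height;
var xform    = this.xform;
var renderer = this.renderer;

var globals = {
  "MVPM" : xform.modelViewProjectionMatrix
};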

All the removed functions have been moved forward in the code, into the part which represents the real innovation of this tutorial.
Thanks to the ModelRenderer object instantiated earlier, and to its numerous methods, it is in fact possible to build a compact rendering structure (reported below) that, combining the use of the Model and Technique objects too, can streamline and simplify the source code a lot during drawing (even in the presence of multiple meshes or multiple shaders):

renderer.begin();
  renderer.setViewport(0, 0, w, h);
  renderer.setPrimitiveMode(this.primitives);
  renderer.setTechnique(this.sceneTechnique);
  renderer.setDefaultGlobals();
  renderer.setModel(this.sceneModel);
  for (var i = 0; i < 5; ++i) {
    xform.model.push();
    xform.model.translate([-7.0 + i * 3.5, 0.0, 0.0]);
    xform.model.rotate(sglDegToRad(this.angle), [0.0, 1.0, 0.0]);

    globals.MVPM = xform.modelViewProjectionMatrix;

    renderer.setGlobals(globals);
    renderer.renderModel();
    xform.model.pop();
  }
renderer.end();

The new SpiderGL rendering structure, enclosed between the ModelRenderer methods "begin" and "end", contains all the function calls needed to set the shaders and the models, and so the virtual scene.
Looking more deeply at the source code and proceeding in order, after the "begin" we find in this example five renderer related setting calls:

  • the SpiderGL wrapper for the viewport set up (same inputs as the classic WebGL viewport function);
  • the primitive rendering mode setting (takes as input the previously defined keyword stored in the "primitives" variable);
  • the shader technique set up (through the technique object);
  • the default setting of the uniform shader variables;
  • the model set up (through the model object).

Just after, inserted in a "for" cycle in order to render five models, there are the classic transformation stack operations (on the model matrix) and the update of the JavaScript object holding the model view projection matrix, followed by two more renderer function calls:

  • the update of the uniform shader variables;
  • the actual rendering of the single model.

After these last calls the work related to the renderer object ends, and the "onDraw" stage can be closed.

As is easy to see from the above block of code, using the SpiderGL Model component the drawing of (even) complex scenes becomes a simple sequence of intuitive commands, which structure the rendering related source code, ordering and compacting it, so as to make the "onDraw" routine more streamlined and efficient.
Moreover, the adopted descriptor method, thanks to its modular skeleton, makes it more linear to instantiate and handle shader programs and geometric models, so as to make the "onInitialize" routine more organized and consistent as well.

This tutorial therefore shows how the SpiderGL classes Model, Technique and ModelRenderer can help WebGL programmers to code complex scenes, introducing a tool aimed at those of them who want to take their computer graphics web applications a step further.
It also closes this round of lessons, after which the WebGL user should have learned the basic structure and functionality of SpiderGL, and glimpsed the great potential of this JavaScript library.
However, once more we would like to advise you to browse the online documentation, so as to have a full overview of everything SpiderGL can do.