How to tell WebGL which size to show object in?

I’m starting out with WebGL and have gone through some tutorials.

I can take the first lesson as an example. In the function initBuffers() the buffer for a triangle is created like below:

function initBuffers() {
    triangleVertexPositionBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
    var vertices = [
         0.0,  1.0,  0.0,
        -1.0, -1.0,  0.0,
         1.0, -1.0,  0.0
    ];
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
    triangleVertexPositionBuffer.itemSize = 3;
    triangleVertexPositionBuffer.numItems = 3;
}

The elements (or coordinates) in the vertex array go from -1 to 1.

Now, what if I have coordinates with much smaller values like below:

var vertices = [
    0.0207835,  -0.0756165,  0.6492073,
    0.02266111, -0.0711677,  0.6476236,
    0.02020269, -0.08112079, 0.6501749
];

They are not shown because (I assume) the values are too small. Where in the code do I tell WebGL that it should “zoom” in so that the small-value vertices become visible?

Thanks for help!

Well, many ways:

  1. Just make the vertices bigger.
  2. Use the vertex shader to multiply all of the vertex numbers by some value.
  3. More likely - create a 4x4 matrix describing how you would like your model to be transformed in 3D space.
  4. Ideally - create two or perhaps even three 4x4 matrices. One that transforms your model (let’s say it’s a car with the origin in the middle of the car) to position it into the virtual world. A second that transforms the virtual world relative to wherever you want your camera/eyepoint to be. A third that performs the perspective (or perhaps orthographic) transformation into “screen space”. These matrices are typically either multiplied together and then used to transform the vertices in the vertex shader - or perhaps applied to transform the vertices in three stages from “model space” to “world space” to “camera space” and finally to “screen space”.
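A minimal sketch of option 3 in plain JavaScript (the helper names `makeScaleMatrix` and `transformVec4` are mine, not from any library): build a column-major 4x4 scale matrix, which is the layout `gl.uniformMatrix4fv` expects, and in the vertex shader multiply it with each position (`gl_Position = uModelMatrix * aVertexPosition;`).

```javascript
// Sketch of option 3: a model matrix that scales the geometry.
// Helper names here are my own, not part of WebGL or any library.

// Column-major 4x4 uniform-scale matrix, the layout that
// gl.uniformMatrix4fv expects when you upload it to the shader.
function makeScaleMatrix(s) {
    return new Float32Array([
        s, 0, 0, 0,
        0, s, 0, 0,
        0, 0, s, 0,
        0, 0, 0, 1
    ]);
}

// Multiply a column-major 4x4 matrix by a vec4 [x, y, z, w] --
// the same arithmetic the vertex shader performs on the GPU.
function transformVec4(m, v) {
    var out = [0, 0, 0, 0];
    for (var row = 0; row < 4; row++) {
        for (var col = 0; col < 4; col++) {
            out[row] += m[col * 4 + row] * v[col];
        }
    }
    return out;
}

// Scaling one of the tiny vertices by 10 makes it 10x larger:
var model = makeScaleMatrix(10.0);
var scaled = transformVec4(model, [0.0207835, -0.0756165, 0.6492073, 1.0]);
// In a real program you would instead upload the matrix with
// gl.uniformMatrix4fv(uModelMatrixLocation, false, model);
```

The same `transformVec4` math is what `uModelMatrix * aVertexPosition` does per vertex in the shader; doing it here on the CPU just makes the effect easy to inspect.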

This is a deep subject - really too much to describe adequately in a forum post. I would strongly recommend that you buy (and read!) the OpenGL “red book”. Not all of it applies to WebGL - but all of the stuff about how to position things using matrices is covered in wonderful detail.

Thanks for the reply!

I just needed something to point me in the right direction.

It all depends on what the intention of the resize is.

If you are looking to be able to resize the objects at will (i.e. you are going to be adjusting the object size dynamically) then implementing the resizing in the shader is probably a good idea.

If you are looking for a one-time resize to make objects drawn at different scales work together, then I suggest doing the scaling on the vertices themselves before pushing them to the shader. This way you only need to scale them once.

I have a scale function (both a 2f and 3f so that I can scale textures or vertices) which I use when I first load the data from my external file. This way I can make objects of different scales work together but the shader does not need to scale the object each time.
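For the one-time approach, a helper along these lines would do it (`scale3f` is my own name here; the thread does not show the poster's actual function). It walks the flat vertex array once at load time, before the data is handed to `gl.bufferData`:

```javascript
// Sketch of a one-time, CPU-side scale. The function name is my
// own guess at what a "3f" scale helper might look like.

// Scale a flat [x, y, z, x, y, z, ...] vertex array in place.
function scale3f(vertices, sx, sy, sz) {
    for (var i = 0; i < vertices.length; i += 3) {
        vertices[i]     *= sx;
        vertices[i + 1] *= sy;
        vertices[i + 2] *= sz;
    }
    return vertices;
}

// Done once when the model is loaded, so the shader never has to
// rescale it on every frame:
var vertices = scale3f([
    0.0207835,  -0.0756165,  0.6492073,
    0.02266111, -0.0711677,  0.6476236,
    0.02020269, -0.08112079, 0.6501749
], 10.0, 10.0, 10.0);
// gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
```

A matching `scale2f` for texture coordinates would be the same loop with a stride of 2.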

It all depends on if the object size needs to be changed on-the-fly.