Vertex Buffer Objects

For my terrain renderer I’m trying to implement VBOs to speed up the rendering.

This is the first time I’m using VBOs, so I have some questions for the OGL gurus :smiley:

I’m rendering my heightmap as triangle strips, as you can see in this piece of code:


	for (int z = 0; z < tsize-1; z++)
	{
		glBegin(GL_TRIANGLE_STRIP);

		for (int x=0; x < tsize-1; x++)
		{
			v_height = hmap.GetValue(x,z);	
			scaledHeight = v_height * h_scale;

			glNormal3fv (normals[x*tsize+z]);	
			glVertex3f ((GLfloat) x,  scaledHeight, (GLfloat) z);

			v_height = hmap.GetValue(x,z+1);		
			scaledHeight = v_height * h_scale; 
			
			glNormal3fv (normals[(x*tsize)+z+1]);
			glVertex3f ((GLfloat)x, scaledHeight, (GLfloat)z+1);
		}
		glEnd();
	}

Should I store all the strips in a single buffer? And how can I access each tri-strip for rendering?

Thx!

There are several solutions, but I think you’d better put all your strips in the same buffer. You can use glDrawElements or glDrawRangeElements to specify the starting index of each strip (it’s the last argument that lets you specify the offset into the index buffer).

You could also add degenerate triangles (by repeating the same index twice) in your strips. That connects all your strips together, so only one call to glDraw[Range]Elements is needed.
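To make the degenerate-triangle trick concrete, here is a minimal sketch (the function name is mine, not from the thread) that builds one index buffer for all the row strips of a size × size vertex grid, repeating the last index of one strip and the first index of the next so the strips join without visible geometry:

```cpp
#include <vector>

// Build a single index buffer covering every row strip of a (size x size)
// vertex grid. Consecutive strips are stitched with degenerate triangles:
// the last index of a strip and the first index of the next are duplicated.
std::vector<unsigned int> BuildStitchedStripIndices(int size)
{
    std::vector<unsigned int> idx;
    for (int z = 0; z < size - 1; ++z)
    {
        if (z > 0)
            idx.push_back(z * size);            // first index of this row, doubled
        for (int x = 0; x < size; ++x)
        {
            idx.push_back(z * size + x);        // vertex (x, z)
            idx.push_back((z + 1) * size + x);  // vertex (x, z+1)
        }
        if (z < size - 2)
            idx.push_back((z + 1) * size + (size - 1)); // last index, doubled
    }
    return idx;
}
```

The whole terrain can then be drawn with a single glDrawElements(GL_TRIANGLE_STRIP, idx.size(), GL_UNSIGNED_INT, 0) once the buffer is uploaded.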

The best solution is to convert the triangle strips to indexed rendering. The post-transform cache (typically the last 16–24 vertices) can then reuse already-transformed vertices.
A more efficient variant for terrain is to render many (up to 16 or 24) triangle strips in parallel, because then most vertices are transformed only once: an interior grid vertex is used by six triangles, but the other five uses come from the cache.
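The cache reuse between neighbouring strips is easy to see with a small sketch (hypothetical helper names, assuming one shared size × size vertex grid): the lower edge of row z’s strip is exactly the upper edge of row z+1’s strip, so those vertices need no re-transform.

```cpp
#include <vector>
#include <algorithm>

// Indices of the triangle strip covering grid row z of a shared
// (size x size) vertex grid.
std::vector<unsigned int> RowStripIndices(int size, int z)
{
    std::vector<unsigned int> idx;
    for (int x = 0; x < size; ++x)
    {
        idx.push_back(z * size + x);        // upper edge, shared with row z-1
        idx.push_back((z + 1) * size + x);  // lower edge, shared with row z+1
    }
    return idx;
}

// Count how many indices of row z+1's strip were already referenced
// by row z's strip -- i.e. candidates for the post-transform cache.
int SharedWithPreviousRow(int size, int z)
{
    std::vector<unsigned int> a = RowStripIndices(size, z);
    std::vector<unsigned int> b = RowStripIndices(size, z + 1);
    int shared = 0;
    for (unsigned int i : b)
        if (std::find(a.begin(), a.end(), i) != a.end())
            ++shared;
    return shared; // the whole seam row: size vertices
}
```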

When rendering terrain you will most probably end up fill-rate limited anyway, so I don’t think it makes a lot of sense to over-optimize the vertex data. I would just use indexed triangles or, maybe, a tri-strip.

Well, I managed to render the terrain through VBOs and the frame rate really improved. Although I think I must fix the range of arrays to render, because I’m getting the terrain with a seam of unrendered triangles.
My question is: how do I assign the normals to the vertices? I generated the normals and have them nicely stored in an array.

This is what I’m doing, any opinions?

Buffer filling with vertices:


void CTerrain::GenerateMesh(void)
{
	numvtx = tsize*tsize;
	vtxdata = new GLfloat[numvtx*6];
	
	for (int z = 0; z < tsize-1; z++)

		// triangle strip start...
		for (int x=0; x < tsize-1; x++)
		{
			vtxdata[((z*(tsize-1)+x)*6)]   = (GLfloat) x;
			vtxdata[((z*(tsize-1)+x)*6)+1] = (GLfloat) (hmap.GetValue(x,z)*h_scale);
			vtxdata[((z*(tsize-1)+x)*6)+2] = (GLfloat) z;
			vtxdata[((z*(tsize-1)+x)*6)+3] = (GLfloat) x;
			vtxdata[((z*(tsize-1)+x)*6)+4] = (GLfloat) (hmap.GetValue(x,z+1)*h_scale);
			vtxdata[((z*(tsize-1)+x)*6)+5] = (GLfloat) z+1;
		}	

		// triangle strip end...
}	

VBO Setup:


void CTerrain::BuildVBO(void)
{
	assert (vtxdata);
	glGenBuffers (3, VBO_buffer);
	glBindBuffer (GL_ARRAY_BUFFER_ARB, VBO_buffer[VERTEX_BUFFER]);
	glBufferData (GL_ARRAY_BUFFER_ARB, numvtx*6, vtxdata, GL_STATIC_DRAW);
	glBindBuffer (GL_ARRAY_BUFFER_ARB, VBO_buffer[NORMAL_BUFFER]);	
	//glBufferData (GL_ARRAY_BUFFER_ARB, normals, );
	glVertexPointer (3, GL_FLOAT, 0, vtxdata);
}

Rendering:


glEnableClientState(GL_VERTEX_ARRAY);

	for (int z=0; z < tsize-1; z++)
		glDrawArrays (GL_TRIANGLE_STRIP, z*tsize, tsize);

	glDisableClientState(GL_VERTEX_ARRAY);

Your code is not correct:

You’re not using VBOs… You must pass a NULL pointer to glVertexPointer:
glVertexPointer (3, GL_FLOAT, 0, NULL);

It’s the data contained in your VBO that must be used, not your buffer located in RAM.

You must call glVertexPointer just before the call to glDrawArrays; it’s useless to call it right after the VBO initialisation:

glBindBuffer(GL_ARRAY_BUFFER_ARB, VBO_buffer[VERTEX_BUFFER]);
glVertexPointer (3, GL_FLOAT, 0, NULL);

glBindBuffer(GL_ARRAY_BUFFER_ARB, VBO_buffer[NORMAL_BUFFER]);
glNormalPointer (GL_FLOAT, 0, NULL);

for (…)
glDrawArrays(…).

Oh thanks, I’ll try that…

I can’t get this damn thing to work…! :mad:

Any help? This is my code… (compressed into one chunk…)


// Mesh generation

numvtx = tsize*tsize*6;
	vtxdata = new GLfloat[numvtx];
	
	for (int z = 0; z < tsize-1; z++)
		// triangle strip start...
		for (int x=0; x < tsize; x++)
		{
			vtxdata[(z*(tsize-1)+x)*6]	   = (GLfloat) x;
			vtxdata[((z*(tsize-1)+x)*6)+1] = (GLfloat) (hmap.GetValue(x,z)*h_scale);
			vtxdata[((z*(tsize-1)+x)*6)+2] = (GLfloat) z;
			vtxdata[((z*(tsize-1)+x)*6)+3] = (GLfloat) x;
			vtxdata[((z*(tsize-1)+x)*6)+4] = (GLfloat) (hmap.GetValue(x,z+1)*h_scale);
			vtxdata[((z*(tsize-1)+x)*6)+5] = (GLfloat) z+1;
					
		}	

// vbo setup
glGenBuffers (3, VBO_buffer);
	glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[VERTEX_BUFFER]);
	glBufferData (GL_ARRAY_BUFFER, numvtx*sizeof(GLfloat), vtxdata, GL_STATIC_DRAW);

// rendering loop

glEnableClientState(GL_ARRAY_BUFFER);
	glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[VERTEX_BUFFER]);	
	glVertexPointer (3, GL_FLOAT, 0, 0);

	for (int z=0; z < tsize - 1; z++)
		glDrawArrays (GL_TRIANGLE_STRIP, z*tsize, tsize);

	glDisableClientState(GL_ARRAY_BUFFER);


You should use
glEnableClientState(GL_VERTEX_ARRAY);
instead of:
glEnableClientState(GL_ARRAY_BUFFER);

Use the OpenGL error-checking mechanism to avoid this kind of mistake. As it is right now, it should have generated an error.

Also, you’re better off using indexed vertex lists instead of repeating your vertices (less memory, better caching, …).
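The memory difference is easy to quantify with a quick sketch (illustrative helper, not from the thread): storing two vertices per (x, z) step for every strip versus storing each grid point once and indexing it.

```cpp
#include <cstddef>

struct MeshCost
{
    std::size_t repeatedVerts; // every strip stores its vertices explicitly
    std::size_t indexedVerts;  // one shared vertex per grid point
};

// Compare vertex counts for a (tsize x tsize) heightmap grid.
MeshCost CompareVertexCounts(std::size_t tsize)
{
    MeshCost c;
    c.repeatedVerts = 2 * tsize * (tsize - 1); // 2 verts per (x, z) step, per strip
    c.indexedVerts  = tsize * tsize;           // each grid point stored once
    return c;
}
```

For a 257 × 257 heightmap that is roughly half the vertex data, before even counting the transform savings from the cache.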

N.

Thanks Nico!

Maybe it’s fairly easy to provide a vertex array fallback for those users with very old hardware? I don’t know if it’s worth it (I believe most cards from 2003 onwards support GL_ARB_vertex_buffer_object…)

thanks again.

I noticed you’re planning to use GLSL shaders, so it’s probably not worth implementing the fallback path, as hardware that supports GLSL shaders also supports vertex buffer objects. On the other hand, it’s fairly easy to implement the fallback path in object-oriented programming languages.

N.

Well, I have normals and vertex data in VBOs… The problem now (which I’ve worked hard to fix!! :mad:) is that I’m getting a seam… it seems that two strips are not being rendered!

If you look at the code, every strip is rendered in this form:

v1----v3
| \   | \
|  \  |  \
|   \ |   \
v0----v2----v4

It worked without VBOs, but not with glDrawArrays.

Here’s my ugly code:


void CTerrain::GenerateMesh(void)
{	
	vtxdata = new GLfloat[numvtx];
	
	for (int z = 0; z < tsize - 1; z++)
		// triangle strip start...
		for (int x = 0; x < tsize - 1; x++)
		{
			vtxdata[(z*tsize+x)*6]	= (GLfloat) x;
			vtxdata[((z*tsize+x)*6)+1] = (GLfloat) (hmap.GetValue(x,z)*h_scale);
			vtxdata[((z*tsize+x)*6)+2] = (GLfloat) z;
			vtxdata[((z*tsize+x)*6)+3] = (GLfloat) x;
			vtxdata[((z*tsize+x)*6)+4] = (GLfloat) (hmap.GetValue(x,z+1)*h_scale);
			vtxdata[((z*tsize+x)*6)+5] = (GLfloat) z+1;					
		}	

		// triangle strip end...
}	

and rendering …


glEnableClientState(GL_VERTEX_ARRAY);
	glEnableClientState(GL_NORMAL_ARRAY);

	glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[VERTEX_BUFFER]);	
	glVertexPointer (3, GL_FLOAT, 0, 0);

	glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[NORMAL_BUFFER]);
	glNormalPointer (GL_FLOAT, 0, 0);

	for (int z=0; z < tsize; z++)
		glDrawArrays (GL_TRIANGLE_STRIP, z*tsize , tsize);

	glDisableClientState(GL_VERTEX_ARRAY);
	glDisableClientState(GL_NORMAL_ARRAY);

Any guidance…? Thanks!

“seems that two strips are not being rendered”
=> which ones? The last ones?

How do you initialise your VBOs?

x should range from 0 to tsize, not tsize-1.

N.

Nico, now x=0…tsize and z=0…tsize and, as you can see, one or two strips are still not being rendered:

This is the VBO Setup:


void CTerrain::BuildVBO(void)
{
	assert (vtxdata);
	assert (normals);
	glGenBuffers (3, VBO_buffer);
	glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[VERTEX_BUFFER]);
	glBufferData (GL_ARRAY_BUFFER, numvtx*sizeof(GLfloat), vtxdata, GL_STATIC_DRAW);
	delete[] vtxdata;

	glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[NORMAL_BUFFER]);
	glBufferData (GL_ARRAY_BUFFER, numvtx*sizeof(GLfloat), normals, GL_STATIC_DRAW);
	delete[] normals;		
}

Of course it’s far more efficient to put all strips into one array and draw them with one call, but I don’t know how to set this up.

If you’re storing your data in separate patches, you have to share one column/row of data between your patches, otherwise the problem will stay.

Have a look at the spec and try out GL_ELEMENT_ARRAY_BUFFER_ARB with glDrawElements.

N.

You should bind a null buffer after your VBO initialisation:

glBindBuffer (GL_ARRAY_BUFFER, VBO_buffer[NORMAL_BUFFER]);
glBufferData (GL_ARRAY_BUFFER, numvtx*sizeof(GLfloat), normals, GL_STATIC_DRAW);
delete[] normals;
glBindBuffer(GL_ARRAY_BUFFER, 0);

But it probably won’t solve your problem. You shouldn’t use strips; triangles are easier to use. But you must use an index buffer (GL_ELEMENT_ARRAY_BUFFER):

Initialisation:

  1. Init your “index buffer”
  2. Init your “vertex buffer”
  3. Init your “normal buffer”

Rendering:

  1. VertexBuffer : Call BindBuffer and VertexPointer
  2. NormalBuffer : call BindBuffer and NormalPointer
  3. IndexBuffer: call BindBuffer(GL_ELEMENT_ARRAY_BUFFER, id)

glDrawElements(…)
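The steps above can be sketched as follows for the index buffer part. This hypothetical helper (the name is mine) emits two plain triangles per grid quad, ready for glBufferData on a GL_ELEMENT_ARRAY_BUFFER and a single glDrawElements(GL_TRIANGLES, …) call:

```cpp
#include <vector>

// Index buffer for a (size x size) shared vertex grid, two triangles
// per quad, suitable for glDrawElements(GL_TRIANGLES, ...).
std::vector<unsigned int> BuildGridTriangleIndices(int size)
{
    std::vector<unsigned int> idx;
    for (int z = 0; z < size - 1; ++z)
        for (int x = 0; x < size - 1; ++x)
        {
            unsigned int i = z * size + x;   // top-left corner of the quad
            // first triangle of the quad
            idx.push_back(i);
            idx.push_back(i + size);
            idx.push_back(i + 1);
            // second triangle of the quad
            idx.push_back(i + 1);
            idx.push_back(i + size);
            idx.push_back(i + size + 1);
        }
    return idx;
}
```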

Of course it’s far more efficient to put all strips into one array and draw them with one call, but I don’t know how to setup this.

I explained how to do this in my first reply…:
“You could also add degenerate triangles (by repeating the same index twice) in your strips”

Lastly, Google is your best friend; there are PLENTY of examples about VBOs.

Also, make sure your indexing is correct. As I see it, you’re setting up triangle strips for z = 0…tsize-1, but you’re calling glDrawArrays for z = 0…tsize.

N.

Yes Nico, you’re right. But there is something else:

for (int z = 0; z < tsize - 1; z++)
// triangle strip start…
for (int x = 0; x < tsize - 1; x++) //You’re missing the last point

Should be:

for (int z = 0; z < tsize - 1; z++)
// triangle strip start…
for (int x = 0; x < tsize ; x++)

for (int z=0; z < tsize-1; z++)
glDrawArrays (GL_TRIANGLE_STRIP, z*tsize , tsize);
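One caveat on the draw ranges: glDrawArrays counts vertices, and the posted GenerateMesh writes two vertices per (x, z) entry, so each strip actually occupies 2*tsize consecutive vertices in the buffer. A quick sketch (hypothetical helper, assuming that packed layout) to double-check the (first, count) pairs:

```cpp
#include <vector>

struct StripRange
{
    int first; // index of the strip's first vertex in the buffer
    int count; // number of vertices in the strip
};

// glDrawArrays ranges for a layout that packs 2*tsize vertices per row strip.
std::vector<StripRange> ComputeStripRanges(int tsize)
{
    std::vector<StripRange> ranges;
    for (int z = 0; z < tsize - 1; ++z)
        ranges.push_back({ z * 2 * tsize, 2 * tsize });
    return ranges;
}
```

Each range would then be drawn with glDrawArrays(GL_TRIANGLE_STRIP, r.first, r.count).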

Isn’t that what I said in one of my previous replies? :wink: