Using VBOs with large datasets


I’m trying to visualize a large triangular mesh constructed via constrained Delaunay triangulation; about 10-12 million triangles with color information for each vertex.
I first tried display lists, which was no fun at all: too slow and too much memory (I couldn’t get it running with mesh sizes over 4M triangles).
With VBOs I’m able to draw about 8.6 million triangles (using only one buffer) on a GeForce 8800 (512 MB) and a GeForce 6600 (128 MB). Above this I get an OpenGL “out of memory” error after loading the data to the buffer (currently 435 MB) with

glBindBuffer( GL_ARRAY_BUFFER, m_VBO); 
glBufferData( GL_ARRAY_BUFFER, m_numberOfPoints*(sizeof(Vert)), vertices, GL_STATIC_DRAW );

Vert is defined as follows:

struct Vert{
	unsigned char r;
	unsigned char g;
	unsigned char b;
	float x;
	float y;
	float z;
}; // note: with default packing the compiler inserts a padding byte after b, so sizeof(Vert) == 16

The rendering function looks like this:

glEnableClientState( GL_VERTEX_ARRAY );	
glEnableClientState( GL_COLOR_ARRAY );
glColorPointer( 3, GL_UNSIGNED_BYTE, sizeof(Vert), (void*)NULL );
glVertexPointer( 3, GL_FLOAT, sizeof(Vert), BUFFER_OFFSET( offsetof(Vert, x) ) ); // offsetof (from <cstddef>) is correct whether or not the struct is packed; 3*sizeof(GLubyte) ignores padding
glDrawArrays( GL_TRIANGLES, 0, m_numberOfPoints);
glDisableClientState( GL_COLOR_ARRAY );
glDisableClientState( GL_VERTEX_ARRAY );

Splitting the data across more VBOs didn’t help. Again “out of memory”.

If I save the triangulation as a vertex array to disk, then load it back and push it directly into video card memory (using one VBO), it works just fine.

So what causes this phenomenon?
Memory fragmentation on video card?

And most of all, what can I do to get rid of it?

I’m using Visual Studio 2005 on WinXP (32-bit) with OpenGL (NVIDIA SDK 9.5) and GLEW 1.5.1.

thanks in advance

With this approach you’re well outside the expected usage pattern of OpenGL. You should look into level-of-detail techniques. Is there any point in rendering 12 million triangles if your framebuffer (screen) can only display a few million pixels at most?

Try googling “terrain level of detail” and you will find a few places to start.

You could also try to reduce your memory footprint.

From what I see, you’re not using indexed triangles. Chances are you could store only unique vertices and create a GL_ELEMENT_ARRAY_BUFFER of indices describing the triangles. Depending on the topology of your mesh, you could reduce your memory footprint considerably, especially if you split the source mesh into submeshes of fewer than 65,536 vertices each, so you can use GL_UNSIGNED_SHORT indices.
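To make that concrete, here is a minimal sketch (plain C++, no GL calls, so it is testable on its own) of collapsing a flat GL_TRIANGLES vertex stream into unique vertices plus a 32-bit index array. The helper name buildIndexedMesh and the choice of std::map are my own, not anything from this thread:

```cpp
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

// Matches the Vert layout from the question.
struct Vert {
    unsigned char r, g, b;
    float x, y, z;
};

// Strict byte-wise ordering so Vert can key a std::map.
// Caller must zero each Vert (e.g. memset) before filling it,
// so padding bytes compare equal.
struct VertLess {
    bool operator()(const Vert& a, const Vert& b) const {
        return std::memcmp(&a, &b, sizeof(Vert)) < 0;
    }
};

// Hypothetical helper: collapse a flat GL_TRIANGLES stream
// (3 entries per triangle, duplicates included) into a
// unique-vertex array plus an index array suitable for
// a GL_ELEMENT_ARRAY_BUFFER.
void buildIndexedMesh(const std::vector<Vert>& flat,
                      std::vector<Vert>& unique,
                      std::vector<std::uint32_t>& indices)
{
    std::map<Vert, std::uint32_t, VertLess> seen;
    indices.reserve(flat.size());
    for (const Vert& v : flat) {
        auto it = seen.find(v);
        if (it == seen.end()) {
            it = seen.insert({v, (std::uint32_t)unique.size()}).first;
            unique.push_back(v);
        }
        indices.push_back(it->second);
    }
}
```

You would then upload `unique` with glBufferData(GL_ARRAY_BUFFER, …) as before, upload `indices` into a second buffer bound to GL_ELEMENT_ARRAY_BUFFER, and replace glDrawArrays with glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0).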

You could also look at extensions like GL_ARB_half_float_vertex, which would simply allow you to encode each vertex in 9 bytes instead of 15.
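If you go the half-float route, note that you convert the positions yourself before uploading and then pass GL_HALF_FLOAT_ARB as the type to glVertexPointer. Below is a minimal encoder sketch (the function name is mine); it truncates the mantissa and flushes subnormals to zero, which is usually fine for visualization but costs precision on large coordinate ranges, since a half carries only about 3 significant decimal digits:

```cpp
#include <cstdint>
#include <cstring>

// Minimal float -> IEEE 754 half-precision encoder.
// Rounds toward zero, flushes subnormals to +/-0, and maps
// overflow (and, in this sketch, NaN) to +/-infinity.
std::uint16_t floatToHalf(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));        // safe type pun
    std::uint32_t sign = (bits >> 16) & 0x8000u; // sign moves to bit 15
    std::int32_t  exp  = (std::int32_t)((bits >> 23) & 0xFFu) - 127 + 15; // rebias
    std::uint32_t mant = bits & 0x7FFFFFu;
    if (exp <= 0)  return (std::uint16_t)sign;             // too small: +/-0
    if (exp >= 31) return (std::uint16_t)(sign | 0x7C00u); // too large: +/-inf
    return (std::uint16_t)(sign | ((std::uint32_t)exp << 10) | (mant >> 13));
}
```

With positions stored as three such values you would call glVertexPointer(3, GL_HALF_FLOAT_ARB, stride, offset) instead of GL_FLOAT.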

Thanks for the GL_ELEMENT_ARRAY_BUFFER solution; it works just fine…

Level of detail is also a good idea, but the resulting image will be rendered off-screen at quite a high resolution.