I recently worked on a voxel-model rendering project and had to squeeze out maximum performance, since my program was completely vertex-limited.
Does storing index buffers in a VBO, as opposed to passing the array directly to glDrawRangeElements, really improve performance? I heard from someone who profiled Doom III’s use of OpenGL that no index-buffer VBOs were used at all. Given how aggressively games squeeze performance out of OpenGL, I suppose there’s a reason.
Does anybody have reference material on on-demand geometry caching? I implemented a quick and dirty system which caches iso-surface meshes in VBOs, which almost doubled the frame rate. I know that dynamic geometry shouldn’t be cached, but I’m not sure whether traditional caching methods (e.g. LRU replacement) apply to caching geometry.
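To be concrete, the kind of LRU bookkeeping I have in mind would look roughly like this — a minimal sketch, all names hypothetical, with the actual VBO upload/delete calls only indicated in comments:

```c
#include <assert.h>

/* Minimal LRU bookkeeping for a fixed pool of cached meshes.
 * The real VBO calls (glGenBuffers / glBufferData / glDeleteBuffers)
 * would go where the comments indicate. */

#define CACHE_SLOTS 4

typedef struct {
    int mesh_id;        /* which mesh occupies this slot, -1 = empty */
    unsigned last_used; /* global tick of the most recent use */
    /* GLuint vbo;         handle of the uploaded vertex buffer */
} CacheSlot;

static CacheSlot slots[CACHE_SLOTS];
static unsigned tick;

void cache_init(void) {
    for (int i = 0; i < CACHE_SLOTS; ++i) {
        slots[i].mesh_id = -1;
        slots[i].last_used = 0;
    }
    tick = 0;
}

/* Returns the slot holding mesh_id; on a miss, evicts the least
 * recently used slot and (re)uploads there. *was_hit reports hit/miss. */
int cache_get(int mesh_id, int *was_hit) {
    ++tick;
    int lru = 0;
    for (int i = 0; i < CACHE_SLOTS; ++i) {
        if (slots[i].mesh_id == mesh_id) {
            slots[i].last_used = tick;  /* refresh recency on a hit */
            *was_hit = 1;
            return i;
        }
        if (slots[i].last_used < slots[lru].last_used)
            lru = i;
    }
    /* miss: evict the LRU slot and upload the mesh here,
     * e.g. glBufferData(GL_ARRAY_BUFFER, ...) into slots[lru].vbo */
    slots[lru].mesh_id = mesh_id;
    slots[lru].last_used = tick;
    *was_hit = 0;
    return lru;
}
```

The open question is whether recency is even the right eviction signal for geometry, or whether something like mesh size or rebuild cost should weigh in.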
Appreciate your input.
- It certainly depends on what you are doing. But in a game like Doom you usually have two cases:
a) You dynamically create indices to do one pass on them (i.e. for one light source, or so). Usually you don’t do more passes, or at least not many, so storing the indices in a VBO doesn’t improve anything.
b) You render a complete VBO (i.e. a creature), because if the thing is visible at all it is most likely completely visible, so doing extra work to reduce the vertex count is a waste of CPU cycles.
In this case you don’t create any indices but simply render the complete VBO.
So, I assume most games that use OpenGL (ehm, the Doom engine and MANY MANY more …) don’t make use of this feature. However, there might be cases where it is useful.
- Again, it depends. Caching dynamic geometry makes sense, especially if it is a big amount, i.e. for huge particle systems, because then caching is faster than immediate-mode rendering, and from then on the rest can be done in parallel on the GPU. I got very good results from uploading big particle systems compared to rendering them immediately.
I don’t know what you mean by “dynamically create indices”, but what I mean is this: you have a cache for, say, 10000 indices, you check what geometry is visible, and for every face (or batch of faces, depending on how you organize your data) you put its indices into the cache. When the cache is full, or you need to change a state (a texture change, a different shader …), you “flush” the cache (glDrawRangeElements) and do the same for the next batch.
Of course you don’t really “create” the indices, because the vertex data is already uploaded and therefore the indices are already fixed; you just store them in the cache.
If you really want to decrease your vertex count, you need to do something like this; I don’t see a better way.
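The accumulate-and-flush scheme above can be sketched like this — a rough illustration with hypothetical names, where flush_draw() stands in for the actual glDrawRangeElements call:

```c
#include <assert.h>
#include <stddef.h>

/* Index cache: indices for visible faces are appended to a fixed-size
 * buffer, which is flushed with a single draw call when it fills up or
 * when a state change (texture, shader) forces it. */

#define CACHE_CAPACITY 10000

static unsigned short index_cache[CACHE_CAPACITY];
static size_t cache_count;
static int draw_calls;  /* counts flushes, i.e. draw-call submissions */

static void flush_draw(void) {
    if (cache_count == 0)
        return;
    /* Real code: glDrawRangeElements(GL_TRIANGLES, min, max,
     *     (GLsizei)cache_count, GL_UNSIGNED_SHORT, index_cache); */
    ++draw_calls;
    cache_count = 0;
}

/* Append one visible face (three indices); flush first if it won't fit. */
static void cache_face(unsigned short a, unsigned short b, unsigned short c) {
    if (cache_count + 3 > CACHE_CAPACITY)
        flush_draw();
    index_cache[cache_count++] = a;
    index_cache[cache_count++] = b;
    index_cache[cache_count++] = c;
}

/* A state change (new texture or shader) must flush pending indices
 * first, so the batched faces are drawn with the old state. */
static void change_state(void) {
    flush_draw();
    /* bind the new texture / shader here */
}
```

The point of the design is that per-face visibility work stays on the CPU while the GPU still sees a few large batches instead of one draw call per face.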
Sorry, I just misunderstood you. I thought you were calculating all the indices for geometry that has no indices (which is common with the models I work with).
Yes, it can help, but you need to optimize your mesh for the cache size to get the best results. Nvidia’s mesher does this; back in the GeForce-era days the cache size was something like 12 vertices. There was also a SIGGRAPH paper published on this about 4 years ago.
If you just index and strip with no attempt to reuse recently transformed verts, then you probably won’t see a win and may even take a hit.
I don’t know how big the vertex cache is these days.
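Whatever the size is on current hardware, you can estimate how well an index order reuses the post-transform cache with a simple simulation. A rough sketch, assuming a FIFO replacement policy (the cache size is a parameter since it varies per GPU generation); misses per triangle is the usual figure of merit:

```c
#include <assert.h>

/* FIFO simulation of the post-transform vertex cache. Feeding an
 * index list through it counts how many vertices would actually have
 * to be transformed; fewer misses means better vertex reuse. */

typedef struct {
    int entries[64];  /* cached vertex indices, FIFO order */
    int size;         /* configured cache size (<= 64) */
    int count;        /* entries currently valid */
    int head;         /* next slot to overwrite */
} VertexCache;

static void vc_init(VertexCache *vc, int size) {
    vc->size = size;
    vc->count = 0;
    vc->head = 0;
}

/* Returns 1 on a miss (vertex must be transformed), 0 on a hit. */
static int vc_lookup(VertexCache *vc, int index) {
    for (int i = 0; i < vc->count; ++i)
        if (vc->entries[i] == index)
            return 0;                    /* hit: already in the cache */
    vc->entries[vc->head] = index;       /* miss: insert, FIFO evict */
    vc->head = (vc->head + 1) % vc->size;
    if (vc->count < vc->size)
        ++vc->count;
    return 1;
}

/* Count transform misses for an indexed triangle list. */
static int count_misses(const int *indices, int n, int cache_size) {
    VertexCache vc;
    vc_init(&vc, cache_size);
    int misses = 0;
    for (int i = 0; i < n; ++i)
        misses += vc_lookup(&vc, indices[i]);
    return misses;
}
```

For example, two triangles sharing an edge ({0,1,2, 2,1,3}) cost only 4 transforms instead of 6 when the shared vertices are still cached; reorder the faces so reuse falls outside the cache window and the miss count climbs back up.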