Yooyo,
thank you. I had thought of using VBOs, but I didn’t know I could use two simultaneously. Afaik I can’t, or am I wrong? I am already using them for 3D objects (robots, player ships, powerups etc.), so I know the basics about these.
Faces already get sorted by textures.
I wouldn’t know how to further optimize the occlusion culling (which costs about 40% of the entire rendering time). Currently the cuboids are walked beginning at the viewer’s segment; cuboid faces are transformed and projected to determine what they occlude, until it can safely be said that all further segments’ faces are occluded.
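To make the walk concrete, here is a deliberately simplified sketch of that kind of portal-style segment walk, assuming each cuboid face that connects two segments can be reduced to its projected screen-space bounding rectangle; every name here is invented for illustration, and the real engine’s projection and occlusion tests are of course more involved:

```c
#include <string.h>

/* Simplified model: segments connect via portals; each portal is
   reduced to its projected screen-space bounding rect. A segment is
   only visited while the running intersection of portal rects along
   the path from the viewer is non-empty. */

typedef struct { float x0, y0, x1, y1; } Rect;

/* Intersect two screen rects; returns 0 if the result is empty. */
static int rect_intersect(const Rect *a, const Rect *b, Rect *out) {
    out->x0 = a->x0 > b->x0 ? a->x0 : b->x0;
    out->y0 = a->y0 > b->y0 ? a->y0 : b->y0;
    out->x1 = a->x1 < b->x1 ? a->x1 : b->x1;
    out->y1 = a->y1 < b->y1 ? a->y1 : b->y1;
    return out->x0 < out->x1 && out->y0 < out->y1;
}

#define MAX_SEGS    64
#define MAX_PORTALS 4

typedef struct {
    int  child[MAX_PORTALS];   /* neighboring segment, -1 = solid wall */
    Rect portal[MAX_PORTALS];  /* projected portal bounding rect       */
    int  num_portals;
} Segment;

/* Recursively mark segments whose portal chain is still open,
   narrowing the clip rect at every portal crossed. */
static void walk_segments(const Segment *segs, int cur, Rect clip,
                          int *visible) {
    visible[cur] = 1;
    for (int i = 0; i < segs[cur].num_portals; i++) {
        int child = segs[cur].child[i];
        Rect r;
        if (child >= 0 && !visible[child] &&
            rect_intersect(&clip, &segs[cur].portal[i], &r))
            walk_segments(segs, child, r, visible);
    }
}
```

The branch terminates as soon as the accumulated clip rect collapses to nothing, which mirrors the “until it can safely be said that all further segments’ faces are occluded” condition above.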
I am not using glGet…
Will look at glMultiDrawElementsEXT.
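For reference, the way I understand it, the point of glMultiDrawElementsEXT is to collapse a series of per-run glDrawElements calls into one call by handing the driver a count array and an index-pointer array. A rough sketch of building such a batch (all names here are made up; the GL call itself is only shown in a comment since it needs a live context):

```c
#include <stddef.h>

#define MAX_RUNS 256

/* One count and one index pointer per run of faces sharing state. */
typedef struct {
    int         num_runs;
    int         count[MAX_RUNS];    /* indices per run              */
    const void *indices[MAX_RUNS];  /* start of each run's indices  */
} DrawBatch;

/* Append one run of 'count' indices starting at 'first'. */
static void batch_add_run(DrawBatch *b, const unsigned short *index_buffer,
                          int first, int count) {
    if (b->num_runs < MAX_RUNS) {
        b->count[b->num_runs]   = count;
        b->indices[b->num_runs] = index_buffer + first;
        b->num_runs++;
    }
}

/* With a GL context and EXT_multi_draw_arrays present, the whole
   batch would then go out in a single call:

   glMultiDrawElementsEXT(GL_TRIANGLES, batch.count, GL_UNSIGNED_SHORT,
                          batch.indices, batch.num_runs);
*/
```

Whether that actually beats a loop of glDrawElements calls depends on the driver, so this would need profiling in practice.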
The vertex pointers are set per TMU. Don’t the additional TMUs need to know the vertices, too?
skynet,
I never said I’d come even remotely close to 100K polys @ 30 fps. I know other engines do, that’s why I am asking.
A typical Descent 2 mine has dozens and dozens of dynamic (i.e. moving, destructible, flashing) lights. There are often 16 or more lights affecting a single face (particularly during fire fights, which can spam the area with lights), and using fewer leads to lighting flaws. Blame it on the stone age engine. That’s why lighting takes so long: I have to determine the closest lights to each face (currently doing this per segment). That means: the fewer faces, the less work in this area, so I need software occlusion culling. I am already using precomputed lightmaps for static lights, but unless I have a stroke of genius (or use deferred lighting), I will not get around that type of light handling.
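For context, the closest-lights determination boils down to a nearest-K selection per face (or per segment). A minimal sketch of that, assuming squared distances suffice for ordering (no sqrt needed) and a small insertion-sorted slot list; the names and the 16-slot limit are taken from the numbers above, everything else is invented:

```c
#define MAX_FACE_LIGHTS 16

typedef struct { float x, y, z; } Vec3;

static float dist_sq(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

/* Fill 'picked' with indices of the up-to-MAX_FACE_LIGHTS lights
   nearest to face center 'c'; returns how many were picked.
   Insertion into a small sorted list is fine for this few slots. */
static int pick_face_lights(Vec3 c, const Vec3 *lights, int num_lights,
                            int picked[MAX_FACE_LIGHTS]) {
    float best[MAX_FACE_LIGHTS];
    int n = 0;
    for (int i = 0; i < num_lights; i++) {
        float d = dist_sq(c, lights[i]);
        int j = n < MAX_FACE_LIGHTS ? n : MAX_FACE_LIGHTS - 1;
        if (n < MAX_FACE_LIGHTS) n++;
        else if (d >= best[j]) continue;   /* farther than all kept */
        while (j > 0 && best[j - 1] > d) { /* shift larger entries  */
            best[j]   = best[j - 1];
            picked[j] = picked[j - 1];
            j--;
        }
        best[j]   = d;
        picked[j] = i;
    }
    return n;
}
```

This is O(lights × slots) per face, which is exactly why fewer visible faces translates directly into less lighting work.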
So while VBOs might help the draw calls, they wouldn’t really help much overall, given that the draw calls only make up about 25% of the entire rendering process.
As I said, even using VBOs for the 3D models only doubled their rendering speed. The only thing I know of that I could apply here is face reordering to optimize the gfx hardware’s vertex cache usage.
Edit: I was wrong. It’s almost 8 times faster.
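One way to judge whether face reordering would pay off before actually implementing it is to simulate the post-transform vertex cache and count how many vertices a given triangle order would re-transform. A sketch, assuming a simple 16-entry FIFO cache (real hardware varies, and the function name is made up):

```c
#define CACHE_SIZE 16

/* Simulate a FIFO post-transform vertex cache over a triangle index
   stream and count cache misses (each miss = one vertex transformed).
   Lower miss counts mean the face order uses the cache better. */
static int count_vertex_cache_misses(const unsigned short *indices,
                                     int num_indices) {
    int cache[CACHE_SIZE];
    int head = 0, filled = 0, misses = 0;
    for (int i = 0; i < num_indices; i++) {
        int hit = 0;
        for (int j = 0; j < filled; j++)
            if (cache[j] == indices[i]) { hit = 1; break; }
        if (!hit) {                          /* miss: push into FIFO */
            cache[head] = indices[i];
            head = (head + 1) % CACHE_SIZE;
            if (filled < CACHE_SIZE) filled++;
            misses++;
        }
    }
    return misses;
}
```

Comparing the miss count of the current face order against a reordered one would show whether the effort is worth it for these meshes.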