I have some questions about what happens behind the scenes (i.e. at driver and GPU level) when you use vertex arrays.
I’m working on yet another terrain engine, and to save as much space as possible, I use the following vertex format:
• UV1: 2 GLshorts (first pair of texcoords)
• UV2: 2 GLbytes (second pair of texcoords, used for detail texture)
• Normal: 3 GLbytes (quantized vertex normal)
• Padding: 1 GLbyte
• XYZ: 3 GLshorts
This gives me a nice and small 16-byte structure, which can be aligned to DWORD boundaries in memory. Is it worth bothering with this if all you’re doing is throwing this array at the 3D card?
Also, I don’t actually use the normals for rendering. They’re only there for things like collision detection. Would it be noticeably faster if I took them out of the vertex array and stored them separately?
On a related topic, just how does all this data get transferred to the card? If I specify each array separately (as opposed to using glInterleavedArrays()), can the whole array still be transferred in one go?
And finally, notice that I used integer types for everything. Is there a speed difference between this and floats?