Vertices: from GL_FLOAT to GL_UNSIGNED_SHORT killed performance?

Hi Everybody!

I am building a game engine. My primary world geometry is currently fed to OpenGL using several arrays: coordinates, texture coordinates, etc.

My coordinates were simple [x,y,z] using GL_FLOAT, but to save space I decided to try GL_UNSIGNED_SHORT.

When I made that switch, my frame-rate dropped from ~150fps to ~40fps.

In retrospect, I realized I was feeding it six bytes per vertex, which caused alignment problems, so I changed the vertex data to [x,y,z,1] and got up to 80 fps.

But this is still just over half of what I was getting with GL_FLOAT. Is there some secret here, or is the on-the-fly conversion really expensive enough to cause this?

Note: I am using shaders and none of the fixed-function pipeline.

I very much appreciate any insight or help :slight_smile:

Correction: I was mistaken. The frame rates are exactly the same. It turns out my frame rate flip-flops between roughly 80 and 150 depending on a random fluctuation in my game, and it just happened that several runs using GL_FLOAT landed on the higher number, then several runs using GL_UNSIGNED_SHORT landed on the lower one.

<whew> I was a bit concerned on this one :slight_smile: