Using integer types to specify geometry

I have a lot of dense geometry to render.

Since the geometry is already chunked in a tile-like organization, I don’t need the full range and precision of a 32-bit float for my vertices. In some cases I could get by with a 16-bit short, or even an 8-bit unsigned byte (!). I would of course need to adjust the transformation matrices I’m using to scale the quantized coordinates back to world units.
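To make the idea concrete, here is a minimal sketch of the quantization step, assuming tile-local coordinates in the range [0, tileSize). The names (`quantizeTile`, `dequant`) are illustrative, not from any library; the returned `dequant` factor is what you would fold into the per-tile model matrix.

```javascript
const SHORT_MAX = 32767;

// Quantize tile-local float positions into 16-bit signed integers.
function quantizeTile(positions, tileSize) {
  // Map [0, tileSize) onto [0, SHORT_MAX]; the inverse scale goes
  // into the model matrix so the vertex shader sees original units.
  const out = new Int16Array(positions.length);
  const scale = SHORT_MAX / tileSize;
  for (let i = 0; i < positions.length; i++) {
    out[i] = Math.round(positions[i] * scale);
  }
  // Per-tile dequantization factor to bake into the model matrix:
  const dequant = tileSize / SHORT_MAX;
  return { quantized: out, dequant };
}

const { quantized, dequant } = quantizeTile(
  new Float32Array([0, 128, 255.5]),
  256
);
```

The same `Int16Array` can back both the GPU upload and the JavaScript-side copy used for window queries, and the binary XHR payload can carry the shorts directly.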

My questions are:

  1. Is this supported at all?
  2. Is it a “good idea” in general? Perhaps such usage is atypical and isn’t handled by an optimized code path in the driver/GPU?

The idea is of course to save GPU memory. In addition, since the server could output a much more efficient binary encoding (I’m using binary XHR to load data), it would save network bandwidth as well. Thirdly, since I need to retain this data on the JavaScript side too (to do window queries and iterate geometry), it would save JavaScript VM memory as well.

Comments welcome.

Yes, you can do it - and on most GPUs it’ll help.

The trade-off is the savings in memory and bus bandwidth from sending less data, versus the extra time the GPU spends converting the data to floating point for use in the vertex shader.
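On the GL side, that conversion is something you opt into when declaring the attribute. A sketch, using standard WebGL calls (the function itself is just illustrative and assumes a bound program with attribute location `loc`); passing `normalized = true` tells the GPU to map the short range [-32768, 32767] onto [-1, 1] during fetch:

```javascript
// Not executed here - requires a live WebGL context.
function setupShortPositions(gl, loc, buffer) {
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  // 3 components, 16-bit signed, normalized, tightly packed
  gl.vertexAttribPointer(loc, 3, gl.SHORT, true, 0, 0);
  gl.enableVertexAttribArray(loc);
}

// The memory side of the trade-off is easy to quantify per vertex:
const floatBytes = 3 * Float32Array.BYTES_PER_ELEMENT; // 12 bytes
const shortBytes = 3 * Int16Array.BYTES_PER_ELEMENT;   //  6 bytes
```

One caveat worth knowing: some GPUs prefer vertex strides aligned to 4 bytes, so a tightly packed 6-byte position may be worth padding to 8 - measure on your target hardware before committing to a layout.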

I have found this to be a net win in desktop OpenGL - but on things like phones…I honestly don’t know.

I think you should do it.