Say I have 16 texture units, each bound to a distinct texture object. Now a texture object has a GLuint handle, which is at least 32 bits on most systems. How the heck do I place all those GLuints into a key by which I then sort the batches? I'd understand if I could simply pack the GLuints into, say, a 64-bit integer, but 16 × 32 = 512 bits, which doesn't fit in any machine integer. One idea:
- first sort according to the first texture object,
- then sort the sets from the first step according to the second texture object
This would mean many sorting passes, so there must be a better way. I could also use a big-integer class (say, from the GMP library), but that would probably be horribly slow. Please share some ideas on how to do it.
I’ve got an idea:
I can put the texture handles into an array and memcmp the arrays when comparing. Another option is hashing. Any other ideas?
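The memcmp idea could look roughly like this; a minimal sketch, where `Batch`, `kNumUnits`, and `sortBatches` are hypothetical names, not from any real API:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

using GLuint = std::uint32_t;   // matches OpenGL's typedef on common platforms
constexpr int kNumUnits = 16;

struct Batch {
    std::array<GLuint, kNumUnits> textures;  // handle bound to each unit
    // ... vertex data, shader, etc.
};

// Strict weak ordering via a byte-wise compare of the handle arrays.
// The resulting order depends on endianness, but it is arbitrary anyway:
// batch sorting only needs a consistent order, not a meaningful one.
bool textureKeyLess(const Batch& a, const Batch& b) {
    return std::memcmp(a.textures.data(), b.textures.data(),
                       sizeof(a.textures)) < 0;
}

void sortBatches(std::vector<Batch>& batches) {
    std::sort(batches.begin(), batches.end(), textureKeyLess);
}
```

One sort with a lexicographic comparator like this also subsumes the multi-pass "sort by first texture, then by second" scheme in a single pass.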
You don't have to use the GLuint handles directly when sorting. You can make an array of the 16 handles and use 4-bit integers as indices into it. Then you can do your sorting with respect to those indices.
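A sketch of that remapping, under the assumption that at most 16 distinct textures occur: assign each distinct handle a small dense index via a map, then pack the 16 per-unit indices (4 bits each) into one 64-bit sort key. `TextureIndexer` and its members are hypothetical names:

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

using GLuint = std::uint32_t;

struct TextureIndexer {
    std::unordered_map<GLuint, std::uint64_t> indices;

    // Return the dense index for this handle, assigning a new one on first use.
    std::uint64_t indexOf(GLuint handle) {
        auto it = indices.find(handle);
        if (it != indices.end()) return it->second;
        std::uint64_t idx = indices.size();
        indices.emplace(handle, idx);
        return idx;
    }

    // Pack the 16 unit bindings into a 64-bit key, 4 bits per unit.
    // Only valid while at most 16 distinct textures are in play.
    std::uint64_t makeKey(const GLuint (&units)[16]) {
        std::uint64_t key = 0;
        for (int i = 0; i < 16; ++i)
            key = (key << 4) | (indexOf(units[i]) & 0xF);
        return key;
    }
};
```

Batches with identical bindings get identical keys, so sorting by this key groups them together.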
Hmm, good idea: each frame I build a list of the texture-handle lists and use n-bit integers to index it. The number of entries in that list per frame must then not exceed 2^n.
No, not a good idea: what if there are more than 16 distinct textures in the app? Then the index needs more than 4 bits and the scheme breaks down.
Another question… Is there any workaround possible to avoid having to associate a model transform matrix with each batch?