I’m not convinced that using glDrawRangeElements with a VBO improves performance. I just tested it with a VBO, and glDrawElements actually performed slightly better (in terms of frame rate). Can anyone give me an example where I can use glDrawRangeElements to boost VBO performance? (The spec mentions that if the range fits into a 16-bit integer, I can gain 2x speed … I’m not sure what this means.)

thanks

Originally posted by grunt123:

[…](the spec mentions that if the range falls into 16 bit integer, i can gain 2x speed. … I’m not sure what this means)

This is easy to explain. Say you have a triangle with indices: 1 2 3

When they’re stored as 32-bit values, your bytes look like this (on x86-based systems, which are little-endian):

index1. index2. index3.

01 00 00 00 02 00 00 00 03 00 00 00

when your indices are 65533 65534 65535, your data looks like:

FD FF 00 00 FE FF 00 00 FF FF 00 00

So as long as no index is bigger than 65535 (0xFFFF), the high word of each index is always zero and can be skipped safely.

An optimized implementation would simply copy bytes #0 and #1, skip #2 and #3, copy #4 and #5, skip #6 and #7, and so on…

I am not sure there is much of an improvement with 16 bits on current hardware. VBO + interleaved arrays produced a 3x improvement with glDrawArrays, but no performance benefit was gained by using shorts (16-bit) for indices with glDrawElements.

Indices + glDrawElements work well only if you have a smaller number of vertices and sort them in such a way that the GPU can cache the vertices. I suggest you focus on this to get the best performance.