WARNING: I am the incarnation of a nooblet. You have been warned.
So, I’m working on something like a grid system for a game I’m trying to program from semi-scratch. When I say grid, think ‘Pokemon Red’ kind of stuff. So far, I’ve managed to get a grid running, sound, textures, etc. However, I’m stumped by this problem: it only renders one picture across all of my grid squares. I tried to use ‘glDrawRangeElements’ in order to draw each square individually, but it keeps drawing the top-right one no matter what values I choose.
If the above sounds nonsensical, I’ll blame my lack of experience!
So, here is the essential part of the code:
glDrawRangeElements(GL_TRIANGLES, 6,12,6, GL_UNSIGNED_SHORT, NULL);
This should draw the second square, but instead draws the first square! What am I doing wrong? How can I do this as I want to?
I am using stb_image.h, GLFW and GLEW on Windows 7 with the Code::Blocks IDE. I’ve also done some work with OpenAL, and have included GLM, but they are irrelevant to this problem.
In advance, thanks for your help!
PS: Links to extensive tutorials are always appreciated.
glDrawRangeElements is just glDrawElements with a hint about the minimum and maximum used index VALUE. The start/end parameters don’t select which indices get drawn; they only promise the driver which index values appear in the range.
What you are looking for is glDrawElementsBaseVertex. It requires GL 3.2 or higher (ARB_draw_elements_base_vertex).
If you also want to do that without ARB_draw_elements_base_vertex, you have to rebind the index buffer at the position you want to start drawing from.
It is best to settle on a minimum required OpenGL version so you don’t have to worry too much about basic functionality not being supported.
[QUOTE=Osbios;1262343]If you also want to do that without ARB_draw_elements_base_vertex, you have to rebind the index buffer at the position you want to start drawing from.
Not really. You can pass the offset through the buffer pointer; the same rules apply here as for setting an offset with glVertexAttribPointer. For example, I am doing this to iterate through an index buffer with multiple draw calls:
for (unsigned i = 0; i < drawinfo.Size() - 1; i++)
    glDrawElements(GL_TRIANGLES,
                   drawinfo[i + 1].mStartIndex - drawinfo[i].mStartIndex,
                   GL_UNSIGNED_SHORT,
                   // byte offset into the bound index buffer (GLushort = 2 bytes)
                   (GLvoid*)(intptr_t)(drawinfo[i].mStartIndex * sizeof(GLushort)));
This is correct.
My understanding is that glDrawRangeElements is primarily an optimization for software T&L (or if you trigger a software fallback). While the spec language is of course very non-committal on this (defining it as implementation-dependent behaviour), it’s notable that D3D10+ has removed the equivalent call without any performance impact.
Thanks, it seemed to do the trick!
As I’m already using 3.2, it wouldn’t change too much for me. Besides, I’d like to keep the code somewhat simple atm, but thanks for the alternatives, I might use them in the future. =P
Oh well, that is what I get for only using OpenGL through my own C++ abstraction these days.
I basically was just explaining how to add an offset to the index value.