Question about VBO

Hi

Is it possible to store only the vertex data in a VBO, but keep and manage the index array myself as a regular client-side array?

My reason for this: I want to store the terrain coordinates on the GPU since they are static and relatively large, while my index array is dynamic and small and I update it every chunk, so I don't see the sense of putting it on the GPU.
I tried to do it, but it crashes. Maybe there is another reason for the crash, so I wonder whether my approach is valid at all?

TIA

Yes.
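A client-side index array with the vertex data in a VBO is perfectly legal. A minimal sketch of that arrangement (fixed-function style; the names are illustrative, not from your code): keep the positions in a buffer bound to GL_ARRAY_BUFFER, leave GL_ELEMENT_ARRAY_BUFFER unbound, and pass your own index array straight to glDrawElements.

    /* One-time setup: upload the static terrain vertices to a VBO. */
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat),
                 terrainVerts, GL_STATIC_DRAW);

    /* Per frame: source positions from the VBO (the pointer argument
       is a byte offset while a buffer is bound)... */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);

    /* ...but keep GL_ELEMENT_ARRAY_BUFFER unbound, so the index pointer
       below is read as an ordinary client-side array. */
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDrawElements(GL_TRIANGLE_STRIP, numIndexes, GL_UNSIGNED_INT, clientIndexes);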

Thanks

Can I also do something like this (in pseudocode)?

For each Row
    glBegin(GL_TRIANGLE_STRIP);
    for each Column
        glArrayElement(valid_index1);
        glArrayElement(valid_index2);
    end column
    glEnd();
end row

The problem is that it works without a buffer object, but when I add the buffer object it crashes.
I'm not posting all the source code, since adding a buffer object is very simple and I think I'm doing it right.
The crash happens on the glEnd() line.

Thanks in advance

It should work (though it's a little perverse).
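One thing worth double-checking (an educated guess about the crash, not a diagnosis): while a buffer is bound to GL_ARRAY_BUFFER, the pointer argument of glVertexPointer is interpreted as a byte offset into that buffer, not as a client address. Passing a real pointer there with the VBO bound makes glArrayElement read from a bogus location, which typically crashes by glEnd(). Roughly:

    glBindBuffer(GL_ARRAY_BUFFER, vbo);          /* bind first...          */
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);  /* ...then set the offset */

    /* NOT: glVertexPointer(3, GL_FLOAT, 0, terrainVerts) while vbo is
       bound -- that client pointer would be treated as a huge offset. */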

Now it works… Well, I know it's silly, but at least I see something working, and I'm trying to improve it.
Now I'm trying to do the same as before, but with glDrawRangeElements(…) for each row, and I get terrible flickering of lines in all directions. When I switch back to the previous method, everything is fine. The indexes are exactly the same in both cases.

I calculate the indexes dynamically according to the terrain node's current LOD.
My code, in pseudocode, is exactly like this:

For each Row
    NumIndexes = 0;
    for each Column
        calculate valid_index1
        calculate valid_index2
        LocalIndexArray[NumIndexes++] = valid_index1;
        LocalIndexArray[NumIndexes++] = valid_index2;
    end column
    glDrawRangeElements(GL_TRIANGLE_STRIP, 0, NumIndexes - 1, NumIndexes,
                        GL_UNSIGNED_INT, LocalIndexArray);
end row

Is something wrong with this?

Thank you again.



Is the max of all values of valid_index1 and valid_index2 across a single row equal to (NumIndexes-1), and the min of all such values equal to 0? For every row? That doesn't feel right. If not, that's a problem.

Also, it seems strange to me that the number of vertices (end - start + 1) used from the attribute arrays exactly matches the number of indices in your index array. For triangle strips, that also doesn't feel right.

Assuming this is a regular grid, I think you might have args 2 and 3 of glDrawRangeElements set wrong.
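The start and end parameters are supposed to bound the index values that actually appear in the index array for that call, not the index count. A sketch of computing them (reusing your LocalIndexArray/NumIndexes names for illustration):

    /* start/end must bracket the index VALUES; count is how many indices. */
    GLuint minIdx = LocalIndexArray[0], maxIdx = LocalIndexArray[0];
    for (int i = 1; i < NumIndexes; ++i) {
        if (LocalIndexArray[i] < minIdx) minIdx = LocalIndexArray[i];
        if (LocalIndexArray[i] > maxIdx) maxIdx = LocalIndexArray[i];
    }
    glDrawRangeElements(GL_TRIANGLE_STRIP, minIdx, maxIdx, NumIndexes,
                        GL_UNSIGNED_INT, LocalIndexArray);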

Other possible problems:

  1. your vertex attribute indices may be wrong,
  2. your vertex attribute array bindings and/or enables may not be right,
  3. the contents of your vertex attribute arrays may not be correct (specifically, the positions),
  4. the MODELVIEW or PROJECTION matrices may not be set up correctly, or
  5. the shader you are using (if applicable) may be doing something odd with the position math.

P.S. Use [ code ]…[ /code ] tags (without the spaces) to mark things you want the forum to indent literally, like code.

I mistyped: instead of 0, the start value should be RowOffset.

glDrawRangeElements(GL_TRIANGLE_STRIP, RowOffset, NumIndexes - 1, NumIndexes, GL_UNSIGNED_INT, LocalIndexArray);

Actually it works correctly when I use this function instead:

glDrawElements(GL_TRIANGLE_STRIP,NumIndexes,GL_UNSIGNED_INT,&LocalIndexArray[RowOffset]);

Weird… I will check all the possible problem points you mentioned and report back.
BTW, are there any performance or other differences between glDrawElements and glDrawRangeElements?

Thank you very much

glDrawRangeElements is allegedly slightly more efficient for the driver, since it knows the range of elements to send to GL. This, again allegedly, leads to a speed-up.
However, in reality the difference between these two is so small it's not even worth the effort of typing the command into your compiler.
I remember benchmarking all the different rendering APIs when I started my terrain engine umpteen years ago. On those legacy cards, when VBOs were just coming out as an extension, there may have been an argument for nVidia memory optimisation coupled with glDrawRangeElements. Nowadays it's not worth the effort, and I suspect nobody optimises their drivers for that anymore. You are much more likely to get a performance gain by changing the terrain algorithm than by tweaking the rendering API; so my advice is to spend the extra time investigating why and what you're doing with the algorithm, and forget about optimising the rendering.

I remember benchmarking all the different rendering APIs when I started my terrain engine umpteen years ago. On those legacy cards, when VBOs were just coming out as an extension, there may have been an argument for nVidia memory optimisation coupled with glDrawRangeElements. Nowadays it's not worth the effort

Well, that sort of depends on what you were benchmarking. While glDrawRangeElements is not much of a performance win for straight-up rendering, it does tell OpenGL just where you're rendering from. And that can make a big difference when you're doing buffer object streaming, particularly if you're not mapping the buffer with the unsynchronized flag. Or when you're reading from one location in the buffer while doing a transform feedback or pixel read into another location. And so forth.

That’s not to say that you’re guaranteed anything by using it. However, using it also takes virtually no effort on your part, so you may as well.
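To illustrate the streaming case (a hypothetical sketch, not from this thread; glMapBufferRange is GL 3.0 / ARB_map_buffer_range, and all names are invented):

    /* Stream this frame's vertices into part of a larger VBO. Without
       GL_MAP_UNSYNCHRONIZED_BIT, the driver must first make sure the GPU
       is finished with any data this map could touch. */
    glBindBuffer(GL_ARRAY_BUFFER, streamVBO);
    void *dst = glMapBufferRange(GL_ARRAY_BUFFER, writeOffset, chunkBytes,
                                 GL_MAP_WRITE_BIT);
    memcpy(dst, newVerts, chunkBytes);
    glUnmapBuffer(GL_ARRAY_BUFFER);

    /* The start/end range on the draws tells the driver exactly which
       vertices [firstVert, lastVert] are referenced, so it can limit
       that synchronization to the region actually in use. */
    glDrawRangeElements(GL_TRIANGLE_STRIP, firstVert, lastVert,
                        numIdx, GL_UNSIGNED_INT, rowIndexes);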