I have loaded an 8-bit grayscale .raw file into an array. The .raw file is 1024x1024, which is the same size as my array. I am using a one-dimensional array.
I have already implemented my heightmapped terrain using GL_QUADS, but would like to use GL_TRIANGLE_STRIP instead, so my question is:
How do you grab the correct triangle-strip values from my array?
(width * row) + (column)
width is the width of your image (duh), row is the y coordinate of the sample you want, and column is the x component you’re searching for.
so if you had an 8x8 image (a 64-element array) and you wanted the element at row 2, column 3 of a 2D array, you would take
8 x 2 + 3, which yields 19.
so yourArray[19] is what you’re after
if you sketch out an 8x8 graph and label each box starting at 0, you can see what I mean.
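The row-major indexing described above can be sketched as a small helper; a minimal sketch, with the function name being my own (not from the original post):

```cpp
#include <cassert>
#include <cstddef>

// Row-major layout: each row is `width` samples long, so jumping
// down `y` rows means skipping y * width elements, then x more.
inline std::size_t heightmapIndex(std::size_t x, std::size_t y, std::size_t width)
{
    return y * width + x;
}
```

For the 8x8 example above, `heightmapIndex(3, 2, 8)` gives 19, matching the worked calculation.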
Also, depending on your application, if you know your array will be a fixed size (1024 wide), you can use:
Value = x + ( y << 10 );
Quicker than a multiply, although some compilers will optimise to this anyway.
However, I wouldn’t personally advise setting things up for fixed image sizes, unless you always know the width is going to be 2^n. In which case, use x + ( y << n ).
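The shift trick above only works when the width is a power of two; a minimal sketch, with the function name my own invention:

```cpp
#include <cassert>
#include <cstddef>

// Shift-based indexing, valid only when width == 2^n.
// For a 1024-wide map n is 10, so (y << 10) == y * 1024.
inline std::size_t heightmapIndexPow2(std::size_t x, std::size_t y, unsigned n)
{
    return x + (static_cast<std::size_t>(y) << n);
}
```

For an 8-wide (n = 3) map, `heightmapIndexPow2(3, 2, 3)` gives the same 19 as the multiply version.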
Ugh! Lots of possibilities. You can flip your heightfield, or you can generate the vertices in the opposite direction; it all depends on the values you are getting (assuming you are doing some kind of lerp between two points to get the patch vertex x/z positions).
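To tie this back to the original question, one common way to grab triangle-strip values from the 1D array is one strip per row of quads, alternating vertices between row y and row y+1. A minimal sketch under assumptions: the struct and function names are mine, heights map straight to the vertex y component, and x/z come directly from the grid coordinates (no lerp):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vertex { float x, y, z; };

// Builds one vertex list per GL_TRIANGLE_STRIP. For each strip we walk
// across the row, emitting two vertices per column: one from row y and
// one from row y+1, which is exactly the zig-zag order a strip expects.
std::vector<std::vector<Vertex>> buildStrips(const std::vector<std::uint8_t>& height,
                                             std::size_t width, std::size_t rows)
{
    std::vector<std::vector<Vertex>> strips;
    for (std::size_t y = 0; y + 1 < rows; ++y) {
        std::vector<Vertex> strip;
        for (std::size_t x = 0; x < width; ++x) {
            // 1D indexing: row * width + column, as in the answers above.
            strip.push_back({ float(x), float(height[y * width + x]),       float(y)     });
            strip.push_back({ float(x), float(height[(y + 1) * width + x]), float(y + 1) });
        }
        strips.push_back(strip);
    }
    return strips;
}
```

Each inner list would then be drawn with one glBegin(GL_TRIANGLE_STRIP)/glEnd() pair (or one glDrawArrays call). If the terrain comes out mirrored, this is where you would reverse the row or column order, as the answer above suggests.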