Originally posted by remdul:
[b] Why isn’t there an easier method to render evenly spaced heightfields?
Brute force terrain rendering has become a good option, but you still need to upload everything as polygons, which costs a lot of memory. Wouldn’t it be better if you could store the heightmap in a vertex array/VBO or even a texture, call one function with the min & max corner bounds and grid spacing, and have OpenGL render it as triangles, automatically generating texture coordinates, etc.?
Or did I miss something crucial here? [/b]
One thing I have considered doing is creating a regular static grid of vertices and tex coords, then providing the heights as a float vertex array that is added to the static grid vertices in a vertex program to produce the final positions of the height field.
Then, as another step, the static geometry is reused as a tile many times, with a separate modelview matrix for each tile to place it in its final position, and a separate height field array per tile. This would reduce the amount of data that has to be downloaded to and stored on the GPU.
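To make the idea concrete, here is a hypothetical CPU-side sketch of what the vertex program and per-tile setup would compute. The grid size, spacing, and all names are illustrative assumptions, not anything from an actual implementation:

```python
# One static grid of (x, z) positions and texcoords, shared by every tile;
# each tile contributes only a translation (its separate modelview) and a
# per-vertex float height array. The vertex program would do this same
# addition on the GPU. TILE and SPACING are illustrative choices.
TILE = 4        # vertices per side of the shared static grid
SPACING = 1.0
tile_extent = (TILE - 1) * SPACING

# Built once, uploaded once, reused for every tile.
static_xz = [(i * SPACING, j * SPACING) for j in range(TILE) for i in range(TILE)]
texcoords = [(i / (TILE - 1), j / (TILE - 1)) for j in range(TILE) for i in range(TILE)]

def tile_positions(tile_i, tile_j, heights):
    """Final vertex positions: per-tile translation + static grid + height."""
    ox, oz = tile_i * tile_extent, tile_j * tile_extent
    return [(ox + x, h, oz + z) for (x, z), h in zip(static_xz, heights)]

heights = [0.0] * (TILE * TILE)
heights[5] = 2.5                       # raise one vertex of tile (1, 0)
verts = tile_positions(1, 0, heights)
```

The point of the split is that per tile only TILE*TILE height floats travel to the GPU, instead of a full per-vertex position + texcoord set.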
The problem with this approach is that we’ll need to compute the normals somewhere: either in the fragment program, or perhaps by passing into the vertex program a vec3 array instead of the float array, with the vec3 encoding the height and gradients.
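The vec3-per-sample variant could be precomputed roughly like this. This is only a sketch under assumed names and sizes: gradients come from central differences (clamped at the border), and the vertex program would rebuild the normal as normalize(-gx, 1, -gz):

```python
import math

# Precompute a vec3 (height, dh/dx, dh/dz) per sample of a row-major height
# field, so the vertex program gets height and gradients in one attribute.
W, SPACING = 4, 1.0   # illustrative field size and grid spacing

def gradients(height, w, spacing):
    """Per-sample (h, dh/dx, dh/dz), central differences, clamped at edges."""
    def h(i, j):
        return height[min(max(j, 0), w - 1) * w + min(max(i, 0), w - 1)]
    out = []
    for j in range(w):
        for i in range(w):
            gx = (h(i + 1, j) - h(i - 1, j)) / (2 * spacing)
            gz = (h(i, j + 1) - h(i, j - 1)) / (2 * spacing)
            out.append((h(i, j), gx, gz))
    return out

def normal(gx, gz):
    """Unit normal of the surface y = h(x, z) from its gradients."""
    inv = 1.0 / math.sqrt(gx * gx + 1.0 + gz * gz)
    return (-gx * inv, inv, -gz * inv)

# A field that ramps up along x: interior gradient is (1, 0).
heights = [float(i) for j in range(W) for i in range(W)]
hg = gradients(heights, W, SPACING)
```

Doing this on the CPU keeps the vertex program trivial; the cost is three floats per sample instead of one.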
I have previously played with using vertex texturing to get the height field, but alas the performance sucked for a large number of vertices and high-res textures.
You could potentially use geometry shaders, but I don’t think this will gain you much over just reusing a large static mesh.
The advantage of the above technique is that it won’t rely on bleeding edge features; as long as you have vertex program support you are set.
However, I haven’t tried this technique yet… it should work just fine though, and should even be able to handle geocentric terrain databases if you encode the curvature and normal variation into the static mesh.
Robert.