What should I do to upload large vertex meshes to the GPU without blocking the host program and the rendering context?
My program generates a 3D heightmap from grayscale images several times per second. There are two bottlenecks: analyzing the image and generating the vertices/normals, and compiling the data into a display list and uploading it to the GPU.
- Create a second thread for processing the data, with a shared rendering context for uploading the vertices without blocking the primary context?
- Replace the display list with a VBO?
- Upload the image to the GPU and let it create the vertices and normals (is this possible with a geometry shader)?
What would you do?
Combine the last two options. Create a static VBO containing a flat grid, upload the heightmap to the GPU as a texture, and in the vertex shader modify the vertex position and normal according to the heightmap data.
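A minimal sketch of the displacement pass, assuming the heightmap is bound as a texture and the static grid lies in the XZ plane (the names `heightMap`, `heightScale`, and `uMVP` are illustrative, not from the question):

```glsl
#version 330 core
// Displace a flat XZ grid by a heightmap texture in the vertex shader.
uniform sampler2D heightMap;   // grayscale heightmap uploaded once per frame
uniform float heightScale;     // world-space height of a white texel
uniform mat4 uMVP;

in vec2 gridPos;               // flat-grid position in [0,1]^2, reused as texcoord
out float vHeight;

void main() {
    float h = texture(heightMap, gridPos).r;        // sample height (red channel)
    vec3 displaced = vec3(gridPos.x, h * heightScale, gridPos.y);
    vHeight = h;
    gl_Position = uMVP * vec4(displaced, 1.0);
}
```

Because the grid VBO never changes, only the heightmap texture needs re-uploading each frame, which is far cheaper than rebuilding a display list.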
If the grid/terrain is large you will be better off using something like geometry clipmaps. The basic idea is to cover the whole terrain with a few fixed grid “patches”; the main trick is in how the patches are placed so they do not shift visibly when the camera moves, while still maintaining the highest resolution around the camera.
The vertex shader seems to be the logical place to adjust the height coordinates, but how can the normals be adjusted? A vertex normal is the (possibly weighted) average of the face normals of all the triangles that share the vertex. Since a vertex shader doesn’t have this information, how can it compute the normal?
OK, obviously I can use gradient or partial-derivative functions in the shaders to extract the normals from the height-data texture itself. I should have thought a little more about that before implementing, but I’m still learning OpenGL…
Since you’re sampling the height from a heightmap in the vertex shader, you can calculate your normals by taking a couple more samples, building two tangent vectors from them, and using their cross product to get the normal.
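A sketch of that neighbour-sampling idea, assuming the same (hypothetical) `heightMap` and `heightScale` uniforms as the displacement pass, plus a `texelSize` uniform giving one heightmap texel in UV units. This version uses central differences (four extra samples) for a smoother result; the two-sample forward difference described above works the same way:

```glsl
// Reconstruct a vertex normal from neighbouring heightmap samples.
uniform sampler2D heightMap;
uniform float heightScale;
uniform vec2 texelSize;   // 1.0 / heightmap resolution

vec3 heightmapNormal(vec2 uv) {
    // Heights of the left/right and down/up neighbours.
    float hL = texture(heightMap, uv - vec2(texelSize.x, 0.0)).r;
    float hR = texture(heightMap, uv + vec2(texelSize.x, 0.0)).r;
    float hD = texture(heightMap, uv - vec2(0.0, texelSize.y)).r;
    float hU = texture(heightMap, uv + vec2(0.0, texelSize.y)).r;
    // Two tangent vectors spanning the surface at this vertex.
    vec3 tangentX = vec3(2.0 * texelSize.x, (hR - hL) * heightScale, 0.0);
    vec3 tangentZ = vec3(0.0, (hU - hD) * heightScale, 2.0 * texelSize.y);
    // Their cross product is the surface normal (ordered so it points up, +Y).
    return normalize(cross(tangentZ, tangentX));
}
```

On a flat heightmap this returns (0, 1, 0), and on a slope the Y component shrinks as the gradient grows, matching what per-face averaging on the CPU would give.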