I’m looking into making a heightmap in OpenGL. I’ve got the data I need, but it’s extremely detailed. At the moment I’m reading the data from a file into a vertex array, building an index array from it, and drawing it with GL_TRIANGLES in a glDrawElements call. The index array has about 4.3 million elements. It looks like it’s drawing the data OK, but since I only have made-up test data the output is just a mess, so I can’t tell whether it’s drawing accurately. What is the maximum number of elements I could display on screen without incurring a serious performance hit? Initial loading time isn’t really an issue; it’s the delay when moving the camera that would be a problem. I’d say a delay of about two seconds is the limit. Any ideas? Thanks.
What is the maximum number of elements I would be able to display on screen without incurring a serious performance hit?
It depends on a lot of factors. The video card in question is one of the primary ones, but there is also the complexity of the shaders you’re using, z-culling features, how much overdraw there is in your scene, and so on.
In short, there’s no simple answer for you without a lot more information, and even then the answer is guesswork. Best to try it and see what happens.
Just to give you an idea: high-end cards can transform around 100-150 million triangles per second (textured, with simple lighting). If you aim for 60 FPS, that gives you a budget of around 2 million triangles per frame. If you use a regular grid without level of detail, that limits your height map to a size of around 1000x1000.
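The arithmetic behind that budget can be sketched as follows (the throughput figure is the rough estimate quoted above, not a measured value):

```cpp
#include <cstdint>

// Rough per-frame triangle budget from an assumed GPU throughput.
// These are ballpark estimates, not measurements.
constexpr uint64_t trianglesPerSecond = 120'000'000; // ~100-150M for a high-end card
constexpr uint64_t targetFps          = 60;
constexpr uint64_t trianglesPerFrame  = trianglesPerSecond / targetFps; // ~2 million

// A regular grid of n x n vertices has (n-1)^2 cells, two triangles each.
constexpr uint64_t gridTriangles(uint64_t n) {
    return (n - 1) * (n - 1) * 2;
}
// gridTriangles(1000) is just under 2 million, hence the ~1000x1000 limit.
```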
This covers only vertex processing. Depending on what other techniques you use (shading, texturing, shadows, culling), these numbers will vary. As Korval says, there are many factors to consider…
Nicolai de Haan Brøgger
The most important thing to do is split the terrain up into chunks/blocks of 33x33 or 65x65 vertices.
Stick a bounding box around each chunk.
Each frame, test whether the bounding box is within the view frustum; if so, draw the chunk.
Performance will be better because:
A/ typically not all blocks are visible (thus less is drawn);
B/ hardware usually likes bite-size chunks, i.e. splitting up a huge data set into smaller pieces is usually faster.
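The per-chunk visibility test above can be sketched as a plane-vs-AABB check (a minimal sketch: the Plane and AABB types are illustrative, and extracting the six planes from the view-projection matrix is omitted):

```cpp
#include <array>

struct Plane { float a, b, c, d; }; // a*x + b*y + c*z + d = 0, normal pointing into the frustum
struct AABB  { float min[3], max[3]; }; // one chunk's bounding box

// Conservative test: true if the box is at least partly on the inner
// side of every frustum plane, i.e. the chunk is potentially visible.
bool aabbInFrustum(const AABB& box, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        // Pick the box corner farthest along the plane normal (the "positive vertex").
        float x = p.a >= 0 ? box.max[0] : box.min[0];
        float y = p.b >= 0 ? box.max[1] : box.min[1];
        float z = p.c >= 0 ? box.max[2] : box.min[2];
        if (p.a * x + p.b * y + p.c * z + p.d < 0)
            return false; // box is fully outside this plane -> cull the chunk
    }
    return true; // draw this chunk
}
```

Each frame you would loop over the chunks, run this test against each chunk’s box, and issue a glDrawElements call only for the chunks that pass.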