heightmaps

Why isn’t there an easier method to render evenly spaced heightfields?

Brute force terrain rendering has become a good option, but you still need to upload everything as polygons, which costs a lot of memory. Wouldn’t it be better if you could store the heightmap in a vertex array/VBO or even a texture, call one function with the min & max corner bounds and grid spacing, and have OpenGL render it as triangles, automatically generating texture coordinates etc.?

Or did I miss something crucial here?

I hope I am not misunderstanding your question, but I think the reason it isn’t kept simple is the need to use LOD techniques to increase performance.

However, there are a lot of varied opinions and new trends:

 quote:Originally posted by zeoverlord:
John Carmack said in his 2005 keynote speech that the brute force (= no LOD) method combined with good culling is probably equally as fast as any other LOD method, not to mention simpler and with fewer problems.
I do agree with him on that; everything I have seen in games like WoW suggests that is the case.

quote:Originally posted by Adrian: I strongly disagree with him on that…
We were just having a terrain rendering discussion and you are most welcome to join it: Opengl Terrain Forum Discussion :slight_smile:

Cheers,
Rod

Remdul,
I just explored your homepage.
I saw your engine. Pretty awesome.

You mentioned that you used no LOD or culling technique for your terrain rendering, only a display list. Were you able to measure the performance difference versus using LOD?

What programming environment are you using? Visual Studio C++ 2005?

Congratulations on your engine, really nice work! :wink:
Cheers,
Rod

Why isn’t there an easier method to render evenly spaced heightfields?
So, let me get this straight. You want a specific rendering method, one whose uses are limited to a very specific domain, to be baked into the GPU hardware, and for OpenGL to provide a specific API for it?

And how exactly does this make sense?

You’re talking about a technique that, even when the application uses it, is only a portion of the fully rendered scene.

No, I’m sorry; I see no reason for this to become an OpenGL or hardware feature.

One word: geometry shader :slight_smile:

If it is possible to generate a limitless number of triangles and your G-shader can access a texture, the problem will be solved once and for all.
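Roughly like this, perhaps (an untested sketch in EXT_geometry_shader4 style; input/output primitive types would be set with glProgramParameteriEXT, and all names here are made up):

```glsl
#version 120
#extension GL_EXT_geometry_shader4 : enable
// Sketch: expand each input point (a grid cell's corner, in integer cell
// coordinates) into the cell's two triangles, reading heights from a texture.

uniform sampler2D heightMap;
uniform float cellSize;    // world-space grid spacing
uniform vec2  invMapSize;  // 1.0 / heightmap dimensions

void emitCorner(vec2 cell)
{
    float h = texture2DLod(heightMap, cell * invMapSize, 0.0).r;
    gl_Position = gl_ModelViewProjectionMatrix *
                  vec4(cell.x * cellSize, h, cell.y * cellSize, 1.0);
    EmitVertex();
}

void main()
{
    vec2 cell = gl_PositionIn[0].xz;    // passed through by the vertex shader
    emitCorner(cell);
    emitCorner(cell + vec2(1.0, 0.0));
    emitCorner(cell + vec2(0.0, 1.0));
    emitCorner(cell + vec2(1.0, 1.0));  // 4-vertex strip = two triangles
    EndPrimitive();
}
```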

So, let me get this straight. You want a specific rendering method, one whose uses are limited to a very specific domain, to be baked into the GPU hardware, and for OpenGL to provide a specific API for it?
No. I was thinking about a more generalized feature, not limited to terrains. It could also be used for things like animated water surface rendering, grid data visualization etc. Even though it may be a specific domain, it is a domain trodden rather frequently.

And how exactly does this make sense?

You’re talking about a technique that, even when the application uses it, is only a portion of the fully rendered scene.
I certainly would not expect to upload a huge heightmap and have OpenGL cull and render it efficiently.
Rather, you would have to cut the map up into reasonable portions (say 32x32) and frustum cull chunks on the CPU, so some form of LOD would be in place.

I was thinking about something like the grid evaluators interface, minus the curved surfaces.
Rendering would be done through a single call, e.g.:
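Something like this, perhaps (an entirely hypothetical interface in the spirit of glMap2/glEvalMesh2; none of these entry points exist in OpenGL):

```cpp
/* Hypothetical API sketch -- none of these functions exist in OpenGL.
   The height data could come from a plain array or a bound VBO. */
glHeightFieldPointer(GL_FLOAT, 0, heights);       /* 33x33 height samples      */
glHeightFieldGrid(33, 33,                         /* grid resolution           */
                  -16.0f, -16.0f,                 /* min corner (x, z)         */
                   16.0f,  16.0f);                /* max corner (x, z)         */
glEvalHeightField(GL_TRIANGLES,                   /* emit triangles            */
                  0, 32, 0, 32,                   /* first/last row and column */
                  1);                             /* stride: every vertex      */
```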

No, I’m sorry; I see no reason for this to become an OpenGL or hardware feature.
The point is that I believe it is a waste to double or triple the amount of data (at best) to render height fields.
I’d like to emphasize that the issue lies more in data transfer/storage than in the actual rendering itself. That is where there is room for improvement.

Do you have a better suggestion/alternative, Korval?

Originally posted by Zengar:
One word: geometry shader :slight_smile:
I do not know much about geometry shaders, but it sounds like a possible solution. Still, a more straightforward, general-purpose, shaderless alternative would be interesting (if only for simplicity) IMHO.

Originally posted by Rodrix:
Remdul, I just explored your homepage.
I saw your engine. Pretty awesome.

Don’t take that too seriously; I wrote that several years ago. That stuff looks better than it technically is. :wink:

If all you are worried about is memory usage/speed of upload, here is what you can do:

Define a vertex buffer of heights (your example of 32x32) for all the different heights you want to render - just a stream of single floats.

Define a buffer of the x/y spacing of the heightmap for each vertex in the grid (32x32 values), e.g. (0,0), (0,1), (0,2), etc.

Then, to render, bind both buffers as input and use a vertex program to construct the vertex position (e.g. take the z from the first buffer and the x/y from the second).

This way, you can reuse the second x/y buffer for all render calls, and you only have to upload the height values.
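In GLSL, the vertex program for this can be just a few lines (the attribute names here are made up for illustration):

```glsl
// Sketch of the two-stream idea above: gridXY is uploaded once and
// reused for every chunk; only the height stream changes per draw call.
attribute vec2  gridXY;   // x/y spacing buffer, 32x32 values
attribute float height;   // per-chunk heights, one float per vertex

void main()
{
    // z from the height buffer, x/y from the reusable grid:
    vec4 pos = vec4(gridXY, height, 1.0);
    gl_TexCoord[0] = vec4(gridXY / 31.0, 0.0, 1.0);  // texcoords derived from the grid
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```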

Do you have a better suggestion/alternative, Korval?
Yes. What sqrt[-1] said. Or wait for geometry shaders.

When rendering large terrains with a brute force approach you will end up with lots of very small polygons, and these are very inefficient to render. Not only are there more vertices to transform per pixel, but modern GPUs also render many pixels simultaneously, and with 5-10 pixel polygons this ability is wasted.
Since you are concerned about memory consumption, I assume you’re talking about large terrains. These will always require LOD algorithms.
You can actually render terrains with a large number of polygons and dynamic LOD using only one small vertex array stored on the GPU, with very minimal CPU usage. It’s all about reusing the same vertex array with different transformations.
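For example, the render loop can be as simple as this (chunk list and variable names are illustrative only):

```cpp
/* Sketch: one small grid VBO reused for every visible chunk, placed by a
   per-chunk transform. Height displacement itself would be applied in a
   vertex program, e.g. as sqrt[-1] describes above. */
glBindBuffer(GL_ARRAY_BUFFER, gridVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gridIBO);
glVertexPointer(3, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
for (int i = 0; i < numVisibleChunks; ++i)          /* CPU-culled list */
{
    glPushMatrix();
    glTranslatef(chunks[i].x, 0.0f, chunks[i].z);   /* place the chunk */
    glScalef(chunks[i].size, 1.0f, chunks[i].size); /* scale per LOD   */
    glDrawElements(GL_TRIANGLE_STRIP, numIndices, GL_UNSIGNED_SHORT, 0);
    glPopMatrix();
}
glDisableClientState(GL_VERTEX_ARRAY);
```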

Originally posted by remdul:
Why isn’t there an easier method to render evenly spaced heightfields?

Brute force terrain rendering has become a good option, but you still need to upload everything as polygons, which costs a lot of memory. Wouldn’t it be better if you could store the heightmap in a vertex array/VBO or even a texture, call one function with the min & max corner bounds and grid spacing, and have OpenGL render it as triangles, automatically generating texture coordinates etc.?

Or did I miss something crucial here?
One thing I have considered doing is creating a regular static grid of vertices and tex coords, then providing the heights as a float vertex array that is added to the static grid vertices in a vertex program to create the final positions of the height field.

Then, as another step, the static geometry is reused as a tile many times, with a separate modelview matrix for each tile to place it in its final position, and a separate height field array for each. This would reduce the amount of data required to be downloaded to and stored on the GPU.

The problem with this approach is that we’ll need to compute the normals somewhere: either in the fragment program, or perhaps by passing into the vertex program a vec3 array instead of the float array, with this vec3 encoding the height and gradients.
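If the vec3 route is taken, the vertex program might look like this (the attribute layout is just an example, not a tested implementation):

```glsl
// Sketch: one vec3 per vertex packs (height, dh/dx, dh/dz); the static
// grid supplies x/z, and the normal falls out of the two gradients.
attribute vec3 heightAndGrad;   // x = height, y = dh/dx, z = dh/dz

void main()
{
    vec4 pos = gl_Vertex;        // static grid vertex, y == 0
    pos.y += heightAndGrad.x;    // displace by the height field
    // For a heightfield y = h(x,z) the surface normal is (-dh/dx, 1, -dh/dz):
    vec3 normal = normalize(vec3(-heightAndGrad.y, 1.0, -heightAndGrad.z));
    gl_FrontColor = vec4(vec3(0.5 + 0.5 * normal.y), 1.0);  // crude shading, for illustration
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```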

I have previously played with using vertex texturing to get the height field, but alas the performance sucked for a large number of vertices and high-res textures.

You could potentially use geometry shaders, but I don’t think this will gain you much over just reusing a large static mesh.

The advantage of the above technique is that it won’t rely on bleeding-edge features; as long as you have vertex program support you are set.

However, I haven’t tried this technique yet… it should work just fine though, and it should even be able to handle geocentric terrain databases if you encode the curvature and normal variation into the static mesh.

Robert.

Yes. What sqrt[-1] said. Or wait for geometry shaders.
Geometry shaders are undoubtedly interesting. sqrt[-1]’s solution probably works fine (as do others), but I think it’s still not as efficient as it could or should be.

When rendering large terrains with a brute force approach you will end up with lots of very small polygons, and these are very inefficient to render. Not only are there more vertices to transform per pixel, but modern GPUs also render many pixels simultaneously, and with 5-10 pixel polygons this ability is wasted.
Good point. However, a stride parameter would allow you to render every Nth vertex row/column, so you could render at a lower LOD. And because you specify the min/max row/column extents, you can leave a void between two chunks to stitch two different LOD levels with triangles in any scheme you please. Layer a vertex shader on top and you can do geomorphing (among other things).
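With a stride argument on a call like the hypothetical glEvalHeightField sketched earlier, a distant chunk could be drawn at quarter resolution, with the extents pulled in to leave room for the stitching triangles:

```cpp
/* Same hypothetical API as sketched earlier: stride 4 renders every 4th
   row/column (quarter-resolution LOD); extents leave a stitching border. */
glEvalHeightField(GL_TRIANGLES, 4, 28, 4, 28, 4);
```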

Since you are concerned about memory consumption, I assume you’re talking about large terrains. These will always require LOD algorithms.
I never meant to give anyone the idea that some form of LOD wouldn’t be involved; I’m sorry I wasn’t clear on this.

I wasn’t aiming specifically at large terrains (or terrains at all); any size, really. If you have a really small terrain you might just render it in one call, if that works for you.
If you have a huge terrain you can chop it up into chunks, cull with a quad-tree, cache/uncache, or store it once in a very large VAR/VBO and only render what is visible, etc. There are tons of papers on this; I don’t really want to go into that.
What I suggested doesn’t change the way you render things, it just makes it more (memory) efficient.

I believe that a cheaper way to render regularly gridded heightfield chunks would be desirable. I think OpenGL would really benefit from such a feature (again, it is not just for terrains). Or is it just me?

I mean, why not? The alternatives are obviously there, and they may do the job, but they are still complicated, inefficient and ‘hacky’.

All the GPU has to do is generate the X and Z components, fill Y with the height from a VAR/VBO, and pass it on like any other vertex. It is not unlike what automatic texture coordinate generation, or evaluators, do.
I know this is a dangerous thing to say around here, but I think it wouldn’t even be hard to implement on hardware.
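Expressed as a shader, just to illustrate how little per-vertex work is involved (an untested sketch assuming EXT_gpu_shader4-class hardware; the uniform names are made up):

```glsl
#version 120
#extension GL_EXT_gpu_shader4 : enable
// Sketch of the per-vertex work described above: derive X/Z from the
// vertex index, take Y from the height stream, generate texcoords.

attribute float height;    // one float per vertex, from the VAR/VBO
uniform int   gridWidth;   // vertices per grid row
uniform float spacing;     // world units between grid points

void main()
{
    float x = float(gl_VertexID % gridWidth) * spacing;
    float z = float(gl_VertexID / gridWidth) * spacing;
    gl_TexCoord[0] = vec4(x, z, 0.0, 1.0);   // "automatic" texcoord generation
    gl_Position = gl_ModelViewProjectionMatrix * vec4(x, height, z, 1.0);
}
```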

I’m not a pro, my apologies for my ignorance, but I just can’t think of a reason why there’s no function to render regular grids efficiently. If someone has a good argument why this feature would NOT be desirable, please reply so we can stop wasting our time in this thread. :slight_smile:

Originally posted by remdul:
All the GPU has to do is generate the X and Z components, fill Y with the height from a VAR/VBO, and pass it on like any other vertex. It is not unlike what automatic texture coordinate generation, or evaluators, do.

Uh, you do realise that automatic texture coordinate generation is emulated by vertex programs nowadays, and that evaluators have always been software?
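Object-linear texgen, for instance, reduces to a couple of dot products against the built-in plane uniforms:

```glsl
// GL_OBJECT_LINEAR texgen for texture unit 0, emulated in a vertex shader:
void main()
{
    // s/t from the object-space texgen planes (gl_ObjectPlaneS/T are GL built-ins):
    gl_TexCoord[0] = vec4(dot(gl_Vertex, gl_ObjectPlaneS[0]),
                          dot(gl_Vertex, gl_ObjectPlaneT[0]), 0.0, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```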

The ARB is not going to add a feature to fixed function that can be done using vertex/geometry programs. (In fact, in OpenGL 3 “Pure” you won’t have any of this fixed-function stuff; to use all the new features you are going to have to start using shaders.)

Why isn’t there an easier method to render evenly spaced heightfields?
Personally, I think heightmaps stink.

Well, OK, they’re fast and simple. But that’s about it.

Unfortunately, being fast and simple doesn’t make heightmaps look any better.

But then again, they are … fast and simple.

Does this make any sense?

If someone has a good argument why this feature would NOT be desirable, please reply so we can stop wasting our time in this thread.
I don’t think it should work that way. It certainly shouldn’t be stated that way, unless you want to alienate and possibly irritate your audience. It most assuredly doesn’t work that way in the scientific community, for example. Researchers don’t just blurt out random claims and then ask the rest of the community to prove them wrong (or out of their minds). Granted, this forum is much less formal, but some measure of the scientific process would better serve your case in general, I should think. Demonstrate why we can’t live without this, how it simply can’t be done efficiently any other way, how it’s cheap and easy to implement, and so forth, and so on.

The way I see it is basically this: for any new feature added, the question is whether it calls for new hardware or new software. New hardware is expensive. New software, while potentially expensive in its own right, could be implemented by the driver, or better still, by the programmer, using shaders, for instance. In the latter case, both generality and re-usability are served (a very good thing).

Now, given bottomless pockets and time wells, everything could be implemented in hardware, and then almost everyone would be happy :wink:

If someone has a good argument why this feature would NOT be desirable, please reply so we can stop wasting our time in this thread.
Wow, that’s a really great argument: “Desirability”. IHVs should just do what we “desire”.

I desire lots of things from my GPU. I desire a GPU that supports framebuffer read-back in the fragment shader, regardless of how impossible this may be for modern GPU architecture. In fact, I also desire that the framebuffer be readable in the vertex shader.

I’d also like floating-point framebuffers that didn’t have a cost in terms of speed or memory.

Writing shaders is “complicated, inefficient and ‘hacky’” too. I desire an end to that, and a future where I just select from an infinite number of features when I render a mesh.

:rolleyes: :rolleyes:

Just because you “desire” something doesn’t make it a good idea for a hardware API.

All right, that is the kind of answer I wanted to hear. I thought there was a technical reason, but it appears that there’s simply no demand for it; pretty clear to me now.

I don’t think it should work that way. It certainly shouldn’t be stated that way, unless you want to alienate and possibly irritate your audience.
I know it doesn’t and shouldn’t work that way. I posted that edgy remark because I wanted to hear a clear answer to my original question rather than alternatives (though those are always welcome). And I honestly didn’t want to waste anyone’s time.

Other than that, I completely agree with you, Leghorn, Korval, sqrt[-1].

Again, I apologise to those offended and for my improper wording.