Heightmap Texture + Vertex Shader = slow

Hi folks:

I am researching ocean surface modeling, and my current task is optimizing the rendering. Moving things onto the graphics card seemed like a good idea – but that didn’t turn out so well. I am wondering if I’m barking up the wrong tree on this, and seek your wisdom.

I am trying to undulate the vertices of a mesh (stored in a display list) according to height values in an array. I loaded the height values into a texture, and mapped that texture onto the mesh. All was well, and I got about 30fps. (The colour texture was, of course, fairly meaningless, as it was height/normal data interpreted as RGBA, but at least it was mapping correctly.)

When I wrote a vertex shader to (instead of using the data as a colour texture) modify the y coordinates of the points, my frame rate dropped to about 2fps.

My question to you: Is this to be expected, or am I doing something horribly wrong? I am running the vertex shader on 2^16 vertices, but the fact that the graphics card can do that no problem when applying a colour texture leads me to believe that something is amiss.

Here is the shader code:

uniform sampler2D texture;

void main () {
    vec4 v = vec4(gl_Vertex);
    v.y = texture2D(texture, gl_MultiTexCoord0.xy).x * 5.0;
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
I’m new at this, and not entirely sure what is important. I’ll be happy to include anything else you need for a diagnosis.

Many thanks,


I assume that when you first mapped the texture onto the mesh, you were retrieving the values in the fragment shader. I’ve found that many implementations only allow texture fetching in the fragment shader; fetching from the vertex shader makes the driver fall back to software mode.

Thanks HexCat.

For the first mapping, I allowed the default pipeline to take over. I understand that cards are highly optimized for this, but I had hoped the roll-my-own-shader hit would not be that dramatic.

Your software-mode hypothesis sounds pretty convincing. Does this mean that using a vertex shader to undulate a surface is just infeasible? Might there be a better way to do it that does not rely on textures?

You might want to take a look at this paper. It documents which formats are supported in the vertex shader for NVIDIA cards, and perhaps others. :)

As far as I know, there are only two formats supported: GL_LUMINANCE_FLOAT32_ATI and GL_RGBA_FLOAT32_ATI.

Yes, you can only use 1-component or 4-component textures.
You cannot use FLOAT16 format - only FLOAT32 is supported.
You cannot use other filtering than GL_NEAREST.

Vertex textures require GeForce 6 to work.
They are not supported on corresponding ATI hardware. Render to vertex buffer should be used instead.
Modern Radeons (the ones with Shader Model 4.0) can do it, but I don’t know whether the drivers expose it. They probably do.
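To make those restrictions concrete, here is a minimal sketch (my own illustration, not code from this thread) of packing height samples into 4-component FLOAT32 texels, with the height in the .x channel to match the shader’s `texture2D(...).x` read. The GL upload calls need a live context, so they are shown in the comment; the key parts are the GL_RGBA_FLOAT32_ATI internal format and GL_NEAREST filtering:

```c
#include <assert.h>
#include <stddef.h>

/* Pack a w x h grid of height samples into 4-component float texels,
 * height in the .x channel (matching texture2D(texture, uv).x in the
 * vertex shader). The layout and names here are my own illustration. */
static void pack_heights_rgba(const float *heights, size_t w, size_t h,
                              float *texels /* must hold w*h*4 floats */)
{
    for (size_t i = 0; i < w * h; ++i) {
        texels[i * 4 + 0] = heights[i]; /* .x = height, read by the shader */
        texels[i * 4 + 1] = 0.0f;       /* spare channels, e.g. for normals */
        texels[i * 4 + 2] = 0.0f;
        texels[i * 4 + 3] = 1.0f;
    }
    /* Upload (requires a GL context; note FLOAT32 format and NEAREST filter):
     *   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
     *   glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
     *   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
     *                (GLsizei)w, (GLsizei)h, 0, GL_RGBA, GL_FLOAT, texels);
     */
}
```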

Thanks k_szczech, knackered and HexCat.

Using GL_RGBA_FLOAT32_ATI as the internal format sped things up, without hyperbole, by a factor of 50+.


Just recently, I added VTF info to the Wiki

Also, you should move up to using GL_ARB_texture_float.
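If I’m reading the extension specs right, the ARB tokens have the same numeric values as the ATI ones, so GL_RGBA32F_ARB should be a drop-in replacement for GL_RGBA_FLOAT32_ATI. The values below are copied from the GL_ATI_texture_float and GL_ARB_texture_float specs rather than pulled from GL headers, so double-check against your own glext.h:

```c
/* Token values as defined in the extension specs. */
#define GL_RGBA_FLOAT32_ATI      0x8814 /* GL_ATI_texture_float */
#define GL_RGBA32F_ARB           0x8814 /* GL_ARB_texture_float */
#define GL_LUMINANCE_FLOAT32_ATI 0x8818
#define GL_LUMINANCE32F_ARB      0x8818

/* Same values, so switching is purely cosmetic at upload time, e.g.:
 *   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_ARB,
 *                w, h, 0, GL_RGBA, GL_FLOAT, texels);
 */
```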


ATI hardware works too, as of the 8.5 drivers (this month’s drivers)!
Probably back as far as the X800 hardware.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.