Single pixel cracking issue

I’m working with a terrain height map. It’s a 2D grid that I subdivide into power-of-two cells based on slope: high-slope areas get finer subdivisions, low-slope areas get coarser ones. I then convert these subdivisions into quad lists.
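Roughly, the subdivision step looks like the sketch below (simplified, and the names are just for illustration, not my actual code):

    #include <algorithm>
    #include <vector>

    struct Quad { int x, y, size; };   // cell origin and edge length, in grid units

    // Crude slope estimate for one cell: largest height difference across the
    // cell divided by its edge length. 'h' is a dim x dim grid of heights, so
    // the top-level cell is (dim - 1) grid cells across.
    float maxSlope(const std::vector<float>& h, int dim, int x, int y, int size)
    {
        float lo = h[y * dim + x], hi = lo;
        for (int j = y; j <= y + size; ++j)
            for (int i = x; i <= x + size; ++i) {
                lo = std::min(lo, h[j * dim + i]);
                hi = std::max(hi, h[j * dim + i]);
            }
        return (hi - lo) / float(size);
    }

    // Recursively split a cell into four children until its slope is under the
    // limit (or it is a single grid cell), collecting the surviving quads.
    void subdivide(const std::vector<float>& h, int dim, int x, int y, int size,
                   float slopeLimit, std::vector<Quad>& out)
    {
        if (size == 1 || maxSlope(h, dim, x, y, size) <= slopeLimit) {
            out.push_back({x, y, size});
            return;
        }
        int half = size / 2;
        subdivide(h, dim, x,        y,        half, slopeLimit, out);
        subdivide(h, dim, x + half, y,        half, slopeLimit, out);
        subdivide(h, dim, x,        y + half, half, slopeLimit, out);
        subdivide(h, dim, x + half, y + half, half, slopeLimit, out);
    }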

Of course, when an edge of a larger quad lies next to two (or more) smaller quads, I have to adjust the smaller quads’ vertices to avoid cracking. This works great, except that occasionally I get single-pixel holes right where I have fixed cracks.

You would think my math is off when I compute the Z value a smaller quad’s vertex needs to close the crack, but I’ve looked closely and it seems very precise. I thought it was floating-point precision issues, so I tried using double precision, but there was no change at all. I still get the same number of little holes.

The holes are always only a single pixel and only on the edges that I have fixed, though not necessarily at the vertex that I fixed. It’s as if my edge equation doesn’t match what OpenGL is doing (I’m running on an NVIDIA card). But I would think the edge equation is really simple: I just do a simple linear interpolation of height values.
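To be concrete, the crack fix amounts to interpolating along the larger quad’s edge, something like this (a simplified sketch with made-up names):

    // Sketch of the crack fix: the smaller quad's shared edge vertex is forced
    // onto the straight line between the larger quad's two edge endpoints.
    // t is the fractional position of the small vertex along the large edge
    // (0.5 when one large quad meets two small ones).
    float snapToLargeEdge(float zStart, float zEnd, float t)
    {
        return zStart + t * (zEnd - zStart);   // plain linear interpolation
    }

    // For a 2:1 split, the shared midpoint vertex gets
    //   smallVertex.z = snapToLargeEdge(largeA.z, largeB.z, 0.5f);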

What could I be missing? Does OpenGL do something where a simple linear interpolation would cause my edges (for the smaller quads) to not line up with the edges it creates (for the adjoining larger quad)?

Is there a way to get around this, using overlap of adjoining quads, or something?

Thanks.

  1. Graphics cards do not support doubles internally, so no wonder you don’t see any difference.

  2. Such cracks can occur with very small differences in floating point values. The most reliable way to prevent such issues is to use a scheme like this:
    http://www.gamasutra.com/view/feature/1754/binary_triangle_trees_for_terrain_.php

EDIT: Look closely at figure 3.

Here all T-junctions have been eliminated, which prevents differences in precision between CPU and GPU.
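In other words, the larger quad is split so that it actually references the neighbour’s midpoint vertex instead of leaving it dangling on its long edge. A rough sketch of the idea, with indices into a shared vertex array (the layout and names are only illustrative):

    #include <vector>

    typedef unsigned int Index;

    // Sketch: a coarse quad with corners a, b, c, d (counter-clockwise) whose
    // edge a-b carries the finer neighbour's midpoint vertex m. Emitting the
    // quad as three triangles that all use m removes the T-junction, because
    // both sides of the edge now rasterize with exactly the same vertices.
    void emitQuadWithMidpoint(Index a, Index b, Index c, Index d, Index m,
                              std::vector<Index>& indices)
    {
        indices.insert(indices.end(), { m, b, c,    // fan around m covers the
                                        m, c, d,    // whole pentagon a-m-b-c-d
                                        m, d, a });
    }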

Hope that helps,
Jan.

Thanks. Yes, I can see how avoiding T-junctions (as that example shows) fixes the problem of slight differences in floating-point precision between the CPU and GPU. That makes sense.

I also found that if I turn on multi-sampling, even just a little, it largely gets rid of the problem. Of course, this slows things down a little, but it also gives me anti-aliasing, so the scene looks better in general.
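For reference, enabling it is roughly the following (this sketch uses GLFW to request the multisampled framebuffer, which is just one way to do it):

    #include <GLFW/glfw3.h>

    #ifndef GL_MULTISAMPLE
    #define GL_MULTISAMPLE 0x809D   // not always present in old GL headers
    #endif

    int main()
    {
        glfwInit();
        glfwWindowHint(GLFW_SAMPLES, 4);     // request a 4x multisampled framebuffer
        GLFWwindow* window = glfwCreateWindow(800, 600, "terrain", nullptr, nullptr);
        glfwMakeContextCurrent(window);

        glEnable(GL_MULTISAMPLE);            // usually on by default, but be explicit

        while (!glfwWindowShouldClose(window)) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the terrain quad lists here ...
            glfwSwapBuffers(window);
            glfwPollEvents();
        }
        glfwTerminate();
    }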

>>will fix the problem of slight differences in floating point precision between the CPU and GPU

Even if both are done on the GPU, you will see the same problem.

Try drawing the same model once with

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

and once with

gl_Position = ftransform();

You will see z-fighting.

Games have shipped with T-junction problems, e.g. Quake 3 (see the curved doorways), so it is perhaps not a game-breaker.

Another solution (which I’ve heard people use, but it’s slower and messier), if you don’t want to tessellate the larger triangle’s edge to match the smaller one, is to create a small skirt between the patches that hides the problem somewhat.
Perhaps even easier: just draw a single quad under the terrain (in the right colors/textures). That way, if cracks appear, viewers will see something similar through them instead of, say, the skybox shining through, so the error will be a lot less glaring.
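A rough sketch of the skirt idea, just the geometry part (the names are made up):

    #include <vector>

    struct Vertex { float x, y, z; };        // z is the height, as in the posts above

    // For each vertex along a patch border, emit a copy pushed straight down,
    // and connect consecutive pairs into quads. The skirt hides cracks between
    // neighbouring patches of different resolution.
    void buildSkirt(const std::vector<Vertex>& border,   // border vertices in order
                    float skirtDepth,                    // how far to pull the skirt down
                    std::vector<Vertex>& skirtQuads)     // output, 4 vertices per quad
    {
        for (size_t i = 0; i + 1 < border.size(); ++i) {
            Vertex top0 = border[i];
            Vertex top1 = border[i + 1];
            Vertex bot0 = { top0.x, top0.y, top0.z - skirtDepth };
            Vertex bot1 = { top1.x, top1.y, top1.z - skirtDepth };
            skirtQuads.push_back(top0);                  // quad: top0, top1, bot1, bot0
            skirtQuads.push_back(top1);
            skirtQuads.push_back(bot1);
            skirtQuads.push_back(bot0);
        }
    }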

Ah, that’s interesting that even the GPU can have this problem. I hadn’t thought of that, but of course walking down an edge necessarily involves floating-point precision errors that accumulate with each scan line.

On your suggestion of drawing a quad under the terrain, that won’t work in my application. I am not doing a game. My application is scientific visualization and the “terrain” isn’t a normal terrain. It’s the surface of a measured part and the user is free to look under the surface as well as look at it from the top.

We’ve actually lived with this problem for some time as it isn’t very noticeable, but it’s one of those things where it bugs you and you want to finally fix it. The multi-sample solution seems to work for now as it doesn’t slow things down too much, at least on the level of boards we use.

Tessellating the larger edge will be a better long-term approach for us, but I don’t have the time to implement it right now.

When the GPU rasterizes two adjacent triangles that use EXACTLY THE SAME vertices on an edge, you are guaranteed not to get such pixel errors.

To accomplish this, you need to share the vertices and render your triangles indexed, such that you can reuse the same vertices several times.
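A minimal sketch of what I mean, using old-style vertex arrays for brevity (the data is made up; the point is only that both triangles index the very same edge vertices):

    #include <GL/gl.h>

    // Two triangles that meet along the edge v1-v2 and share those vertices
    // through the index buffer, so the rasterizer sees bitwise-identical edge
    // vertices for both triangles.
    static const GLfloat vertices[] = {
        0.0f, 0.0f, 0.0f,   // v0
        1.0f, 0.0f, 0.2f,   // v1 -- on the shared edge
        1.0f, 1.0f, 0.5f,   // v2 -- on the shared edge
        2.0f, 0.5f, 0.1f,   // v3
    };
    static const GLuint indices[] = {
        0, 1, 2,            // triangle A uses v1 and v2
        1, 3, 2,            // triangle B reuses the very same v1 and v2
    };

    void drawSharedEdge()
    {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, vertices);
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, indices);
        glDisableClientState(GL_VERTEX_ARRAY);
    }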

When you simply upload the same vertex twice, once for triangle A and once for triangle B, you usually get crack-free rasterization as well, but some years ago I read that that need not be the case. I have never encountered such problems myself, though. It probably isn’t an issue on today’s hardware.

Jan.

BTW: As far as I know, rasterization of pixels is done using integer-only algorithms, so at that stage floating-point precision should not be involved.

Whether rasterization is done in floating point or integer is irrelevant, since both have precision issues. What matters is how many bits are used to represent the fractional step that must be made on each raster line as the edge walk moves toward the end vertex. Each step loses some precision, and the only question is whether you have enough bits that the loss is still insignificant by the time you reach the end vertex.
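As a back-of-the-envelope illustration (not how any particular GPU rasterizer actually works), stepping an edge with a per-line increment truncated to a limited number of fractional bits misses the end vertex by an amount that depends on those bits:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    // Walk an edge from x0 to x1 over 'steps' scan lines, with the per-line
    // increment stored in fixed point with 'fracBits' fractional bits, and
    // report how far we end up from the exact endpoint.
    double accumulatedError(double x0, double x1, int steps, int fracBits)
    {
        const double scale = double(int64_t(1) << fracBits);
        int64_t dxFixed = int64_t(std::floor((x1 - x0) / steps * scale));
        int64_t xFixed  = int64_t(std::floor(x0 * scale));
        for (int i = 0; i < steps; ++i)
            xFixed += dxFixed;                 // each step truncates a little
        return x1 - double(xFixed) / scale;
    }

    int main()
    {
        const int bitsToTry[] = { 4, 8, 16, 24 };
        for (int bits : bitsToTry)             // the miss shrinks as the bits grow
            std::printf("%2d fractional bits -> off by %.6f pixels\n",
                        bits, accumulatedError(10.3, 500.7, 377, bits));
    }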

I sure hope one doesn’t have to do indexing to achieve crack-free rasterization and that the problem no longer exists on today’s hardware. Anybody know about that?