Multipass rendering and z-fighting


In the beginner’s forum, I discovered that I am having a z-fighting problem, and it was suggested that multipass rendering is one way to solve this. However, no one was able to tell me how to do a multipass render, so I thought I would ask this forum, as this may be considered an advanced topic.

I have a near-field object (less than 5 units from the camera) and a far-field object over 30 million units away. Obviously, even a 32-bit depth buffer is not enough to accurately resolve objects across that range, resulting in rendering artifacts on the far object due to z-fighting. I cannot simply push the near clipping plane out to fix this, because then my near-field object would be clipped away.

So, I’m guessing that I have to set up a far-field viewing frustum, render the far object, set up a near-field viewing frustum, render the near object, then swap buffers.

Would this be the way the code should look?
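Something along these lines, maybe (function names and plane values are made up):

```
// Pass 1: far field
SetFrustum(1.0e6, 40.0e6);   // near/far planes for the far pass
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
DrawFarFieldObject();

// Pass 2: near field, keeping the color buffer but starting the depth over
SetFrustum(0.1, 100.0);
glClear(GL_DEPTH_BUFFER_BIT);
DrawNearFieldObject();

SwapBuffers();
```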


Is this correct? Am I missing any major steps? Or am I completely off base? Any help would be appreciated…

If you are drawing objects with that large a range, chances are that objects are not contiguous. You can split your objects up into chunks with non-overlapping Z values with simple bounding box checks, and then assign each chunk its own near and far planes, and its own slice of the depth buffer (using glDepthRange).

You can do similar tricks to reduce z-fighting if you have objects over the entire range, but each object that crosses a boundary must be rendered in both ranges that it intersects.
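In sketch form (helper names and the chunk bookkeeping are illustrative, with chunks sorted nearest-to-farthest):

```
for (i = 0; i < numChunks; i++)
{
    // Give chunk i its own slice of the depth buffer...
    glDepthRange(i / (float)numChunks, (i + 1) / (float)numChunks);
    // ...and its own near/far planes, tight around its bounding box.
    SetFrustum(chunk[i].nearPlane, chunk[i].farPlane);
    DrawChunk(i);
}
glDepthRange(0.0, 1.0);   // restore the default
```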

I don’t think multipass rendering in the traditional sense of the term would help you. Traditionally, multipass rendering is when you do multiple passes over a single triangle to do more advanced shading. Z-fighting can be a problem when you do this if you don’t use the exact same vertices for each pass, or if you mix fixed-function with programmable vertex pipelines on some GeForce configurations.

Thank you for the reply!

You are correct, the objects are far from contiguous – it is all empty space between them.

Please forgive my ignorance, but would you happen to have a short pseudocode example of how I would implement this? I have never used glDepthRange() before. Based on what you said, I am thinking it would look something like this:

// Draw far-field object //
glDepthRange(0.5, 1.0);   // far half of the depth buffer
SetFarClippingPlanes();
MoveToEarthPosition();    // VERY far away
DrawEarth();

// Draw near-field object //
glDepthRange(0.0, 0.5);   // near half of the depth buffer
SetNearClippingPlanes();
MoveToSatPosition();      // pretty close
DrawSat();


I have no clue if this is what I need to do. Please Help!

Another method, that I have used in the past, is to set your far plane at a fixed value (say 10^6 or whatever) and scale any object whose centre is past the far plane. This way you don’t have to fiddle with depth values etc. and you can maintain a decent level of accuracy in your depth buffer.

Which leads to another question, what depth are you requesting for your depth buffer? What hardware are you using?

I have already tried the scaling approach, and it is my interim fix if I can’t get this to work. For the most part it looks OK, but there is a slight difference in rendering. (For example, a 1 m object viewed from a distance of 30 m looks the same as a 1 km object viewed from 30 km, so I shrink the size of the Earth and the distance to the Earth by a factor of 1000 – the Earth looks about the same, but I can notice some small differences in the relative position of the satellite against the backdrop of the Earth’s image.)

Unfortunately, the scaling approach doesn’t work if I want to draw, say the Space Shuttle approaching the ISS – if both objects are 1 km in size instead of the correct sizes, the relative spacing between those two objects (say a separation distance of 100m) would be completely swamped by the “scaled” objects - they would look like they ran into each other even though they are still separated in actuality. In addition, orbital motion is dependent on actual, not scaled distances, and messing with these distances means my physics engine doesn’t animate realistically. The goal is to let folks “fly the shuttle” using the actual physics involved so they can see how hard it really is to dock with the ISS, and maybe even make a game out of it.

So the depth I am requesting ranges from 6.5 million to about 45 million for the far-field objects (i.e. the texture-mapped planets), and the near-field objects are as close as 1 unit from the camera. There is nothing in between, really. This is why splitting the depth buffer into two separate renders looks like it will work, if I can get a simple example of how to actually implement rendering into two separate depth buffer ranges.

As far as hardware, I am using an NVIDIA GeForce 2 with 32 MB VRAM. But I would like this to run on a wide variety of hardware, so I need to come up with an approach that will run on the majority of hardware currently deployed today.

Not sure what differences you would be seeing when you scale your objects.

Remember, you are only scaling your objects for display if they are outside your far plane (i.e. at display time), not shrinking everything to make it fit inside a smaller frustum. The objects themselves remain at their normal sizes unless they are outside the far plane. (I don’t understand why you would have the Space Shuttle at 1 km in size? Objects appear smaller than they actually are, not bigger.)

I have attached a snippet of code where I use this method below (I wrote this more than 12 months ago, so don’t expect me to explain it too much - probably should have put comments in there…)

The important bits (that I sort of remember) are that my far plane is set at 10,000.0f and I am limiting the objects to 9,000.0f (i.e. if they are outside of 9,000, I scale them and draw the scaled object at 9,000.0f from the viewpoint).

The temporary scale of the object is equal to the actual scale of the object * 9,000.0f / the actual distance to the object. The new position of the object is equal to the actual position of the object * 9,000.0f / the actual distance to the object (i.e. the normalized vector to the object from the viewpoint, times 9,000.0f).

My planetoids were sized from 50,000.0f to 100,000.0f and from memory it worked very well, even though the smallest planetoid was bigger than my frustum. You will need to look into object sorting, etc., to ensure your near objects don’t accidentally disappear into your planetoids.

And remember, most planetary bodies will be nothing more than a single pixel when you are in space and close to Earth, and to make them visible at all you will need to draw them as a single pixel (i.e. rasterization of very small polygons relies on the polygon crossing the centre of a pixel).

void rgSolarSystem::Draw(void)
{
    int nCount;
    int nCopy;
    int nTop = 0;
    float fTemp;
    float fDist;
    static float fZ = -100.0f;

    rgfMenuFont.nHeight = rgfMenuFont.nWidth = gaVariables[GL_SCREENWIDTH].nValue / 60.0f;
    rgfMenuFont.nSpacing = -rgfMenuFont.nHeight / 3.0f;

    glTranslatef(500.0f, 200.0f, fZ);
    if (keys[VK_ADD])
        fZ += 50.0f;
    if (keys[VK_SUBTRACT])
        fZ -= 50.0f;
    if (keys[VK_UP])
        fZ += 5000.0f;
    if (keys[VK_DOWN])
        fZ -= 5000.0f;
    if (gaVariables[GL_LIGHTS].bValue)
        glBindTexture(GL_TEXTURE_2D, gnSolTextureID);

    for (nCount = 0; nCount < 4; nCount++)
    {
        for (nCopy = 0; nCopy < pPlanetoids[nCount]->nCopies; nCopy++)
        {
            // Get the current display matrix.
            glGetFloatv(GL_MODELVIEW_MATRIX, &(matTemp.element[0][0]));

            memset(&vPos, 0, sizeof(VERTEX));
            MatrixMultiply(&matTemp, &vPos, &vPos);
            fDist = Modulus(&vPos);
            if (fDist > 9000.0f)
            {
                // Beyond the limit: shrink the object and pull it in to
                // 9,000 units along the same view ray.
                fTemp = (pPlanetoids[nCount]->pCopies[nCopy].fScale * 9000.0f) / fDist;
                vPos.x = (9000.0f * vPos.x) / fDist;
                vPos.y = (9000.0f * vPos.y) / fDist;
                vPos.z = (9000.0f * vPos.z) / fDist;
                glTranslatef(vPos.x, vPos.y, vPos.z);
            }
            else
            {
                fTemp = pPlanetoids[nCount]->pCopies[nCopy].fScale; // / 100000.0f;
            }
            glBindTexture(GL_TEXTURE_2D, pPlanetoids[nCount]->nTextureID);
            if (fTemp < 1.5f)
            {
                // Too small to rasterize as a sphere: draw a single point.
                glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
                glNormal3f(0.0f, 0.0f, 1.0f);
                glVertex3f(0.0f, 0.0f, 0.0f);
            }
            else
            {
                if (gaVariables[GL_LIGHTS].bValue)
                    glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
                gluSphere(gpTesselation, fTemp, 36, 18);
            }
            nTop += rgfMenuFont.nHeight;
        }
    }
} // End of rgSolarSystem::Draw()


Thanks for the response! It sounds like I am already doing close to the same thing you are suggesting, but there appear to be some slight differences in our approach that I will try to implement.

The scaling I have been using looks great for things far away, but I am noticing some inconsistencies with the scaling approach when I am in low-Earth orbit (where the satellite is small compared to the backdrop of the Earth). If I set the viewpoint such that the satellite lies on the horizon of the Earth in the “true” rendering (with artifacts), then when I apply scaling, the center of the satellite no longer lies on the horizon, but slightly above it. A minor issue, true, but I’m a purist.

Thanks for your example! I’ll try it out in the next couple of days and let you know how it worked out.


That is indeed pretty much what your code should look like. I’ve never actually done this since I’ve never done space stuff, but I think Derek Smart does this in his Battlecruiser games. If you look into the algebra of the projection matrices and how depth values get generated, you can pretty much calculate the ideal ranges to use, but arbitrary ranges should work just as well for most cases if you don’t want to mess with the math.

glDepthRange is really, really easy to use. Normally, it takes whatever depth value you get from transforming your geometry and directly plops it in the depth buffer. With depth range, it just rescales so that 0 becomes whatever you set as near range and 1 becomes whatever you set as far range. This means that you are guaranteed proper sorting between objects, and you don’t have that huge void using up space in your depth buffer. In short, you get the best of both worlds.

I’m a bit leery of scaling objects, since it seems like you could get really odd interference between objects that overlap each other unless you are really careful. Using separate ranges avoids this problem, and scales more nicely to an increased number of objects.

Originally posted by Namwob:
satellite no longer lies on the horizon, but slightly above it.

Just call it space warping.

The important thing is that when you get the scaling spot on (that can be REAL fun…) you don’t notice the difference between when an object is being scaled and when it isn’t. But as Coriolis said you may need to play with depth ranges to fix sorting issues.

rgpc, Coriolis,

Thanks for both your suggestions. I’m probably going to try Coriolis’ suggestion first with the two separate depth ranges (since he confirmed the pseudocode I wrote looks pretty much correct), and if that doesn’t work I’ll try the scaling approach rgpc had (I’m kind of worried that it will be a lot of work getting the scaling right, and I’m kind of lazy).

I probably won’t be able to work on this until the weekend (since this is my “hobby” not my job), but I’ll let you know in this forum how it turned out (plus I still might need help) in about a week.

Thanks again!

I tried Coriolis’ suggestion, and it worked! I got rid of the z-fighting problem!

However, there is a side effect. I also draw the orbital paths of the objects. When I used just the one depth range, the orbital path was hidden when it went behind the Earth. Now, it’s like depth testing is turned off – the portion of the orbital path that should be hidden is no longer hidden (it is drawn with a GL_LINE_LOOP as opposed to polygons).

Here is the general sequence of events I am using:

if (distance > 1.0E6)


I’m guessing that if I drew the orbit before I set the clipping planes back to near, this might solve the problem (because the Equator and Prime Meridian lines are hidden properly).

However, it’s not that simple, because I have multiple viewpoints from which to draw the orbit. For example, I need to draw the satellite’s orbital path when my view origin is the satellite, when the origin is the Earth, and when the origin is the Sun, depending on the user’s chosen viewpoint.

I use this actual sequence of events:


The code to change the depth range is in the DrawBody() function (i.e. if the body being drawn, like the Earth, is more than 1 million units from the origin, I draw it using far-field clipping planes).
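In outline (helper names and the exact range values are placeholders):

```
DrawBody(body)
{
    if (DistanceTo(body) > 1.0e6) {
        SetFarClippingPlanes();
        glDepthRange(0.5, 1.0);   // far-field slice
    } else {
        SetNearClippingPlanes();
        glDepthRange(0.0, 0.5);   // near-field slice
    }
    RenderBody(body);
    // near-field planes and depth range are restored before returning
}

// ...later, with the near-field planes active again:
DrawOrbitPath(satellite);   // no longer sorts against the far-field Earth
```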

It seems that because I am resetting the depth range after drawing the Earth, when I draw the satellite’s orbit later, it doesn’t check whether the Earth hides part of the orbital path. I have tried setting the depth range in the same way that I do when drawing the body, but this either results in the Earth hiding all of the orbit, or none of the orbit, depending on whether I set it to the near or far clipping ranges. In other words, it seems like the depth buffer is cleared when I change the clipping planes.

If I could somehow save the far depth buffer and reload THE SAME ONE for drawing the orbit later, this might solve the problem.

Does anyone have any ideas on how to do this?