Z-buffer workaround

Not sure. Whatever they are doing in hardware is nothing like what I wrote ages ago in software.

Between the two I would have figured Z would be treated the same.

The result of the vertex shader or the fixed function modelview-projection transformation is a 4D homogeneous position in clip space.

The hardware then takes three vertices for a triangle and clips it so that X, Y, and Z are between -W and +W. After that you divide X, Y, and Z by W to get normalized device coordinates, each in the range [-1, 1]. Then the viewport transformation maps Z from [-1, 1] to [n, f] where n and f are the parameters to glDepthRange, typically [0, 1].
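In plain terms, those per-vertex steps look something like this (a minimal sketch in plain Python, not actual driver code; the input values are illustrative):

```python
# Sketch of the steps above: perspective divide (clip space -> NDC),
# then the depth-range transform (NDC z -> window depth).

def clip_to_window_depth(clip, depth_range=(0.0, 1.0)):
    """clip: (x, y, z, w) in clip space. Returns window-space depth."""
    x, y, z, w = clip
    # Perspective divide: after clipping, z/w lies in [-1, 1].
    ndc_z = z / w
    # glDepthRange mapping: [-1, 1] -> [n, f], typically [0, 1].
    n, f = depth_range
    return n + (ndc_z + 1.0) * 0.5 * (f - n)

# A vertex with clip z = 0, w = 2 lands exactly mid-range:
print(clip_to_window_depth((0.0, 0.0, 0.0, 2.0)))  # 0.5
```

With the default depth range, clip z = -w maps to 0 and clip z = +w maps to 1, which is why everything surviving the clip ends up in [0, 1].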

That’s exactly my problem. It looks like the division by w is not happening for the case I explained in my previous post, otherwise I would be seeing much smaller values (unless I draw at unit distance or less away from the camera, of course).

As I said above, I have no idea what they do in hardware. I never had to divide by W for the screen transform in my manual rasterizer. I simply transformed the objects to be in line with the viewport, in other words I moved the objects, not the viewport. Then I divided x and y by an angular value, which was determined by what width of view angle you chose.
So for a perspective view I could have a value such as .5 dividing all the x and y, but if I changed it to 1 it would give me ortho. The only thing z was used for at all was determining depth in the z-buffer.

So for me no clue at all on that.

I did manage to get the other thing working. It works fine until I add a secondary sphere to the picture and reduce the terrain level to a realistic height; then I get z-fighting between the two spheres. The primary sphere is easily fixed by dividing in half, but even stepping every 4000, which takes minutes rather than seconds to render a frame, does not fix the issue, so I will need to figure out a better way, such as faking the perspective. So far early tests indicate that will work; it just requires a lot more coding.

Then you haven’t used perspective projection at all; you’ve used orthographic projection with a different scale factor.

For perspective projection you must divide by z (or w if you use homogeneous coordinates).

Sorry, you’re right, it was also divided by z; not by w is what I meant.
The .5 was for the width of angle you wanted to bring in. The Z in the ortho was only used to get proper depth.
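The projection being described sounds roughly like this (a hedged sketch; the function and parameter names are mine, not from the actual rasterizer):

```python
# Sketch of the software projection described above: x and y are divided
# by z times a field-of-view factor (the ".5") for perspective; skip the
# z divide and you get an orthographic projection where z only feeds the
# depth buffer. All names and numbers here are illustrative.

def project_perspective(x, y, z, fov_scale=0.5):
    # Larger z (farther away) -> smaller screen coordinates.
    return (x / (z * fov_scale), y / (z * fov_scale))

def project_ortho(x, y, z, scale=1.0):
    # z is ignored for the screen position; it only feeds the z-buffer.
    return (x * scale, y * scale)

print(project_perspective(1.0, 1.0, 2.0))  # (1.0, 1.0)
```

Without the z divide, the "angle" value is just a uniform scale factor, which is why the result behaves like an orthographic projection.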

Wanted to say thanks again to those who helped out and bore the brunt of me being an #$$. Much appreciated and again sorry.

I did manage to get the concept working where it draws the back first, then the near part of the object. This works very well on a single object, though to an extent it is still very limited.
The more overlapping surfaces or layers you have, the less well it works, and the closer together those layers run, the worse it works as well.

Both of which mean subdividing the object from back to front into even thinner slices. The problem with that is something planet-sized in scale would take far too many calls to keep a decent frame rate.
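One way the slicing could be driven is with a per-pass near/far pair; spacing the slice boundaries logarithmically (a technique sometimes called depth partitioning — this is my suggestion, not what the poster implemented) keeps the near slices thin where precision matters without needing thousands of slices for planet-scale distances:

```python
# Hypothetical sketch: split a huge depth range into per-pass near/far
# pairs, returned back to front. Logarithmic spacing means each slice
# covers the same *ratio* of distances, so only a handful of passes are
# needed even for planet-scale far planes.

def depth_slices(near, far, count):
    ratio = far / near
    edges = [near * ratio ** (i / count) for i in range(count + 1)]
    # Reverse so the farthest slice is drawn first.
    return [(edges[i], edges[i + 1]) for i in range(count)][::-1]

for n, f in depth_slices(0.1, 1.0e7, 4):
    print(f"draw pass with near={n:.3g}, far={f:.3g}")
```

Each pass would set its own projection with that slice's near/far and clear only the depth buffer between passes.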

I did learn something from this: changing the perspective is incredibly fast, can even happen within the same frame, and has no flicker effect.

I thought of a couple of possible solutions. Change the perspective on the object over movement distances. The object can easily be scaled so the perspective change is never noticed as it approaches. Only once the object gets very close up will the scale reach full size, at which point the z-fighting for any object will never be noticeable, because of all the near surfaces and the usual culling such as frustum culling and so forth.

This would only require keeping a real position of each object and a scaled position of each object along with scaled size and so on.
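The trick that makes this work is that shrinking an object and its distance by the same factor leaves its angular (on-screen) size unchanged, so the swap is invisible. A quick check of that, with made-up planet-scale numbers:

```python
import math

# Sketch of the scaled-position idea above: moving an object to a
# fraction k of its real distance and shrinking its size by the same k
# keeps its angular size on screen identical. Numbers are illustrative.

def angular_size(radius, distance):
    return 2.0 * math.atan(radius / distance)

real_radius, real_distance = 6.371e6, 4.0e8   # hypothetical planet
k = 1.0e-3                                     # bring it inside the depth range
scaled = angular_size(real_radius * k, real_distance * k)
original = angular_size(real_radius, real_distance)
assert abs(scaled - original) < 1e-12          # same apparent size
```

So the bookkeeping really is just the real position plus a scaled position and scaled size per object, as described.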

Final depth values will always be [0…1], but your actual near and far planes are probably not where you expect them to be from your calls, nor will your depth distribution be linear just because you copied the Z row from glOrtho. glOrtho being linear has more to do with the W row being [0 0 0 1].

Thanks for the replies, guys.

Actually I wasn’t trying to linearize the depth, I already solved that years ago by using


uniform float near;
uniform float far;

void main()
{
	// gl_FragCoord.w is 1/w_clip; for a perspective projection, w_clip
	// is the eye-space distance, so 1.0/gl_FragCoord.w recovers that
	// distance. (d - near) / (far - near) then gives linear depth.
	gl_FragDepth = ((1.0 / gl_FragCoord.w) - near) / (far - near);
}

I know this can be optimized further and disables Early-Z but I was more concerned about not having to assign a varying to it.
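Why that shader gives linear depth can be checked outside GLSL (a plain-Python sketch of the same arithmetic; near/far values are illustrative):

```python
# For a standard perspective projection, gl_FragCoord.w holds 1/w_clip,
# and w_clip equals the eye-space distance to the fragment. So
# 1.0/gl_FragCoord.w recovers that distance directly, and the shader's
# (d - near) / (far - near) is linear in it.

near, far = 1.0, 100.0

def shader_depth(eye_distance):
    frag_coord_w = 1.0 / eye_distance      # what gl_FragCoord.w holds
    return ((1.0 / frag_coord_w) - near) / (far - near)

print(shader_depth(near))                  # 0.0
print(shader_depth(far))                   # ~1.0
print(shader_depth((near + far) / 2.0))    # ~0.5 -- halfway in distance
```

The midpoint of the distance range lands at depth 0.5, which is exactly the linear distribution the hyperbolic default doesn't give you.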

True. I recalculated what the equivalent near and far values would need to be to get the same 3rd row as the glOrtho call. It appears the far value is negative, which might explain the flickering and weird behavior when resizing. However, it appears that the model I’m drawing is still being clipped at the near and far values I specify and not the recalculated near and far values. That’s what confuses me… But since the equivalent far value is negative I’ll just file this issue under undefined behavior :) Thanks for the help.