Translations within one batch

oh. yes. really, they are.

I have to admit I liked the sniper mode in Unreal Tournament. But then, in the level with the two towers floating in space, I blew off so many heads that it gave me nightmares…

and by the way: if you see something like the above through only one of your eyes, you certainly shouldn’t shout that much :wink:

<whisper on> snaaaaiiiiiiipeeeeeeeeeer mooooooooouuuuuuuuud<whisper off>

Eh? You get the same problems with normal FOV.

Take a tree whose foliage is made of more than 1000 triangles.
Draw it at z-near * 10000 (z-near = 10 cm, tree at 1000 m) -> totally ugly, horrible z-fighting!

Not to mention intersecting or layered terrain meshes.

It IS a HUGE limitation if you want to create open scenes with a long view distance.
And that is definitely a huge market, because many people are simply sick of shoebox shooter games.

Originally posted by knackered:
those screenshots have one thing in common, can you spot it children?
there are tricks to avoid these problems. For example, if you’re going to render with a very narrow field of view, you should apply a uniform scale to the scene to bring distant objects into a higher-precision part of the depth range (seeing as any z-fighting on foreground objects isn’t going to be noticed at that point).

It wouldn’t solve the problem as described anyway. But isn’t it the case that w-buffering isn’t supported in OpenGL?

Does OpenGL specify the depth buffering implementation? In any case, I don’t think any modern hardware uses w-buffering, and I think it was removed from D3D (shooting from the hip…could be wrong).

So z-buffering has this problem where most of the precision is piled up by the near plane, depending on how close it is to the eye point. You really don’t effectively address this problem by throwing more bits at it. What you really want is a floating-point depth representation.

But doing the naive thing (mapping 0.0 to zNear) is actually really bad, because floating point values have most of their precision near 0.0, compounding the problem. You really want to map zNear to 1.0 and zFar to 0.0 (effectively, a 1-z buffer). If you do this, 16-bit floats could work well. You’d probably want to dispense with the sign bit and tweak the representation in other ways (e.g. exponent bias).

-Won

PS It should be self-evident (without even considering the overused HDR effect du jour) that 8 bits per color channel is pretty paltry. For one, it prevents you from using a linear representation for color.

You really want to map zNear to 1.0 and zFar to 0.0 (effectively, a 1-z buffer).
Know what’s worse than seeing z-fighting at a distance? Seeing it up close.

I prefer it at a distance.

In any case, this can be “solved” by simply dividing the world into two passes: one for the stuff far away, and one for the stuff closer. Obviously, a z-buffer clear would happen between them. And you’d need a clipping plane to guarantee that nothing unpleasant happens between the two images. And you couldn’t use the whole range of the forward plane.

Try to draw an object with layered faces (like for trees) at znear*2000 and you definitely get very noticeable z-fighting.
with 24bit depth
near = 1 meter
far = 1,000,000 meters

Z = 2000 == ~0.2 m resolution. I doubt that at 2 km distance you can notice 20 cm. True billboarded stuff mightn’t look 100% OK, but then again billboards are a hack anyway, so what do you expect?

Originally posted by holdeWaldfee:
For example in Battlefield2:

I’m curious - what hardware/drivers are you seeing these issues on? I play BF2 quite a bit - mostly as a sniper - and I’ve never noticed the z-fighting before. Maybe it’s the detail level I have (I currently use a 5900, so I have the detail turned down - but I did have it turned up on the weekend and didn’t notice any issues).

That being said, the great thing about BF2 is the gameplay (I enjoy it without the eye candy). It has significant numbers of animated objects and performs well even on older hardware. I somehow doubt that doing translations within a single batch would be too much of an issue for the BF2 developers.

However, I would think that you could do transformations in the same way that they are done with skeletal animations (through vertex programs). I haven’t ever done this but I would think that it would do what you want.

Originally posted by Korval:
In any case, this can be “solved” by simply dividing the world into two passes: one for the stuff far away, and one for the stuff closer. Obviously, a z-buffer clear would happen between them. And you’d need a clipping plane to guarantee that nothing unpleasant happens between the two images. And you couldn’t use the whole range of the forward plane.
I often heard this suggestion in forums and I even tried this out in our project.

But what I get is very bad z-fighting at the seam of the two regions.
Did anyone get good results with this?

PS.: No problems with cockpits and stuff, but terrain scenes really seem to be problematic.

Originally posted by zed:
[b]with 24bit depth
near = 1 meter
far = 1,000,000 meters

Z = 2000 == ~0.2 m resolution. I doubt that at 2 km distance you can notice 20 cm. True billboarded stuff mightn’t look 100% OK, but then again billboards are a hack anyway, so what do you expect? [/b]
Z-near can’t be 1 meter. That would cause very bad clipping when the camera gets close to objects. 10-20 cm is the maximum.

Try to make a scene where you “add” water to the terrain, so that there is ground below the water surface.
Now fly above this at a distance of 2 km -> horrible z-fighting between the water and terrain meshes.

Of course, you could always apply horrible LOD hacks to reduce this problem, but then you would have to do a lot of them, because there are MANY different cases where this happens.

Originally posted by rgpc:
I’m curious - what hardware/drivers are you seeing these issues on?
Radeon X800 something. It is driver independent.

I somehow doubt that doing translations within a single batch would be too much of an issue for the BF2 developers.
One tank in BF2 has around 50 translation matrices.
So that means 50 additional batches per instance (!) just because of the movable parts.

And we want much more detailed vehicles in the future.

However, I would think that you could do transformations in the same way that they are done with skeletal animations (through vertex programs). I haven’t ever done this but I would think that it would do what you want.
Yes, that’s matrix palette skinning.
I find it quite inefficient to have to specify a matrix for each vertex.
It would be really good to have straightforward API functionality to handle this problem.

Originally posted by holdeWaldfee:
[quote]Originally posted by knackered:
those screenshots have one thing in common, can you spot it children?
there are tricks to avoid these problems. For example, if you’re going to render with a very narrow field of view, you should apply a uniform scale to the scene to bring distant objects into a higher-precision part of the depth range (seeing as any z-fighting on foreground objects isn’t going to be noticed at that point).

It wouldn’t solve the problem as described anyway. But isn’t it the case that w-buffering isn’t supported in OpenGL?
[/QUOTE]well the thing is, Mr. HoldeWaldfee, that it does solve the problem as described - otherwise I would not have wasted my time suggesting it.

Originally posted by knackered:
well the thing is, Mr. HoldeWaldfee, that it does solve the problem as described - otherwise I would not have wasted my time suggesting it.
The problem happens with a normal FOV too, so this wouldn’t solve it. It might reduce the effect when you have a narrow FOV.

And I thought w-buffering isn’t supported very well under OGL?

w-buffering isn’t supported at all under OpenGL.
The best you can do is squash distant objects into higher-precision parts of the z-buffer. Most outdoor scenes don’t have very intricate models (such as a teapot) in them, so you can afford to lose some foreground precision.
You basically want to keep your units as metres, but get better precision.

What do you mean by this?
How could I move distant objects closer to z-near?
By doing custom projection stuff?

something like:-

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glScalef(0.1f, 0.1f, 0.1f);     // post-multiply the projection with a uniform scale
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(0.1f, 0.1f, 0.1f);     // scale eye space down by the same factor
glMultMatrixf(m_cameraTransform);
drawScene();
glMatrixMode(GL_PROJECTION);    // restore both matrix stacks
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();

everything gets smaller, and nearer, but your camera also moves slower.

Hmm…

I think this would cause problems if you have a scene with a horizon since you can’t apply this to the terrain.
But I am sure that this would help for a game in space.

not with you - why would it be different for terrain?
The far clip plane gets scaled inwards too, so everything is still clipped in the same place.
This works fine in my flight simulators. The only problem I came across was z-fighting in the cockpit, where I have co-planar polys all over the shop… the simple fix for that was to draw the cockpit in a second pass, reversing the depth test.
God in heaven, why don’t you just try it in your application…I take it you’ve got an application to try it in?

I will test this tomorrow.

I can’t figure out how this could help.
What is the point if everything is scaled down?
Wouldn’t the relation between znear and zfar be the same?