Clipping plane and mirror

When you guys do mirrors, you create a clipping plane so that the mirrored objects don’t come out of the mirror, right? When you create a clipping plane, is it relative to the camera, or is it in world space? I would assume it’s relative to the camera, but I thought I’d ask anyway.

See, I’ve been getting the hang of planes and normals, so much so that I’m finding it easy to project points onto planes and mirror things. Next I’m going to write a function that determines the distance from a point to a line… I’m making a terrain editor, so I need a way to select points that are within a certain distance.

Clipping planes aren’t very efficient on current nVIDIA hardware, unfortunately. Thus, you’re probably better off setting up an object-space texgen, mapping to a texture with one solid white texel and one transparent texel, in MODULATE mode. (Gotta use multitexture for this.) Then alpha test.

If your texture is two pixels wide, you set up the dot product of your texgen for S such that the mirror plane falls exactly at the midpoint between the solid and the transparent texel. Set the texture to GL_NEAREST filter mode and GL_CLAMP_TO_EDGE wrapping mode, and presto! Instant 100% compatible, efficient, user-defined object-space clipping plane.
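As a sketch of "set up the dot product such that the mirror plane falls at the midpoint": given the object-space plane you want to clip against, you can build the (a,b,c,d) vector to hand to texgen like this. The helper name and the scale parameter are mine; pick the scale so your geometry stays roughly inside texcoord [0,1].

```c
/* Build the object-linear texgen plane (a,b,c,d) for GL_S so that the
   generated coordinate is exactly 0.5 on the clipping plane. 'normal'
   is the unit plane normal in object space, 'point' lies on the plane,
   'scale' controls how fast s moves away from 0.5. */
void clip_texgen_plane(const double normal[3], const double point[3],
                       double scale, double plane_out[4])
{
    plane_out[0] = normal[0] * scale;
    plane_out[1] = normal[1] * scale;
    plane_out[2] = normal[2] * scale;
    /* the constant term shifts the plane itself onto s = 0.5 */
    plane_out[3] = 0.5 - scale * (normal[0] * point[0] +
                                  normal[1] * point[1] +
                                  normal[2] * point[2]);
}
```

The result would go to `glTexGendv(GL_S, GL_OBJECT_PLANE, plane_out)` with GL_OBJECT_LINEAR texgen enabled.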

While you can define texgen in eye linear space, that may be less efficient, if the driver would otherwise bypass the eye coordinate generation and go directly to perspective projected space.

I’m trying to imagine what you’re talking about, jwatte, but I can’t seem to see it.

What you’re talking about confuses me in the same way that 1D textures confuse me. I just can’t see the relationship between the texture and the object.

Originally posted by jwatte:
Clipping planes aren’t very efficient on current nVIDIA hardware

I have personally never used user-defined clip planes, but the “old” OpenGL performance FAQ (nvidia says they are updating it on their site) says that for every unused texture unit you have, you can get two hardware-accelerated clip planes. Is this still slow/inefficient?

you can get 2 hw accelerated clip planes. Is this still slow/inefficient?

Yeah, it’s awfully slow; I guess it falls back to a software path, so it just becomes useless with VAR, since the driver has to read back from video memory.

Y.

The way they would do clipping planes with unused texture units would probably be exactly the way I described. However, there’s been some discussion a while back about this not actually happening in many cases. And, even so, that’s only for nVIDIA. Coding it yourself means it’ll work fast, for sure, on any hardware with that texture unit.

Whatever: do you know how object linear texgen works? Basically, you shoot a ray in some direction through the object, and the generated texture coordinate is the fragment position dotted with this ray, plus a constant (the fourth element of the plane you specify for texgen).

Now, set this dot product up such that it generates the value 0.5 exactly where you want the clipping plane to be. It generates a smaller value where you want to keep geometry, and a larger value where you want to clip geometry. Make your 2x1 texture have an opaque, white texel on the left (texcoord range 0 - 0.5) and a transparent texel on the right (texcoord range 0.5 - 1). Set clamping to CLAMP_TO_EDGE. Make the texture application function MODULATE. Turn on alpha testing.

Presto! Instant portable, efficient clipping plane. Because textures can be 2D, you can use a 2x2 texture with a single opaque texel and three transparent texels, and set up both S and T texgen for two different planes.

You will note that the formulation of the parameters you give to object linear texgen is identical to that of a plane. If you give the vector (a,b,c,d) to texgen, here’s the output texture coordinate:

out_coord = x * a + y * b + z * c + 1 * d;
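To make the whole chain concrete, here is a CPU-side simulation of what texgen plus the 2x1 texture plus the alpha test end up doing to each fragment. This is just an illustration (the function name and the GL_GREATER threshold are my choices); on the hardware this all happens in the texture unit and alpha test stage.

```c
/* Simulate: object-linear texgen -> GL_CLAMP_TO_EDGE -> GL_NEAREST
   lookup into a 2x1 texture (left texel opaque, right transparent)
   -> alpha test. 'plane' is the (a,b,c,d) given to texgen. Returns 1
   if the fragment survives, 0 if it is clipped. */
int fragment_survives(const double plane[4], double x, double y, double z)
{
    /* out_coord = x*a + y*b + z*c + 1*d (the texgen dot product) */
    double s = x * plane[0] + y * plane[1] + z * plane[2] + plane[3];

    /* GL_CLAMP_TO_EDGE keeps the coordinate inside [0,1] */
    if (s < 0.0) s = 0.0;
    if (s > 1.0) s = 1.0;

    /* GL_NEAREST on a 2x1 texture: s < 0.5 hits the left (opaque,
       alpha 1) texel, s >= 0.5 the right (transparent, alpha 0) one */
    int alpha = (s < 0.5) ? 1 : 0;

    /* alpha test, e.g. glAlphaFunc(GL_GREATER, 0.0) */
    return alpha > 0;
}
```

Note how clamping makes everything far beyond the plane stay clipped, and everything far on the keep side stay visible, no matter how large the dot product gets.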

Originally posted by jwatte:
The way they would do clipping planes with unused texture units would probably be exactly the way I described. However, there’s been some discussion a while back about this not actually happening in many cases. And, even so, that’s only for nVIDIA. Coding it yourself means it’ll work fast, for sure, on any hardware with that texture unit.

Ahh, so that is how they do it. I never understood what this kind of clipping had to do with texture units. Thanx jwatte.

I guess you could use EYE_LINEAR to define clip planes in eye space, no?

The only problem I see with this technique, though, besides occupying a texture unit, is that it might consume some fill rate. Primitives that pass frustum clipping still need to be rasterized to be tested against these user-defined planes, or am I mistaken? One might argue, though, that if the user-defined clip plane is placed within the frustum, those primitives are rasterized anyway, so the clipping is almost “free”.

Maybe it would be a good idea to combine this alpha approach with some simple trivial-rejection tests.

If you use a 2x1 or 2x2 texture, I really hope it sticks in texture cache for the duration of your object. It may introduce some slowdown in fragment processing, though, if the internal hardware actually multi-passes to do multi-texturing.

You should never throw geometry that you know to be outside the frustum at the hardware, if you care about performance. If you do, the guard band clipping will take care of it before it gets to fragment coloring, anyway, so this technique will only “see” fragments that are within the frustum.

I have to insist that the best method is to modify the projection matrix. Last time I suggested it (it’s in the archive) the poster said it didn’t work properly; he was arrogant in his reply and I didn’t bother arguing.
I’ve been using it extensively for months and have tested it very thoroughly. Unless you need particularly high-precision depth tests at grazing angles, it’s absolutely perfect.
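For readers who haven’t seen it: the projection-matrix method usually means rewriting the matrix so the near plane coincides with the user clip plane (the oblique near-plane technique, as described by Eric Lengyel). A sketch for a standard column-major GL projection matrix, assuming the clip plane is given in eye space with the camera on its negative side; the depth-precision caveat at grazing angles mentioned above comes from the depth range getting skewed by this rewrite:

```c
static double sgn(double a) { return (double)((a > 0.0) - (a < 0.0)); }

/* Rewrite the third row of a column-major 4x4 projection matrix m so
   that its near plane becomes the eye-space plane 'clip' = (a,b,c,d).
   Geometry behind the plane is then removed by ordinary hardware
   near-plane clipping: no texture unit, no extra fill-rate cost. */
void oblique_near_plane(double m[16], const double clip[4])
{
    double q[4], dot, s;

    /* frustum corner opposite the clip plane, back-projected to eye space */
    q[0] = (sgn(clip[0]) + m[8]) / m[0];
    q[1] = (sgn(clip[1]) + m[9]) / m[5];
    q[2] = -1.0;
    q[3] = (1.0 + m[10]) / m[14];

    dot = clip[0] * q[0] + clip[1] * q[1] + clip[2] * q[2] + clip[3] * q[3];
    s = 2.0 / dot;

    /* scaled clip plane replaces the z row (minus the w row, folded in) */
    m[2]  = clip[0] * s;
    m[6]  = clip[1] * s;
    m[10] = clip[2] * s + 1.0;
    m[14] = clip[3] * s;
}
```

After this, any eye-space point lying exactly on the clip plane lands on the near plane (clip-space z = -w), which is what makes the hardware do the clipping for free.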