User Defined Clipping Surfaces

There are 6 standard clipping planes, i.e. the view frustum. GL allows additional user-defined clipping planes to be specified, which enables things like cross-sections.

It would be nice to generalize the planes into bounded and unbounded surfaces.

For example, one could have a clipping quad that functions as a clipping plane only where it lies on the line of sight to the viewer, providing a cross-sectional view only where the clip-quad would be visible on screen. With current GL this takes 5 of the additional clip planes, and 2 passes.

One could generalize the above and have a clipping Bezier surface. The scene is clipped, where applicable, to the shape of the Bezier surface. Given, say, a rendering of a working car engine, one could define a parabolic surface and use the Bezier-clip to get a clip surface that allows a broader depth-of-view where the penetration is deeper. Current GL can only give a constant breadth of view, invariant of penetration.

As another example, one could have an infinite sinusoidal clipping surface. I don’t have a specific use for this, but one could imagine wanting evenly spaced clipped regions along the side of a scene.

One may want to define a spherical clip-surface used in place of the yon-plane (the far Z clip plane), so that the depth of view is the same at every location on the screen. This would prevent some of the psycho-visual errors associated with perspective projections (part of why some people get sick playing games like Quake or flight sims).
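A toy model of the difference, in Python for clarity (the FAR value and the test point are made up for illustration): a planar yon test clips on eye-space depth alone, while a spherical yon test clips on radial distance from the eye, so the view depth is uniform across the screen.

```python
import math

FAR = 100.0  # hypothetical far-clip distance

def plane_clipped(p):
    """Planar yon test: clip when the eye-space depth exceeds FAR."""
    return -p[2] > FAR  # the eye looks down -z in GL conventions

def sphere_clipped(p):
    """Spherical yon test: clip when the radial distance exceeds FAR."""
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) > FAR

# A point near the corner of the view: depth is only 90,
# but its radial distance from the eye is about 115.
corner = (60.0, 40.0, -90.0)
print(plane_clipped(corner))   # False: survives the planar yon test
print(sphere_clipped(corner))  # True: culled by the spherical test
```

The planar test keeps geometry at the screen corners that is farther from the viewer than geometry it culls at the screen center; the radial test treats both the same.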

I’m not sure what you mean with your clip quad example. Isn’t this what the scissor test does?

Fixed function clipping is turning into legacy functionality. You can “clip” to arbitrary volumes by using KIL in fragment programs (use fragment.position).

On lower end hardware there are a few tricks you can pull off with vertex programs, 1D textures and alpha test. There’s also some support for per-fragment clipping in NV20+ (exposed through NV_texture_shader).

The basic idea is to feed an interpolated “distance to clip surface” into the fragment stage, and conditionally discard the fragment based on a >=0 comparison.
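The idea above can be mocked up in plain Python (all names here are invented for illustration): the vertex stage emits a signed distance to the clip surface, the interpolator blends it barycentrically across the triangle, and the fragment stage discards on the sign, as KIL would.

```python
def interpolate(d0, d1, d2, b):
    """Emulate the interpolator: barycentric blend of per-vertex distances."""
    return b[0]*d0 + b[1]*d1 + b[2]*d2

def fragment_passes(d):
    """Emulate KIL: the fragment survives only when the distance is >= 0."""
    return d >= 0.0

# Per-vertex signed distances to some clip surface (positive = keep).
d = (1.0, -0.5, 0.25)
center = (1/3, 1/3, 1/3)  # barycentric center of the triangle
print(fragment_passes(interpolate(*d, center)))  # True: 0.25 >= 0
```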

For the clip-quad, or other bounded surface, the surface defines a clip region bounded by the surface itself and the view-projection of its perimeter to the hither plane (near clip). It’s like shadow volumes, except the projection plane is the view-plate (screen surface), and the shadow-volume proper represents the clip volume.

With the quad-example, the quad could be positioned arbitrarily in the 3-D view coordinates. It can be skewed, nonrectangular, even non-planar. A scissor-test is rectangular and in screen coordinates.

can “clip” to arbitrary volumes … in fragment programs

The biggest problem with incorporating the clip-space description into the fragment program is that the clip-surface description can grow unboundedly. I for one do not want to write Bezier clip code (or NURBS) inside a fragment program (let alone write it every time I want to use it), when I could pass the geometric description to the API and have it do the clipping.

Suppose my application has 32 clip surfaces of arbitrary configuration (vertex/vertex_array surfaces, Beziers, cut NURBS). The size of the required clip fragment program is proportional to the size of all the constituent geometry, which could easily require 100K instructions. Additionally, the fragment program can’t adapt its shape dynamically, the way a call to glGrid can for the Beziers.

I agree that ff-clipping is a thing of the past. Though I don’t know where the future lies. I was hoping that this would generate ideas to replace that stage of the GL.

Yes, it can get complex. But you need to do such a general thing with programmable hardware. How else would it work?

ATI had a fixed function clipper for a long time (Radeon, Radeon 8500). It works, but it doesn’t perform … and that’s just for six planes.

Whenever possible the clip surface distance computation should be moved to the vertex program, and interpolated. All the fragment program needs to do then is to read this interpolated distance and KIL. The good thing about KIL is that you don’t need to fiddle around with alpha.

This shift in computational load obviously reduces the frequency of clip distance evaluation, but it also enables software fallbacks.

Trivial example:
Spherical clip surface with center at (0,0,0) (in world space) and radius of one.

!!ARBvp1.0
OPTION ARB_position_invariant;
TEMP clip_distance;
# clip_distance.x = |vertex|^2, clip_distance.w = 1/|vertex|
DP3 clip_distance.x, vertex.position, vertex.position;
RSQ clip_distance.w, clip_distance.x;
# texcoord[0] = (1/|v|)*(-|v|^2) + 1 = 1 - |v|: positive inside the sphere
MAD result.texcoord[0], clip_distance.w, -clip_distance.x, 1.0;
END

!!ARBfp1.0
KIL fragment.texcoord[0];
MOV result.color, fragment.color;
END

Extending it to multiple clip shapes may be tricky. You could select a “closest” clip surface in the vertex program and just pass the distance to that single surface to the interpolator. I’m almost convinced it would work.
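A sketch of that selection step in Python (the sphere list and helper names are made up): with several clip spheres whose interiors must all be kept, the vertex program would pass the minimum of the per-surface signed distances to the interpolator, since the most negative surface is the one that kills the fragment. Note this inherits the interpolation caveats discussed below.

```python
import math

def sphere_distance(p, center, radius):
    """Signed distance in the style of the 1 - |v| trick: positive inside."""
    return radius - math.dist(p, center)

def combined_distance(p, spheres):
    """Select the 'closest' (most negative) clip surface: a point is kept
    only if it is inside every sphere, i.e. the minimum distance is >= 0."""
    return min(sphere_distance(p, c, r) for c, r in spheres)

spheres = [((0, 0, 0), 2.0), ((1, 0, 0), 2.0)]
print(combined_distance((0, 0, 0), spheres) >= 0)  # True: inside both
print(combined_distance((3, 0, 0), spheres) >= 0)  # False: outside the first
```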

chemdog, if you think it’s too complicated to implement in a fragment / vertex program, how much more complicated do you think it would be to implement in hardware? Is that really a good use of the transistor budget?

It is an interesting idea, though…

Have to correct something in my previous post (edit never works here, sorry).

This method and anything similar requires a certain amount of tessellation. At least one vertex must be inside every clip volume, preferably more. E.g. in the case of the spherical clipper, errors will occur if only one vertex is inside the sphere. If no vertex is inside the sphere, nothing will be rendered at all.

Oh well …

You cannot do clipping in vertex programs, because a single triangle can become several during clipping. You would be able to do clipping in fragment programs, but fragment.position is in window coordinates, not world space.

You can stretch the processing over both of them to attempt trivial examples (i.e. depriving yourself of a texture coordinate to get a clip coordinate). But as clever as that is, the interpolated value is not accurate to within any tolerance. Triangles that pierce the surface of the sphere are rendered incorrectly, and ones whose three vertices all lie outside the sphere aren’t rendered at all.
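The failure is easy to demonstrate numerically (a made-up triangle against the unit clip sphere): because the true distance 1 - |v| is concave, the linearly interpolated value underestimates it, so a triangle whose vertices are all outside is killed everywhere even where it actually passes through the sphere.

```python
import math

def dist(p):
    """Signed distance to the unit clip sphere: positive inside."""
    return 1.0 - math.hypot(*p)

# A triangle that straddles the sphere: every vertex lies outside,
# but its centroid is well inside.
tri = [(2, 0, 0), (-2, 0.5, 0), (0, -2, 0)]
centroid = tuple(sum(c) / 3 for c in zip(*tri))

true_d = dist(centroid)                       # positive: really inside
interp_d = sum(dist(v) for v in tri) / 3.0    # what the interpolator delivers
print(true_d > 0, interp_d < 0)  # (True, True): the fragment is wrongly killed
```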

need…programmable hardware. How else would it work?

To clarify the following (large) post, I want to define how I am going to use terms.
[ul]
[li]Fixed-Function: The functionality is determined by a fixed set of functions. One may set values and set input, but the way in which the values and input are used is unchanging.[/li]
[li]Programmable: The functionality is determined by a program which is executed. The way in which values and input are used is unrestricted, subject to the expressivity of the language. (This is usually more powerful than fixed-function.)[/li]
[li]Clipping: Surface-based clipping. The spherical clipping example is still a clip-surface, though it does clip everything outside the sphere. A surface that prevents the rendering of view-obstructed geometry (that is, all the geometry between the surface and the hither-plane).[/li]
[li]Volume Clipping: Using the same spherical example, if instead the interior of the sphere were clipped, this would be a volume clip. A volume clip is a region of space that doesn’t allow rendering to occur. This is a more powerful form of clipping.[/li]
[/ul]
By these definitions clipping is fixed-function. Under this definition, I think ff-clipping will be here a long time. I would like to expand the set of available primitives.

OpenGL mandates a specific primitive in its ff-clipper: planes. It mandates supporting 12 of them, 6 being the view frustum. Pair-wise they are the hither-yon, the ford-fare, and the zig-zag (AKA the near-far, left-right, and top-bottom).

In ff-rendering, every point, type of line, type of triangle, and type of quad is rendered using the same vertex transformation pipeline and lighting module. Each is available in a few types: Strips, Lists, and Fans. All of these together, form the set of supported primitives. Every renderable thing is specified in terms of these.

There should be support for planes and triangles in the ff-clipper. Lines and points cannot clip, as they are not 2 dimensional. When projected to the view-plane, they cover zero area and nothing would be clipped. Wide lines (lines with width larger than 0) and large points (points with size greater than 0) could be supported as well.

Every type of surface uses the same inside-outside test. In fact, it is nearly as hard to clip with a plane as with a triangle. Planar clipping can be achieved by testing the sign of “V dot N + D”, where N and D are properties of the plane (normal and offset). Triangle reconstruction occurs by determining the line segment of intersection with the plane, taking the positively valued vertices and joining them with the vertices on the line segment (which have interpolated values for the other vertex properties based on the shading model).

Clipping to a triangle begins identically to planar clipping. During triangle reconstruction, the line segment of intersection with the plane is intersected with the clip-triangle. If no intersection occurs, the data-triangle is passed whole. If the intersection is contained entirely within the triangle, the data-triangle is clipped as with the plane. In the final case, the line segment is intersected with a side of the triangle, and the data-triangle is split into two triangles: one which is passed whole, the other intersected with the plane.
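The planar half of this process can be sketched in a few lines of Python (a Sutherland-Hodgman style clip of my own choosing, not anyone’s actual clipper; the bounded triangle case would additionally intersect against the clip-triangle’s edges, which is not shown). Vertices on the positive side are kept, and a new vertex is interpolated wherever an edge crosses the plane.

```python
def clip_triangle_to_plane(tri, n, d):
    """Clip one triangle against the half-space V.N + D >= 0.
    Returns the surviving polygon (0 to 4 vertices)."""
    def dist(v):
        return sum(a * b for a, b in zip(v, n)) + d
    out = []
    for i in range(3):
        a, b = tri[i], tri[(i + 1) % 3]
        da, db = dist(a), dist(b)
        if da >= 0:
            out.append(a)                      # vertex on the kept side
        if (da >= 0) != (db >= 0):             # edge crosses the plane
            t = da / (da - db)                 # same t interpolates attributes
            out.append(tuple(p + t * (q - p) for p, q in zip(a, b)))
    return out

tri = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
poly = clip_triangle_to_plane(tri, (-1.0, 0.0, 0.0), 1.0)  # keep x <= 1
print(len(poly))  # 4: clipping one corner off turns the triangle into a quad
```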

I don’t know if the plane-clipping is done in hardware. I assume it is, as otherwise there would be 3 transfers of vertex data between system and hardware. If it is in hardware, triangle-clipping can be done in hardware using about the same amount of circuitry as the plane-clipping, and no more than twice as many clock-ticks. Ideally they would be integrated into the same unit, and then there would be minimal change to the current hardware.

The real problem depends on the number of triangles and planes used in clipping. Given that the current rendering rate assumes 6 clippers, the time spent in this stage increases linearly with the number of clippers in a naive hardware implementation. Better performance can be achieved by using effective space-partitioning and determining nodes of influence for each clipper. Then the problem grows only with the spatial complexity of the clip-space and the logarithm of the number of clippers. It may be the case that software runs faster than hardware for a few years, but this is supposed to be a forward-looking forum.

If I wanted to get something incorporated now, I would use the following:
[ul]
[li]State Variable: GL_CLIP_TRIANGLEi; Description: user clip-triangle coordinates; Attribute Group: transform; Initial Value: (0,0,0,0,0,0,0,0,0); Get Command: glGetClipTriangle()[/li]
[li]State Variable: GL_CLIP_TRIANGLEi; Description: ith user clipping triangle enabled; Attribute Group: transform_enable; Initial Value: GL_FALSE; Get Command: glIsEnabled()[/li]
[/ul]
[ul]
[li]New Functions:[/li]
[li]void glClipTriangle(GLenum target, GLdouble coordinates)[/li]
[li]void glGetClipTriangle(GLenum target, GLdouble coordinates)[/li]
[/ul]
This is only a baby-step toward the functionality I had previously described.

too complicated to implement in a fragment vertex program

I worry more about making errors than about the complexity of any single conversion. It is a repetitive process, and just because I (or any one person in particular) can do it correctly most vehemently does not mean that any programmer using OpenGL can do the same. And even I (or that particular person) will (not might) make a mistake in some conversion sometime. I strongly believe the conversion should be automated (say, via an OpenGL API).

Ideally, I would like to be able to use code like the following.

name = glGenClips(1);
//Geometry calls to form clip-surface
// work
valid = glIsClip(name);
//finish work
glDeleteClips(name, 1);

This way I do not have to reinvent the triangle every time I need to make a clip surface.

Volume clipping is a little more complicated. The same inside/outside tests occur, but in world-space instead of view-space. The same mechanism for triangle reconstruction as used in surface clipping can be used here.
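The distinction can be sketched in Python (the predicates are illustrative, not a proposed API): both forms evaluate the same inside/outside test, here against a sphere in world space; a surface clip keeps the interior, while the corresponding volume clip forbids it.

```python
import math

def inside_sphere(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """The shared inside/outside test, evaluated in world space."""
    return math.dist(p, center) <= radius

def surface_clip_keeps(p):
    """Spherical surface clip: everything outside the sphere is clipped."""
    return inside_sphere(p)

def volume_clip_keeps(p):
    """Spherical volume clip: the sphere's interior is the forbidden region."""
    return not inside_sphere(p)

p = (0.5, 0.0, 0.0)
print(surface_clip_keeps(p), volume_clip_keeps(p))  # True False
```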