I’m implementing a hybrid software/OpenGL renderer (a raytracer of sorts), and am currently struggling to replicate OpenGL’s perspective projection behaviour, which I intend to use to project the bounding boxes of objects into image space.
I know my bounding boxes are being formed correctly, and I know I can project and render the majority of them; I know this because the modelview and projection matrices being generated by my algorithm and by OpenGL are essentially identical (retrieving OGL’s ones with glGetDoublev), and when the results are rendered, the images produced by software and by OGL line up almost perfectly.
The only trouble is when points go behind the viewer, which - unfortunately - they’ll frequently do, e.g. when using large ground planes or flying the camera close to an object.
OpenGL seems to get things right - the edges of bounding boxes still line up perfectly with the bounded objects, regardless of whether or not one or more vertices are behind the viewer.
I just can’t get it right in software though!
What I’m doing is forming my modelview and projection matrices, then multiplying each vertex by modelview, then by projection, then dividing X and Y by W (the homogeneous division). These points are then scaled to fit the viewport using the usual viewport transform.
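For reference, the pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the poster’s actual code; the `Vec4` type, the `project` helper and its parameters are hypothetical names, and column-major matrix layout (the OpenGL convention, as returned by glGetDoublev) is assumed:

```c
#include <assert.h>

typedef struct { double x, y, z, w; } Vec4;

/* m is column-major: m[col*4 + row], matching glGetDoublev. */
static Vec4 mat4_mul_vec4(const double m[16], Vec4 v)
{
    Vec4 r;
    r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
    r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
    r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
    r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
    return r;
}

/* Object space -> window space.  Only valid when clip.w > 0; points
 * with w <= 0 must be clipped first, which is the subject of this
 * thread. */
static Vec4 project(const double modelview[16], const double projection[16],
                    Vec4 obj, int vp_x, int vp_y, int vp_w, int vp_h)
{
    Vec4 eye  = mat4_mul_vec4(modelview,  obj);
    Vec4 clip = mat4_mul_vec4(projection, eye);
    Vec4 ndc  = { clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, clip.w };
    Vec4 win;
    win.x = vp_x + (ndc.x + 1.0) * 0.5 * vp_w;  /* viewport transform   */
    win.y = vp_y + (ndc.y + 1.0) * 0.5 * vp_h;
    win.z = (ndc.z + 1.0) * 0.5;                /* default depth range  */
    win.w = clip.w;                             /* keep w for clip tests */
    return win;
}
```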
The only vertices which don’t line up are those for which W<0. I’ve tried simply negating X and Y for vertices for which W<0; this gets me most of the way there, but these vertices will still ‘drift’ slightly - enough to be noticeable, and far too much to be useful for the intended application.
Does OpenGL really do just a simple homogeneous division? Is there some “magic” one can perform to correct the projection for points behind the viewer?
Take a look at Mesa - an open-source OpenGL implementation - to see how they implemented the OpenGL pipeline.
I already tried to. Couldn’t find the part I was looking for! I may just have to try harder…
I’m not even sure which file to look in, let alone which bit of the file (some of them are quite long).
I’ve also got SGI’s sample implementation source, but haven’t looked at it yet. I suppose that’s next on the agenda…
I got gluProject() to work (I’d tried before, but must have had my arguments wrong or something…), and it’s outputting coords that are just as wrong as mine, so whatever magic OpenGL does, GLU doesn’t.
I had another look at Mesa, but the source is no less confusing than it was the first time round. I know the code I’m after must be in there somewhere, though, because I just ran my program under it and it behaved identically to when running on hardware (a good sign, I suppose, but not helpful in and of itself!).
Your problem is that you need to clip in homogeneous space before the homogeneous divide.
With OpenGL’s projection, points behind the viewer have a negative w.
If you do a homogeneous divide of a negative (x, y, z) with a negative w, you get a point in front of the viewer which can be inside the frustum, but actually it should be clipped at the zNear plane.
Read the OpenGL spec, chapter 2.12 “Clipping”:
Primitives are clipped to the clip volume. In clip coordinates, the view volume is
−wc <= xc <= wc
−wc <= yc <= wc
−wc <= zc <= wc.
And further down (the interesting part is the wc > 0):
A line segment or polygon whose vertices have wc values of differing signs may generate multiple connected components after clipping. GL implementations are not required to handle this situation. That is, only the portion of the primitive that lies in the region of wc > 0 need be produced by clipping.
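In code, the spec’s clip-volume test is just three interval checks per vertex. A minimal sketch (my own helper names, not Mesa’s code); note that for w < 0 the inequalities can never all hold, so points behind the viewer are automatically outside:

```c
#include <assert.h>
#include <stdbool.h>

typedef struct { double x, y, z, w; } Vec4;

/* True iff the clip-space point satisfies -w <= x, y, z <= w.
 * For w > 0 this is exactly the view volume; for w < 0 the lower
 * bound -w exceeds the upper bound w, so nothing passes. */
static bool inside_clip_volume(Vec4 c)
{
    return -c.w <= c.x && c.x <= c.w
        && -c.w <= c.y && c.y <= c.w
        && -c.w <= c.z && c.z <= c.w;
}
```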
Thanks! But (sorry)…
I’ve tried looking up homogeneous clipping, but don’t quite “get” it, and can’t find a decent reference.
Am I right in thinking that basically, after multiplication by the perspective matrix, but before homogeneous division, the conditions −wc <= xc <= wc, −wc <= yc <= wc and −wc <= zc <= wc must be met? Is this also sufficient for cases where W < 0?
If the conditions are not met, what do I do to the point? Discard it? Set it to some maximum allowable value?
Basically, how does OpenGL generate the points between which a line will be drawn in cases where one of them is clipped? Clearly it can’t just be discarding a clipped point, or the line wouldn’t be rendered at all, let alone appear to be rendered between two correctly positioned points.
In the final app, I won’t be rendering these points directly*, but I will sort the points to find the leftmost, rightmost, topmost and bottom-most, such that I can divide the image plane into rectangular regions and decide which bits do and don’t need rendering. So I need them to be correct in image-space; or, at least, have points for which projection fails clamped to the correct edge.
I’ve supposedly done a lecture course on computer graphics at uni, but when he started talking about drawing lines between points “the wrong way”, I’d already hit the limit of my understanding and couldn’t grasp how the thing managed to draw “inside-out” rather than just try to draw to infinity and crash.
- except for debugging purposes, in which cases OpenGL’s rendering will more than suffice for visualisation.
You must manually clip the primitives against the per-vertex −w to +w limits.
You said you have SGI and MESA sources. They must have code for that.
The 3D region bounded by the −w to w clipping interval becomes a unit cube after the homogeneous divide, which makes the frustum clip plane equations in clip coordinates simply the faces of that cube.
The part behind the viewer is discarded because the zNear plane is always in front of the viewer, which is always at the origin in that system.
Links to how clipping works can be found if you search for Cohen-Sutherland clipping.
It’s normally explained for 2D using nine regions. For 3D, extend that to six clipping planes partitioning space into 27 regions. That needs six bits to represent the clipping situation of every single vertex. Calculating the points on the clipping planes is simple linear algebra.
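The six-bit-per-vertex idea can be sketched like this in clip space (a Cohen-Sutherland style outcode; the names here are my own, not from Mesa or the SGI code):

```c
#include <assert.h>

typedef struct { double x, y, z, w; } Vec4;

/* One bit per frustum plane.  A zero outcode means the vertex is
 * inside the view volume; if the bitwise AND of two endpoints'
 * outcodes is non-zero, the segment lies wholly outside one plane
 * and can be trivially rejected. */
enum {
    OUT_LEFT   = 1 << 0,  /* x < -w */
    OUT_RIGHT  = 1 << 1,  /* x >  w */
    OUT_BOTTOM = 1 << 2,  /* y < -w */
    OUT_TOP    = 1 << 3,  /* y >  w */
    OUT_NEAR   = 1 << 4,  /* z < -w */
    OUT_FAR    = 1 << 5   /* z >  w */
};

static unsigned outcode(Vec4 c)
{
    unsigned code = 0;
    if (c.x < -c.w) code |= OUT_LEFT;
    if (c.x >  c.w) code |= OUT_RIGHT;
    if (c.y < -c.w) code |= OUT_BOTTOM;
    if (c.y >  c.w) code |= OUT_TOP;
    if (c.z < -c.w) code |= OUT_NEAR;
    if (c.z >  c.w) code |= OUT_FAR;
    return code;
}
```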
The Mesa code is - to the casual observer - a complete mess, but thanks to the SGI code, I’m getting there. I already understood the idea behind view frustum culling (distinct from clipping), but prior to this thread, didn’t know it could be done with homogeneous coordinates in the manner in question.
I’ve got basic vertex culling working; once you know what the constraints are, it’s trivial. Slightly less trivial is clipping those lines or polygons that intersect with one of the planes, but I’ve dug the necessary algebra for generating the new endpoints out of the SGI code, and am getting there.
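For anyone following along, the “less trivial” part - generating new endpoints where a segment crosses a plane - amounts to interpolating the homogeneous coordinates themselves. A sketch for the near plane only (my own function, assuming the standard clip volume where “inside the near plane” means z + w >= 0):

```c
#include <assert.h>

typedef struct { double x, y, z, w; } Vec4;

/* Clip the segment a-b against the near plane in clip coordinates.
 * Interpolation is done on the homogeneous coordinates, which is
 * what makes clipping before the divide work.  Returns 0 if the
 * segment is fully outside, otherwise writes the (possibly
 * shortened) segment back through the pointers. */
static int clip_near(Vec4 *a, Vec4 *b)
{
    double da = a->z + a->w;   /* signed distance to the plane */
    double db = b->z + b->w;

    if (da < 0 && db < 0) return 0;    /* trivially rejected  */
    if (da >= 0 && db >= 0) return 1;  /* trivially accepted  */

    double t = da / (da - db);         /* crossing parameter  */
    Vec4 p = {
        a->x + t * (b->x - a->x),
        a->y + t * (b->y - a->y),
        a->z + t * (b->z - a->z),
        a->w + t * (b->w - a->w)
    };
    if (da < 0) *a = p; else *b = p;
    return 1;
}
```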
Thanks for your help - please bear in mind these posts have been made after a long day grappling with this problem, followed by a sleepless night! I’m not a “newbie” when it comes to 3d graphics, but am largely self-taught; so when I try to implement something armed with incomplete knowledge of the specifics, I tend to find out the hard way.
So near, but yet so far… I’ve got the algorithm up and running; the code’s not very pretty yet, but it seems to be doing what it ought to do - in most cases.
The trouble now is not the algorithm (I believe), but the frustum planes, specifically the near plane. Things clipped by the near plane appear to be cut too short, ending within the visible image, not outside/at the edge of it.
Any handy hints on how to generate the view frustum? I thought/guessed my way through it as best I could, and the left, right, bottom and top planes appear to be correct. I could go trawling through the SGI code again, but I’d prefer not to.
Thanks, but I’ve solved it now - it was something more subtle than that, and another sign of my lack of complete understanding. The planes I was forming weren’t quite correct, but I was very much on the right track in that regard.
The problem was that I was calculating distances to clip planes which I’d defined in eye space, using vertices in clip space. I was also then calculating new, clipped line endpoints by interpolating between vertices in clip space!
If you have clip planes defined in eye space, you must calculate distances and perform interpolation using vertices in eye space, then transform the new vertices into clip space to re-check the homogeneous clipping.
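Alternatively - and this avoids keeping eye-space vertices around at all - the frustum planes can be written directly in clip coordinates, where each one is just a four-component dot product. A sketch under the assumption of the standard clip volume (the names here are mine, not from the SGI code):

```c
#include <assert.h>

typedef struct { double x, y, z, w; } Vec4;

static double dot4(Vec4 p, Vec4 v)
{
    return p.x*v.x + p.y*v.y + p.z*v.z + p.w*v.w;
}

/* The six frustum planes expressed directly in clip coordinates:
 * dot4(plane, vertex) >= 0 means "inside" that plane.  Distances
 * and interpolation can then be done entirely in clip space. */
static const Vec4 frustum_planes[6] = {
    { 1,  0,  0, 1},  /*  x >= -w  (left)   */
    {-1,  0,  0, 1},  /*  x <=  w  (right)  */
    { 0,  1,  0, 1},  /*  y >= -w  (bottom) */
    { 0, -1,  0, 1},  /*  y <=  w  (top)    */
    { 0,  0,  1, 1},  /*  z >= -w  (near)   */
    { 0,  0, -1, 1}   /*  z <=  w  (far)    */
};
```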
The SGI code hints at the possibility of defining clip planes directly in clip space, as it doesn’t keep track of eye-space vertices separately when testing against frustum planes (only against user-defined clip planes - which explains where I got the idea that it wasn’t necessary to keep track of eye-space coords), but I can’t fathom out how (damn function pointers, stopping me from finding an essential bit of the code). Irritatingly, the plane equations for the view frustum planes seem to be one of very few things you can’t retrieve via glGet!
Given what these points are to be used for, it’ll be sufficient in the final app for me to clip just against the near plane; other clipping can be done directly in image space. So it won’t be a drastic performance hit to have to re-transform a vertex if it gets clipped.
I don’t need to perform colouring, texturing or even rasterisation directly using these coordinates; I just need to know enough about where they are to draw a rectangle in image space that bounds them - if a point after transformation to 2D is outside the image plane, it can just be clamped to the window edge!
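The clamp-and-bound step described above is simple enough to sketch; this assumes the points have already been clipped against the near plane (so all have w > 0), and all names here are hypothetical:

```c
#include <assert.h>

typedef struct { double x, y; } Vec2;
typedef struct { double x0, y0, x1, y1; } Rect;

static double clamp(double v, double lo, double hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Axis-aligned image-space bounds of n projected points, clamped
 * to the viewport [0, w] x [0, h].  Starts from an inverted
 * rectangle and shrink-wraps it around the points. */
static Rect bound_points(const Vec2 *pts, int n, double w, double h)
{
    Rect r = { w, h, 0.0, 0.0 };
    for (int i = 0; i < n; i++) {
        double x = clamp(pts[i].x, 0.0, w);
        double y = clamp(pts[i].y, 0.0, h);
        if (x < r.x0) r.x0 = x;
        if (y < r.y0) r.y0 = y;
        if (x > r.x1) r.x1 = x;
        if (y > r.y1) r.y1 = y;
    }
    return r;
}
```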