I am a student and for a project I have to implement back face culling. I understand the theory, but don’t understand how GL knows which way a polygon is facing. To make matters worse, I’m not allowed to directly use OpenGL, but must manually code it myself. If anyone knows how to go about this process, I would be VERY grateful!

It is actually a simple problem to solve with a little math. Compute a surface normal for the face, then take the dot product of that normal with your view vector (from the camera to the face). If the dot product is positive, the face is facing away from the camera; if negative, it is facing the camera.
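A minimal sketch of the first step in C (the vector type and function names are mine, not from any particular library):

```c
/* Illustrative 3-D vector type. */
typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) {
    Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return r;
}

static double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Unnormalized surface normal of triangle (p1, p2, p3).  With the
   vertices in counter-clockwise order, the normal follows the
   right-hand rule. */
static Vec3 face_normal(Vec3 p1, Vec3 p2, Vec3 p3) {
    return cross(sub(p2, p1), sub(p3, p1));
}
```

The normal doesn't need to be unit length for a culling test, since only the sign of the dot product matters.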

Dot product, gotcha. Thanks for that.

If you have the coordinates for the triangle in screen-space, it should be even simpler:

Triangle points (set Z = 0): p1, p2, p3

Let V = (p2-p1) X (p3-p1)

If Z-component of V > 0, triangle is drawn counter-clock-wise.

If Z-component of V < 0, triangle is drawn clock-wise.

This can be simplified to:

```
facing = (p2.x - p1.x)*(p3.y - p1.y) - (p2.y - p1.y)*(p3.x - p1.x)

if facing > 0
    counter-clock-wise
else
    clock-wise
end if
```
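The same test as a sketch in C (function names are mine; note that the sign convention assumes a y-up screen coordinate system, so flip the comparison if your y axis points down):

```c
#include <stdbool.h>

/* Illustrative 2-D point type. */
typedef struct { double x, y; } Vec2;

/* Twice the signed area of the triangle; this is the z component of
   the cross product (p2-p1) x (p3-p1) with z set to 0. */
static double facing(Vec2 p1, Vec2 p2, Vec2 p3) {
    return (p2.x - p1.x) * (p3.y - p1.y)
         - (p2.y - p1.y) * (p3.x - p1.x);
}

/* > 0 means counter-clockwise winding (y-up coordinates). */
static bool is_counter_clockwise(Vec2 p1, Vec2 p2, Vec2 p3) {
    return facing(p1, p2, p3) > 0.0;
}
```

Reversing the order of any two vertices flips the sign, which is exactly how a back-facing triangle shows up after projection.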

Compute a surface normal vector to the face and compute the dot product of the resulting normal with your look vector (from the camera to the face).

I did this too, but it does not give exactly the result you want. If you use a “look vector”, that code will cull the poly when you look away from it. However, in perspective mode you can still see such polys, so your code would cull the face a bit too early: it would just pop away, and you can see this happen.

Instead, use the position vector of the camera.

Compute the dot product of your face's normal with one vertex of the poly (it doesn't matter which one). This gives you a number: the distance of the poly's plane from the origin. You only have to do this once, because the poly doesn't move.

Then, every frame, compute the dot product of the face's normal with your camera position. This, again, gives you a distance. If this distance is smaller than the one computed above, you are on the back side of the poly; otherwise you are on the front.
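Jan's two-step method might look like this in C (a sketch under my own names and types; the normal need not be unit length, since both distances are scaled by the same factor):

```c
/* Illustrative 3-D vector type. */
typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Precompute once per static face: the plane's signed distance from
   the origin, scaled by the normal's length. */
static double plane_d(Vec3 normal, Vec3 any_vertex) {
    return dot(normal, any_vertex);
}

/* Per frame: the camera is on the front side of the plane iff its
   signed distance is at least the plane's distance d. */
static int is_front_facing(Vec3 normal, double d, Vec3 camera_pos) {
    return dot(normal, camera_pos) >= d;
}
```

Note the precomputation only holds while the poly stays put; if the face moves, `plane_d` has to be recomputed, which is the question Furrage raises below.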

It's really easy, and this code works perfectly. I use it myself.

Good luck and have fun.

Jan.

Originally posted by Jan2000:

If you use a “look vector”, that code will cull the poly when you look away from it. However, in perspective mode you can still see such polys, so your code would cull the face a bit too early: it would just pop away, and you can see this happen.

Really? I thought perspective was limited to +/- 90 degrees. But then, maybe I'm confusing that with lighting or something.

Originally posted by Jan2000:

[b]Compute the dot product of your face's normal with one vertex of the poly (it doesn't matter which one). This gives you a number: the distance of the poly's plane from the origin. You only have to do this once, because the poly doesn't move.

Then, every frame, compute the dot product of the face's normal with your camera position. This, again, gives you a distance. If this distance is smaller than the one computed above, you are on the back side of the poly; otherwise you are on the front.[/b]

What happens if the poly moves, the camera changes direction or the camera moves?

[This message has been edited by Furrage (edited 02-26-2002).]

Originally posted by Jan2000:

I did this too, but it does not give exactly the result you want. If you use a “look vector”, that code will cull the poly when you look away from it. However, in perspective mode you can still see such polys, so your code would cull the face a bit too early: it would just pop away, and you can see this happen.

This is true. Here’s an example:

[ASCII diagram, garbled in transit: two parallel faces A and B, with the camera at *; B is in view while A is not, even though both faces share the same normal.]

As you can see, A and B are parallel but B is visible and A is not. If you dot the normals of A and B with the camera’s look vector, B will be classified as back-facing. That is wrong.

The dot product can be made to work if you use the correct view vector.

http://www.sbdev.pwp.blueyonder.co.uk/tutorials/tut4.htm

This explains in detail why it works:

http://www.cee.hw.ac.uk/~ian/hyper00/polypipe/bfcull.html
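A sketch of the corrected test in C (names are mine): the view vector runs from the camera's position to a vertex of the face, rather than along the camera's look direction, so the perspective problem above goes away.

```c
/* Illustrative 3-D vector type. */
typedef struct { double x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) {
    Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z };
    return r;
}

static double dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* View vector from the camera to a vertex of the face -- NOT the
   camera's look direction.  A negative dot product with the face
   normal means the face points toward the camera. */
static int is_back_facing(Vec3 normal, Vec3 vertex, Vec3 camera_pos) {
    Vec3 view = sub(vertex, camera_pos);
    return dot(normal, view) >= 0.0;
}
```

Because the view vector depends on where the face actually is, two parallel faces like A and B above can get different (and correct) classifications.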

The other way is to test the face winding after projection, in 2D screen space, using the sign of the cross product of the edge vectors; AFAIK that is more common in hardware. The normal-based test looks a bit old-fashioned by comparison, mainly used by software folks working in pre-projection land.

Thanks to all those who have contributed to this topic, it has been a great help. I still have much to do, so the implementation will have to wait for a while.

Gobbo