I have read a lot about depth-sorting polygons before rendering, which is especially needed with some blending modes. Here is my question: do you perform all your transformations yourself, in your own code? Is there any way to get the transformed coordinates back from OpenGL? (I suspect the OpenGL transformation routines are better optimized than mine will ever be. And what about graphics boards with their own transformation processor…?)

Thanks for any reply.


Yes, you have to do the transformations yourself. Don’t try to use OpenGL as a math library; that’s not what it was designed for, and it will kill your performance.

The easiest way is to do all your own transformations. You could of course use OpenGL’s feedback buffer to transform the vertices and read back their z depths, but I don’t know how that will perform with a large number of polygons. On video cards that support hardware transforms, you may still be able to leverage that hardware if all of your transparent polygons are stored in a presorted structure, such as a BSP tree. But any transparent polygon that moves with respect to the world will likely force you to depth-sort all transparent polygons in real time.
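The CPU-side approach described above can be sketched roughly as follows. This is a minimal illustration, not anyone's actual engine code: the `Vec3`, `Mat4`, and `Polygon` types and the centroid-depth heuristic are my own assumptions. It transforms each polygon into eye space with a column-major modelview matrix (OpenGL's layout) and sorts back to front (in OpenGL eye space the camera looks down −z, so the most negative z is farthest away):

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

// Column-major 4x4 matrix, stored the way OpenGL stores it.
struct Mat4 { float m[16]; };

// Transform an object-space point by a modelview matrix (w assumed 1).
Vec3 transformPoint(const Mat4& M, const Vec3& v) {
    return {
        M.m[0]*v.x + M.m[4]*v.y + M.m[8]*v.z  + M.m[12],
        M.m[1]*v.x + M.m[5]*v.y + M.m[9]*v.z  + M.m[13],
        M.m[2]*v.x + M.m[6]*v.y + M.m[10]*v.z + M.m[14],
    };
}

struct Polygon {
    std::vector<Vec3> verts; // object-space vertices
    float eyeDepth;          // filled in by depthSort
};

// Sort transparent polygons back to front by eye-space centroid depth.
// Farthest (most negative z) first, so they render correctly when blended.
void depthSort(std::vector<Polygon>& polys, const Mat4& modelview) {
    for (auto& p : polys) {
        float z = 0.0f;
        for (const auto& v : p.verts)
            z += transformPoint(modelview, v).z;
        p.eyeDepth = z / static_cast<float>(p.verts.size());
    }
    std::sort(polys.begin(), polys.end(),
              [](const Polygon& a, const Polygon& b) {
                  return a.eyeDepth < b.eyeDepth; // farthest first
              });
}
```

Note that centroid depth is only a heuristic; intersecting or very large polygons can still sort incorrectly and may need to be split.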

[This message has been edited by DFrey (edited 07-12-2000).]

What I was thinking of doing is using the OpenGL transformations and storing the current modelview matrix for every object that has transparent faces, and for the particles just the camera (view) matrix. That way I could use the T&L engine of a 3D accelerator, if present, for the opaque objects, and manually transform only the transparent entities, using a matrix that OpenGL itself has built.


Interesting approach. I’d suggest you try both and see which works better for your needs.