Depth-sorting translucent parts - question

Hi everyone.
I am writing a graphics engine and ran into the following problem: rendering transparency correctly. Now, I know you’re supposed to render solid geometry first, then disable depth writes and depth-sort the transparent polys. Here’s the catch:
In my engine objects are represented as interleaved vertex arrays (2 uv + 3 normal + 3 xyz / faces v0,v1,v2). When I draw an object I apply its rotation/translation and then feed the array.
If I were to depth-sort the transparent parts of an object (assuming it has different textures applied to different parts), I would have to apply the same transformations to those faces in software to find out their coordinates/distance from the camera.
Example:
Object SOMETHING {
Solid triangles[]: (1,2,3),(2,3,4)
Transparent triangles[]: (4,5,6),(1,3,6)
Verts: (u,v,normalX, normalY, normalZ, x,y,z)…
Position 10,20,30
Rotation 180,30,50 (about object’s 0,0,0)
}
Let’s say I called glTranslate, glRotate and rendered the solid part.
Now, if I want to sort my transparent triangles by distance from the viewer and, say, dump them into some kind of array, I would have to first rotate each vertex referenced by those triangles around the object’s origin, then shift those vertices by the object’s position. This would give me “transformed” vertex positions from which to calculate how far they are from the camera…
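For concreteness, that brute-force approach would look something like this (a minimal C++ sketch; the rotation is simplified to yaw about Y only, and the names are hypothetical, not from any real engine):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { int a, b, c; };  // indices into the vertex array

// Model space -> world space: rotate about the object's origin
// (yaw about Y only, for brevity), then add the object's position.
Vec3 toWorld(Vec3 v, float yawDeg, Vec3 pos) {
    float r = yawDeg * 3.14159265f / 180.0f;
    float c = std::cos(r), s = std::sin(r);
    return { c * v.x + s * v.z + pos.x,
             v.y + pos.y,
             -s * v.x + c * v.z + pos.z };
}

float sqDist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Sort transparent triangles back-to-front. Note that every referenced
// vertex gets transformed in software, every frame -- the cost at issue.
void sortTransparent(std::vector<Tri>& tris, const std::vector<Vec3>& verts,
                     float yawDeg, Vec3 pos, Vec3 cam) {
    auto key = [&](const Tri& t) {
        Vec3 c = { (verts[t.a].x + verts[t.b].x + verts[t.c].x) / 3,
                   (verts[t.a].y + verts[t.b].y + verts[t.c].y) / 3,
                   (verts[t.a].z + verts[t.b].z + verts[t.c].z) / 3 };
        return sqDist(toWorld(c, yawDeg, pos), cam);  // farthest first
    };
    std::sort(tris.begin(), tris.end(),
              [&](const Tri& x, const Tri& y) { return key(x) > key(y); });
}
```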

This seems terribly counterproductive. Especially if I have to do it each frame. Hardware does all the transform for me, why should I transform translucent parts in software? I hope somebody knows a better approach…

I really appreciate your effort in reading this. Thank you.

-nagual

Why not just take the vector to the triangle directly, without the transformation, and measure that for sorting?

I can’t do that because of the way models are represented. Each model is stored in memory in its “zero” position, meaning the rotation is fixed and all coordinates are centered at the origin, sort of the way you’d see it in 3D Studio… This way I can have, for example, two objects sharing the same model on screen, each rotated and positioned differently.
To do that I store them in “zero” position and then call glRotate and glTranslate before I call glDrawElements.
But to get the actual distance from a particular triangle to the viewer, I have to have that triangle’s coords transformed according to the object’s transform.
By object I mean an entity on a screen, by model I mean the representation of geometry in memory.
I hope this cleared things up.
Thanks again.
-nagual

OK… once again…
Why not do it directly?

or in other words… instead of transforming your MESH into camera space to know how far away from the cam the mesh is, transform the CAMERA into MESH SPACE (the space your vertex data is stored in… the “zero” position, as you called it…)

that means one simple transformation of one position (the cam pos… one vector), and then do what you wanna do: sort by depth

Thanks a lot, man
I was slow…
Now I understand what you were trying to say… Great idea.

-nagual

I know, my advice is not that good, because it means that you would have to redesign your whole engine. However:

If you start a new engine, why not use BSP trees? With them you sort ALL polys from back to front, therefore you don’t need the depth test (about 20%-50% faster!), you can cull whole subtrees, and you don’t have any problems with transparency, because you draw everything from back to front.
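The back-to-front BSP walk being described is just a recursive "far side first" traversal. A minimal sketch, with the simplifying assumption of axis-aligned splitting planes (plane x = splitX) and strings standing in for polygons; real BSPs split along arbitrary planes taken from the level geometry:

```cpp
#include <string>
#include <vector>

struct BspNode {
    float splitX;                    // splitting plane: x = splitX
    std::vector<std::string> polys;  // polygons lying on this plane
    BspNode* front = nullptr;        // x > splitX side
    BspNode* back  = nullptr;        // x < splitX side
};

// Painter's order: the subtree on the far side of the plane first,
// then the node's own polygons, then the side the camera is on.
// Everything lands back-to-front, so no depth test is needed.
void drawBackToFront(const BspNode* n, float camX,
                     std::vector<std::string>& out) {
    if (!n) return;
    bool camInFront = camX > n->splitX;
    drawBackToFront(camInFront ? n->back : n->front, camX, out);
    for (const auto& p : n->polys) out.push_back(p);
    drawBackToFront(camInFront ? n->front : n->back, camX, out);
}
```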

Jan.

Originally posted by Jan2000:
therefore you don’t need the depth test (about 20%-50% faster!), you can cull whole subtrees and you don’t have any problems with transparency, because you draw everything from back to front.

But you forget one thing: on the newest generation of 3D cards you shouldn’t draw back to front but front to back.

I won’t have to redesign my whole engine. This part is in development right now; that’s what I was trying to figure out. About BSP trees: I don’t have any kind of map or level concept just yet. I am more in favor of octrees. At any rate I will still need to do z-sorting, because models/objects are dynamic and the “level” is static. Objects can’t be turned into a BSP since they’re to be animated etc…

Thanks for advice though.

-nagual

Yes, you are right, BSPs are for static geometry. BUT you can combine BSP trees and the depth test. Set the depth func to GL_ALWAYS, so that depth values are written but never compared, and then draw your level.
When you draw dynamic objects, just switch depth testing back to GL_LESS and render your objects. This is still faster than rendering everything with the depth test fully enabled.
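As a state-setup fragment, that hybrid scheme amounts to the following (standard OpenGL 1.x calls; `drawLevel()` and `drawObjects()` are hypothetical placeholders for your own rendering code):

```c
glEnable(GL_DEPTH_TEST);

/* Static BSP level, already sorted back-to-front: always pass the
   depth test, but still write depth values for the objects to use. */
glDepthFunc(GL_ALWAYS);
drawLevel();

/* Dynamic objects: compare against the depth the level just wrote. */
glDepthFunc(GL_LESS);
drawObjects();
```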

I also heard that front to back might be better than back to front, so I tested it. With back to front and no depth buffer I had a framerate of around 800 (not so many polys). With front to back and depth testing I had around 400 FPS! With back to front and the depth func set to GL_ALWAYS I had a framerate around 600.

I use a GeForce 2 Ti; maybe with a GF3 or 4 it is better to draw front to back, but I don’t think so.

Jan.