I have been coding my own 3D engine, which currently reads LightWave 6 object and scene files. After finishing the basic framework supporting opaque surfaces with or without textures, I was going to implement transparent textures. However, I stumbled into some difficulties with the rendering pipeline.
I know that to get correct results I should first draw the opaque objects and surfaces, and then draw the transparent faces sorted from back to front with z-buffer writes disabled. What I'm really worried about is the speed hit if I have to change texture bindings often while rendering a scene.
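Just to be concrete, the sorting step of that second pass looks roughly like this in my head (the struct and function names are placeholders, not my actual engine code; the real draw would follow with depth writes off, e.g. glDepthMask(GL_FALSE) in OpenGL):

```cpp
#include <algorithm>
#include <vector>

// Placeholder for a transparent face: just the data needed for sorting.
struct TransparentFace {
    float viewDepth;  // distance from the camera along the view axis
    int   textureId;  // which texture this face uses
};

// Sort transparent faces back to front (largest view depth first) so
// that blending composites correctly.  Only the sorting step is shown;
// drawing would happen afterwards with z-buffer writes disabled.
void sortBackToFront(std::vector<TransparentFace>& faces) {
    std::sort(faces.begin(), faces.end(),
              [](const TransparentFace& a, const TransparentFace& b) {
                  return a.viewDepth > b.viewDepth;
              });
}
```

The worry is that after this sort the faces come out in depth order, not texture order, so neighboring faces can each force a texture bind.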
The problem is not so severe with ordinary LightWave scenes, since I can simply tell anyone doing the graphics that transparent objects inside transparent objects are a bad thing, and this should remove most of the texture binding changes in a scene. However, I was also thinking about implementing a particle system, and there the problem is bigger, since I can't control the locations of the particles in any way. The particles can end up inside transparent objects, and in some cases this is actually wanted. (Think of a partly transparent sphere with fire at the bottom and smoke rising from an opening on top.)
One way of doing this would be to take the nearest z-value of an object to the camera and use it as the sorting value in the transparent objects list, but this would cause particles that are inside the object to be rendered behind the object's back surface and produce visual artifacts. I will probably implement this approach anyway, since I think it could be acceptable in some specific cases, but I would really like a more generic solution to this problem.
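By "nearest z-value" I mean something like taking the minimum view-space z over the object's bounding-box corners and sorting on that (assuming view-space z grows with distance from the camera; the types here are made up for illustration):

```cpp
#include <algorithm>
#include <cfloat>
#include <vector>

// Minimal vector type, just for the sketch.
struct Vec3 { float x, y, z; };

// Hypothetical sort key: the nearest view-space z among an object's
// bounding-box corners.  Sorting objects by this value in descending
// order gives the coarse back-to-front order described above.
float nearestZ(const std::vector<Vec3>& corners) {
    float zmin = FLT_MAX;
    for (const Vec3& c : corners)
        zmin = std::min(zmin, c.z);
    return zmin;
}
```

This is exactly where the artifact comes from: a particle whose own z lies between the object's nearest and farthest surfaces still sorts entirely before or after the whole object.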
So I was wondering whether someone has solved this problem in a more elegant way, or could give me some pointers on how big a speed hit changing textures and the blending function causes in scenes where there can easily be a lot of transparent objects.
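One mitigation I'm already considering, independent of the sorting question, is caching the currently bound texture so redundant binds are skipped entirely. A rough sketch (realBind stands in for glBindTexture or whatever the API call is; the class is hypothetical, not an existing library):

```cpp
#include <functional>
#include <utility>

// Minimal texture-bind cache: only issues the real bind call when the
// requested texture differs from the one already bound.
class TextureBinder {
public:
    explicit TextureBinder(std::function<void(int)> realBind)
        : realBind_(std::move(realBind)) {}

    void bind(int textureId) {
        if (textureId == current_) return;  // redundant bind, skip it
        realBind_(textureId);
        current_ = textureId;
        ++bindCount_;
    }

    // Number of bind calls actually issued (useful for profiling).
    int bindCount() const { return bindCount_; }

private:
    std::function<void(int)> realBind_;
    int current_ = -1;   // sentinel: no texture bound yet
    int bindCount_ = 0;
};
```

That only helps when consecutive faces happen to share a texture, though, which a strict back-to-front sort does nothing to encourage; hence the question about how expensive the remaining binds really are.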