I have been coding my own 3D engine, which currently reads LightWave 6 object and scene files. After finishing the basic framework supporting opaque surfaces with or without textures, I was going to implement transparent textures. However, I did stumble into some difficulties with the rendering pipeline.
I know that to get correct results from rendering I should first draw the opaque objects and surfaces, and after that draw the transparent faces sorted from back to front with z-buffer writes turned off. What I'm really worried about is the speed hit if I have to change texture bindings often while rendering a scene.
The problem is not so severe with ordinary LightWave scenes, since I can easily tell anyone doing the graphics that transparent objects inside transparent objects are a bad thing, and this should remove most of the texture binding changes in a scene. However, I was also thinking about implementing a particle system, and there this is a bigger problem, since I can't control the locations of the particles in any way. So the particles can be inside transparent objects, and in some cases this is actually wanted. (Think of a partly transparent sphere with fire at the bottom and smoke rising from an opening at the top.)
One way of doing this would be to get the nearest z-value of an object to the camera and use it as the sorting value in the transparent objects list, but then particles that are inside the object would be rendered behind the object's back surface and cause some visual artifacts. I will probably implement this approach anyway, since I think it could be acceptable in some specific cases. However, I would really like a more generic solution to this problem.
So I was wondering whether someone has solved this problem in a more elegant way, or has some pointers on how big a speed hit changing textures and the blending function causes in cases where there could easily be a lot of transparent objects.
Particle texture images are usually pretty small - how about putting all of them into a single texture? No binding costs to worry about then, and you can sort however you like.
Yes, storing all the particle textures in a single texture would help the binding problem a lot, especially in cases where the particle cannot be inside an object. There is still a problem with particles that are inside an object's bounding box: if the object's faces and the particles inside it were just z-sorted together, there could still be quite a lot of texture bind changes between the particle texture and the object's face textures.
What I was thinking is that I would first need some algorithm to check whether a particle is behind, inside, or in front of the object (any pointers to something suitably simple and fast for this?) and to sort the object's faces into back-facing and front-facing. Then I could draw the object and all particles inside its bounding box like this:
1. Particles that are behind the object.
2. Back-facing object faces.
3. Particles that are inside the object.
4. Front-facing object faces.
5. Particles that are in front of the object.
With this rendering pipeline I would need an algorithm for checking whether a particle is inside an object or not. If someone has ideas on how to do this efficiently I would appreciate it; otherwise it's back to reading the comp.graphics.algorithms FAQ and Computer Graphics: Principles and Practice.
The only way I know to do that perfectly (or nearly so) would be to test which side of each nonredundant face plane of the object's polygons the test point is on (assuming the object is simple and convex), sort of like collision detection. To do a pixel-perfect check, hmm, I suppose something could be done with the stencil buffer, but it wouldn't be fast.
One idea I had, for the simple convex case, was to subdivide the bounding box into smaller cells and use them as a look-up table to check whether the particle coordinate is inside the object or not. I don't need really accurate checking, just something reasonably close for a good visual effect.
Another thing I noticed when thinking about a transparent object inside another transparent object is that I could use a hierarchical model, where I would first render the back faces of the parent object, then the child object, and after that the parent object's front faces. This should give accurate results for objects that are convex and not overlapping.
I had this problem too, and I solved it this easy way:
First I render the opaque triangles.
Then I set glFrontFace(GL_CW) and render the back faces of my transparent objects with z-buffer writes off, then I set glFrontFace(GL_CCW) and render the front faces with z-buffer writes on. This way I can use GL_TRIANGLE_STRIP to optimize my objects, and it looks good (it's an engine for a game).