>>>I can think of a simpler example! An opaque ship in translucent water: you always need to draw the ship first, right? Well, there you go. Now what if you have a big dynamic system with objects that could be partly in the water, completely submerged, or not submerged at all? And if parts of the objects are blended… It rapidly becomes a headache.<<<
If you render the opaque objects first, then render the sorted transparent polygons, it will work OK. If only parts of an object are transparent, then you need to separate out those parts and sort them with the rest of the transparent geometry.
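A minimal sketch of that two-pass idea (all names here are hypothetical, not from any real engine): draw the opaque geometry first in any order, then sort the transparent pieces back-to-front by distance from the camera and draw them with blending enabled.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Object {
    float x, y, z;    // object centre in world space
    bool transparent; // needs blending?
};

// Squared distance from the object to the camera (no sqrt needed
// for sorting purposes).
static float distSq(const Object& o, float cx, float cy, float cz) {
    float dx = o.x - cx, dy = o.y - cy, dz = o.z - cz;
    return dx * dx + dy * dy + dz * dz;
}

// Returns indices in draw order: opaque objects first (any order),
// then transparent objects sorted farthest-to-nearest, so each
// blended fragment composites over everything behind it.
std::vector<std::size_t> drawOrder(const std::vector<Object>& objs,
                                   float cx, float cy, float cz) {
    std::vector<std::size_t> opaque, blended;
    for (std::size_t i = 0; i < objs.size(); ++i)
        (objs[i].transparent ? blended : opaque).push_back(i);
    std::sort(blended.begin(), blended.end(),
              [&](std::size_t a, std::size_t b) {
                  return distSq(objs[a], cx, cy, cz) >
                         distSq(objs[b], cx, cy, cz);
              });
    opaque.insert(opaque.end(), blended.begin(), blended.end());
    return opaque;
}
```

For the ship-in-water case above: the opaque ship goes first, then the water surface polygons from far to near. Per-object sorting like this still breaks down for interpenetrating or self-overlapping transparent geometry, which is exactly the headache being discussed.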
This problem is not as nasty as shadows, per-pixel lighting, realistic fog, explosions, etc.
Still, I prefer having the hardware handle it as long as it is fast enough and doesn’t double the price of the hardware.
I think we are now in a position where transparency sorting is becoming a hardware issue (e.g. NVIDIA's methods on their GeForce3 cards, or the Kyro cards).
Therefore transparency should have a hardware/software abstraction just like everything else. If hardware vendors want to have two Z-buffers (or whatever), let them do it, but the abstraction needs to be in the API first so we don't end up in an EXT_mess.
NVIDIA's order-independent transparency demo is just a variation of this, though; remember, both are multi-pass. The white paper by Cass explains this quite well, and it references research papers on the subject. So it will still eat performance, probably more than sorting everything yourself for simple scenes.
As far as I understood, all the hardware algorithms cut performance in half. Order-independent transparency is not a bad idea, but for now it's better to keep sorting in software. Halving the FPS is not a price I'm going to pay for that stuff.