Transparency

We currently do accurate transparency using BSP. Is there any simpler way to do it with shaders?

Thanks,

Alberto

You could look into order-independent transparency; you will find things like:

http://developer.download.nvidia.com/SDK/10/opengl/src/dual_depth_peeling/doc/DualDepthPeeling.pdf

Thanks, is this what most recent 3D graphics programs use for transparency? Do you know any famous 3D graphics program using this approach? We would like to see it in action.

Alberto

Sorry, I don’t know. I have the impression that this kind of approach is not yet common, and it may still be expensive.

Only recently I learned about the “weighted average” fast approximation; I find it a neat trick if perfect results are not needed:
http://www.slideshare.net/acbess/order-independent-transparency-presentation
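In a nutshell, the trick is to accumulate order-independent sums per pixel and composite them at the end. Here is a minimal CPU-side sketch of the compositing math as I understand it from the NVIDIA material; on the GPU the sums would be accumulated with additive blending into a float render target, and the names here are mine, not from the slides:

```cpp
// CPU-side sketch of weighted-average OIT compositing.
// Order of the fragments does not matter, which is the whole point.
#include <cmath>
#include <cstdio>
#include <vector>

struct Frag { float r, g, b, a; };  // one transparent fragment at a pixel

// Composite n unordered fragments over an opaque background color.
Frag weightedAverage(const std::vector<Frag>& frags, Frag bg)
{
    float sumR = 0, sumG = 0, sumB = 0, sumA = 0;
    for (const Frag& f : frags) {            // order-independent sums
        sumR += f.r * f.a;
        sumG += f.g * f.a;
        sumB += f.b * f.a;
        sumA += f.a;
    }
    float n = float(frags.size());
    if (n == 0 || sumA == 0) return bg;
    float avgA = sumA / n;                   // average alpha
    float T = std::pow(1.0f - avgA, n);      // approximate total transmittance
    return { sumR / sumA * (1 - T) + bg.r * T,
             sumG / sumA * (1 - T) + bg.g * T,
             sumB / sumA * (1 - T) + bg.b * T,
             1 - T };
}

int main()
{
    std::vector<Frag> frags = { {1,0,0,0.5f}, {0,1,0,0.5f} }; // red + green layers
    Frag out = weightedAverage(frags, {0,0,0,1});             // black background
    std::printf("%.3f %.3f %.3f\n", out.r, out.g, out.b);     // 0.375 0.375 0.000
}
```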

Very nice and clear, thanks a lot ZbuffeR!

I recently had a go at weighted average transparency. My conclusion was that it works… but it is not without problems. Frankly, it turned out to be a waste of time. Dual depth peeling works, but it’s really slow unless you have super-high-end hardware, and how can you know in advance how many passes an arbitrary scene might need?

To be honest, if you already have your data laid out in a binary space partitioning tree, go with that…

dukey, what sort of problems did you have with WAT?

Well, for starters, if you have two alpha values over 0.5, the result is wrong. It blends them together and you totally lose the sense of one object being in front of another. The more objects you have like this, the worse it gets.

I had other problems in my project where I was blending shadows over reflections. If the reflection opacity value was zero or close to zero, you could somehow still see the reflection in the shadow. Zero opacity screws up the maths somewhat.

Then there’s the fact that, in the NVIDIA sample code at least, the scene must have a constant clear colour, i.e. you couldn’t blend over a scene with a non-uniform colour. I did figure out a method I was going to use to solve this, but since the above didn’t work out so well, I just scrapped the whole thing.

I think the nail in the coffin for me was that the last generation of ATI cards had shocking support for floating-point blending. RGB float 16 kind of worked, although every poly had a crack; RGB float 32 didn’t work at all with blending.
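To make the first problem concrete, here is a hypothetical two-layer case (red at alpha 0.8 in front of green at alpha 0.8, over black). Correct ordered blending keeps the front layer dominant; the weighted average is symmetric in the layers, so you get the same muddy mix whichever one is in front:

```cpp
// Hypothetical example: red (a=0.8) in FRONT of green (a=0.8), black background.
#include <cmath>
#include <cstdio>

// Standard "over" blend with straight alpha, applied back to front.
static void over(const float srcC[2], float srcA, float dst[2])
{
    for (int i = 0; i < 2; ++i)
        dst[i] = srcC[i] * srcA + dst[i] * (1 - srcA);
}

int main()
{
    float px[2]    = {0, 0};             // (r, g) of the black background
    float green[2] = {0, 1}, red[2] = {1, 0};
    over(green, 0.8f, px);               // green first (it's behind)
    over(red,   0.8f, px);               // red second (it's in front)
    std::printf("ordered:          r=%.2f g=%.2f\n", px[0], px[1]); // 0.80 0.16

    // Weighted average: swapping the layers changes nothing.
    float sumA = 0.8f + 0.8f;
    float T = std::pow(1 - sumA / 2, 2.0f);   // (1 - avg alpha)^n
    float w = (0.8f / sumA) * (1 - T);        // identical for r and g
    std::printf("weighted average: r=%.2f g=%.2f\n", w, w);         // 0.48 0.48
}
```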

So a failed experiment.

My conclusion was: BSP is where it’s at. Pity most model formats aren’t actually BSP trees. Maybe I should have chosen Quake BSP trees as the model format for my own project!

Besides multi-pass depth peeling approaches, there have also been some recent papers/presentations doing A-buffers on the GPU. Essentially single-pass OIT (with some constraints). For instance, there was a cool presentation last year at SIGGRAPH doing smoke/clouds/translucent volume shadows (google Adaptive Volumetric Shadow Maps). It stores a few samples per fragment to represent a piecewise-linear visibility function. I think ATI’s OIT demo a year or so ago did an A-buffer too (google ATI OIT to get right to it). That said, I personally haven’t implemented an A-buffer on the GPU yet. Let me know how it goes! :wink:
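I haven’t done it on the GPU, but the resolve step is conceptually simple. As a rough illustration, it amounts to something like this CPU-side sketch (on the GPU the fragment list is typically a per-pixel linked list built with an atomic counter; the types and names here are just illustrative, not from any particular demo):

```cpp
// CPU-side sketch of an A-buffer resolve: each pixel keeps a list of
// (depth, color, alpha) fragments; at resolve time they are sorted by
// depth and composited front to back with the "under" operator.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Frag { float depth, r, g, b, a; };

// Resolve one pixel's fragment list over an opaque background color.
void resolvePixel(std::vector<Frag>& frags, const float bg[3], float out[3])
{
    std::sort(frags.begin(), frags.end(),
              [](const Frag& x, const Frag& y) { return x.depth < y.depth; });

    float color[3] = {0, 0, 0};
    float transmittance = 1.0f;              // how much of what's behind shows
    for (const Frag& f : frags) {            // front to back
        color[0] += transmittance * f.a * f.r;
        color[1] += transmittance * f.a * f.g;
        color[2] += transmittance * f.a * f.b;
        transmittance *= (1.0f - f.a);
    }
    for (int i = 0; i < 3; ++i)
        out[i] = color[i] + transmittance * bg[i];
}

int main()
{
    std::vector<Frag> frags = { {2.0f, 0,1,0, 0.8f},   // green, farther
                                {1.0f, 1,0,0, 0.8f} }; // red, nearer
    float bg[3] = {0, 0, 0}, out[3];
    resolvePixel(frags, bg, out);
    std::printf("%.2f %.2f %.2f\n", out[0], out[1], out[2]); // 0.80 0.16 0.00
}
```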

But seriously, it’s worth giving serious thought to how translucency occurs in your world and how it can be constrained. With just a few assumptions/limits, you may find you don’t need some super-fancy, sub-pixel-sorted, blending-accurate technique that handles intersecting triangles between or within objects, or arbitrary volumetric transparency with full refraction, internal reflections, subsurface scattering, and cross-translucent-volume global illumination interactions. Ray tracing, anyone? It may be cheaper, cost/benefit-wise, to resolve certain issues using other approaches.

Besides intersection permutations (which you can’t preprocess away), other things to consider: are your translucent objects all effectively 2D (flat sheets, like windows, canopies, etc.)? If not, are they convex? How are you modeling transmission through the medium (e.g. refraction, color shifting, scattering, etc.)? If objects intersect, do the component triangles of those objects intersect? Do you model partial reflection (e.g. Fresnel reflectance)?

Thanks a lot for your valuable opinions, I really appreciate them.

Even BSP has its own drawbacks. You need to clean models of invalid triangles, redo the pre-processing each time the model changes, and decide how many splitting-plane candidates to try in order to balance pre-processing time against FPS.

Every few years I ask about transparency on this forum, and the answer is always “it depends…”. Transparency has to be the most complex thing on earth to solve completely :wink:

@Dukey: regarding dual depth peeling, why can’t you give the user a pass-count option and let him/her decide how accurate the transparency will be?
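Conceptually the idea is just this CPU-side sketch (names are made up): each pass peels the nearest remaining layer, so a user-chosen numPasses simply caps how many layers per pixel are resolved correctly:

```cpp
// CPU-side sketch of single-direction depth peeling with a user-chosen
// pass count. On the GPU, each "pass" is a scene render whose depth test
// rejects fragments at or in front of the previous pass's depth texture.
#include <cstdio>
#include <limits>
#include <vector>

struct Frag { float depth, r, g, b, a; };

void depthPeel(const std::vector<Frag>& frags, int numPasses, float out[3])
{
    float color[3] = {0, 0, 0};
    float transmittance = 1.0f;
    float lastDepth = -std::numeric_limits<float>::infinity();

    for (int pass = 0; pass < numPasses; ++pass) {
        // Find the nearest fragment strictly behind the last peeled depth.
        const Frag* nearest = nullptr;
        for (const Frag& f : frags)
            if (f.depth > lastDepth && (!nearest || f.depth < nearest->depth))
                nearest = &f;
        if (!nearest) break;                 // no layers left to peel

        color[0] += transmittance * nearest->a * nearest->r;
        color[1] += transmittance * nearest->a * nearest->g;
        color[2] += transmittance * nearest->a * nearest->b;
        transmittance *= (1.0f - nearest->a);
        lastDepth = nearest->depth;
    }
    for (int i = 0; i < 3; ++i)
        out[i] = color[i];                   // background omitted for brevity
}

int main()
{
    std::vector<Frag> frags = { {1, 1,0,0, 0.5f}, {2, 0,1,0, 0.5f},
                                {3, 0,0,1, 0.5f} };
    float out[3];
    depthPeel(frags, 2, out);   // only the two nearest layers get peeled
    std::printf("%.3f %.3f %.3f\n", out[0], out[1], out[2]); // 0.500 0.250 0.000
}
```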

Pity most model formats aren’t actually BSP trees

Why aren’t you doing BSP at the triangle level?

Transparency cannot work with just a depth buffer when order is important, so you face the same problems that existed with opaque geometry before the depth buffer was widely available: the painter’s algorithm, precise BSP, pre-sorted data, or, more recently, emulating several depth buffers with more involved shaders and passes.
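For reference, the precise-BSP option boils down to the classic painter-style traversal below: draw the subtree on the far side of each splitting plane first, then the polygons on the plane, then the near subtree, which yields strict back-to-front order from any eye position. This is a sketch; the Polygon contents and drawPolygon() are placeholders:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Polygon { /* vertices, material, ... */ };

struct BspNode {
    Vec3 normal;                 // splitting plane: dot(normal, p) = d
    float d = 0;
    std::vector<Polygon> polys;  // polygons lying on the plane
    BspNode* front = nullptr;
    BspNode* back  = nullptr;
};

void drawPolygon(const Polygon&) { /* submit to the renderer */ }

void drawBackToFront(const BspNode* node, const Vec3& eye)
{
    if (!node) return;
    // Which side of the plane is the eye on?
    float side = node->normal.x * eye.x + node->normal.y * eye.y
               + node->normal.z * eye.z - node->d;
    const BspNode* nearSide = (side >= 0) ? node->front : node->back;
    const BspNode* farSide  = (side >= 0) ? node->back  : node->front;

    drawBackToFront(farSide, eye);           // everything behind the plane
    for (const Polygon& p : node->polys)     // then the plane itself
        drawPolygon(p);
    drawBackToFront(nearSide, eye);          // then everything in front
}

int main()
{
    BspNode root;                 // a one-node tree, just to exercise the code
    root.normal = {0, 0, 1};
    root.polys.push_back(Polygon{});
    drawBackToFront(&root, Vec3{0, 0, 5});
}
```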

It definitely exposes the limitations of the standard pipeline. Here’s one perspective on why translucency is considerably more difficult than opaque rendering, especially in the general case.

The standard pipeline is built around smashing the entire world onto the framebuffer, keeping what’s closest, and approximating the shading at each pixel (or sample) so that all pixels can be processed independently. In ray-tracing terms, this means looking only at primary rays – that is, rays propagating back from the eye to the first thing light “hits” (of course, this is the reverse of the direction light actually propagates).

For basic translucency, that’s not enough. And for general transparency, that’s not nearly enough.

For translucency, you have to consider secondary rays, tertiary rays, etc. (2nd-, 3rd-, 4th-level light interactions, and so on) away from the eye – that is, reflected or transmitted rays spawned where the primary rays hit. And you may need a number of such rays to approximate the shading sufficiently at that primary-ray hit point (to compute what you can see through that surface). The rasterization pipeline isn’t optimized for this – at all. Ray-tracing renderers, OTOH, are.

There’s lots of coherence in computing primary rays with local shading (massive SIMD parallelism = perfect for the GPU), but as soon as you start transmitting and reflecting child rays around, you quickly lose that coherence.

So of course, in the uber-simple real-time translucency case, you may just ignore the possible change in ray direction/speed and the ray splitting (reflected/refracted) that can come with light passing through a translucent surface boundary, and try to model only a color change due to absorption. But then you still have to smash everything onto the screen in the proper order, because the default pipeline doesn’t keep track of more than one surface at each pixel (fragment) – though you can write your own pipeline to do that.
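For that absorption-only model, the usual approximation is Beer–Lambert attenuation along the ray. A minimal sketch, with made-up coefficients:

```cpp
// Beer-Lambert absorption: light traveling a distance t through a medium
// with per-channel absorption coefficient sigma is attenuated by
// exp(-sigma * t). The sigma and t values here are just example numbers.
#include <cmath>
#include <cstdio>

int main()
{
    float sigma[3]      = {0.1f, 0.4f, 0.9f}; // absorbs blue most: looks reddish
    float background[3] = {1, 1, 1};          // white light behind the medium
    float t = 2.0f;                           // path length through the medium

    for (int i = 0; i < 3; ++i) {
        float transmittance = std::exp(-sigma[i] * t);
        std::printf("%.3f ", background[i] * transmittance);
    }
    std::printf("\n");   // prints roughly 0.819 0.449 0.165
}
```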

We all know what translucency looks like in the general case, but just as a reminder, so we don’t lose track of reality while talking about cheap real-time rendering hacks: :cool: