What I want is to be able to set the alpha of the model using glColor4f(r, g, b, a), have it be transparent, and have it depth sorted according to the camera's position. I don't get correct results unless I sort every polygon individually, every time the camera is moved. I am using OpenGL 1.4 with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I have been searching and haven't found much. I am wondering: is there a specific blending configuration or depth test configuration (if it is on, you can't see through transparent objects; if it is off, it doesn't blend properly, it just draws in submission order) that will allow me to simply declare the translucency and have it be translucent (similar to a game engine), without having to manually sort all the polygons? In short, I want each of my translucent polygons to be alpha blended based on its alpha, regardless of which one is on top.
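For reference, a minimal sketch of the manual approach being described (re-sorting translucent polygons back-to-front by distance to the camera each frame, then drawing them in that order). The `Triangle` struct and function names here are hypothetical, just to show the sort itself:

```c
#include <stdlib.h>

/* Hypothetical per-triangle record: only the centroid matters for
   the sort key; vertex/color data would live alongside it. */
typedef struct {
    float cx, cy, cz;   /* centroid, used as the sort key */
    /* ... vertex data, color, alpha ... */
} Triangle;

/* Camera position for the current frame (globals because C's qsort
   takes no user-context pointer). */
static float g_camX, g_camY, g_camZ;

static float dist_sq(const Triangle *t)
{
    float dx = t->cx - g_camX, dy = t->cy - g_camY, dz = t->cz - g_camZ;
    return dx * dx + dy * dy + dz * dz;
}

/* Farthest-first comparator, so translucent triangles draw back-to-front. */
static int cmp_back_to_front(const void *a, const void *b)
{
    float da = dist_sq((const Triangle *)a);
    float db = dist_sq((const Triangle *)b);
    return (da < db) - (da > db);
}

void sort_translucent(Triangle *tris, size_t count,
                      float camX, float camY, float camZ)
{
    g_camX = camX; g_camY = camY; g_camZ = camZ;
    qsort(tris, count, sizeof(Triangle), cmp_back_to_front);
    /* Then: draw opaque geometry first with depth writes on,
       then these triangles in order with glDepthMask(GL_FALSE). */
}
```

This is exactly the per-frame work the question is trying to avoid, and it still breaks down for intersecting or mutually overlapping polygons.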
…is there a specific blending configuration or depth test configuration … that will allow me to simply declare the translucency and have it be translucent … without having to manually sort all the polygons?
One that I’m aware of (screen-door transparency). See below.
More on the general transparency issue here and why it isn’t just as simple as rendering opaque objects:
That said, there are some so-called order-independent transparency (OIT) techniques where you don't have to sort; the sort is handled through other means or is hand-waved altogether.
Screen door transparency is one way that hand-waves it:
giving you transparency, but low-quality transparency. This is built-in GL behavior (GL_SAMPLE_ALPHA_TO_COVERAGE, core since OpenGL 1.3, so no shaders required). Just enable it when rendering transparent objects and that's it. But again, it's low-quality transparency, limited by the number of samples per pixel you've allocated.
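A minimal sketch of the state setup, assuming a context that was created with multisample buffers (the function name and the example color are illustrative only):

```c
#include <GL/gl.h>

/* Requires a window/context created with multisample buffers
   (e.g. GLUT_MULTISAMPLE or the equivalent WGL/GLX attributes). */
void draw_translucent_screen_door(void)
{
    glEnable(GL_MULTISAMPLE);
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE); /* alpha -> coverage mask   */
    glDisable(GL_BLEND);                   /* no blending needed       */
    /* Depth test can stay on; covered samples are opaque, so no
       back-to-front sorting is required. */

    glColor4f(1.0f, 0.0f, 0.0f, 0.5f);     /* alpha controls coverage  */
    /* ... draw the translucent model as usual ... */

    glDisable(GL_SAMPLE_ALPHA_TO_COVERAGE);
}
```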
However, pretty much any other technique requires some algorithm work on your part (deep framebuffers, depth peeling, deferred alpha blending, etc.).
What exactly do you mean when you say 'low quality transparency'? Also, I have been looking into the algorithms, but they all seem to require shaders, something my OpenGL 1.4 cannot handle.
What exactly do you mean when you say 'low quality transparency'?
He means that instead of actually being transparent, the different objects will effectively have a “screen door effect” on them. This is often useful for modeling tools, but isn’t particularly useful if you’re trying to generate actual images.
Also, I have been looking into the algorithms, but they all seem to require shaders, something my OpenGL 1.4 cannot handle.
Then there’s not much you can do.
Build a BSP tree out of your polygons. They will then come out depth-sorted regardless of your position in the world.
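A sketch of the traversal half of that idea (building the tree, i.e. choosing splitting planes and splitting straddling polygons, is the larger job and is omitted). The `BspNode` layout is hypothetical; the point is the recursion order: visit the half-space *not* containing the camera first, then the node's own polygons, then the camera's half-space:

```c
#include <stddef.h>

/* Toy BSP node: a splitting plane n.x*x + n.y*y + n.z*z = d,
   polygon indices lying on the plane, and two children. */
typedef struct BspNode {
    float nx, ny, nz, d;
    const int *polys;
    size_t npolys;
    struct BspNode *front, *back;
} BspNode;

/* Appends polygon indices to out[] in back-to-front order relative
   to the camera; returns the new count. In a renderer you would draw
   instead of appending. */
size_t bsp_back_to_front(const BspNode *node,
                         float camX, float camY, float camZ,
                         int *out, size_t count)
{
    if (!node) return count;

    float side = node->nx * camX + node->ny * camY + node->nz * camZ - node->d;
    const BspNode *far_side  = (side >= 0.0f) ? node->back  : node->front;
    const BspNode *near_side = (side >= 0.0f) ? node->front : node->back;

    count = bsp_back_to_front(far_side, camX, camY, camZ, out, count);
    for (size_t i = 0; i < node->npolys; i++)
        out[count++] = node->polys[i];   /* this node's polys draw here */
    count = bsp_back_to_front(near_side, camX, camY, camZ, out, count);
    return count;
}
```

Because the ordering falls out of which side of each plane the camera is on, no per-frame sort is needed; the cost moved into building (and maintaining) the tree, which only works well for static geometry.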
Well this technique definitely doesn’t require shaders.
Re low quality: it uses a subsample dither mask to determine which samples to set in a multisampled (MSAA) framebuffer. When the multisampled scene is downsampled, it looks like you can see through the object, but you often end up with a sprinkled or crosshatched look reminiscent of a "screen door", rather than perfectly uniform transparency.
Sometimes it can look pretty good though:
You'll just have to try it. Play with the alpha values and capture some screenshots. Compare to your results with GL_BLEND enabled and SAMPLE_ALPHA_TO_COVERAGE disabled. Also, try a window with a higher number of multisamples.