I need some help. I have two cuboids that I want to render with transparency. The cuboids are constructed from vertices:

glBegin(GL_QUADS); // Draw a quad
glVertex3f(nXPoint, nYHeight + nYPoint, -fDist);
glVertex3f(nWidth + nXPoint, nYHeight + nYPoint, -fDist);
// And so on

One cuboid is grey and the other is green.
I’ve tried
but this has no effect.


Yes, you need to specify an alpha value for each cuboid. Use glColor4f instead of glColor3f, and choose an alpha of less than 1.
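The alpha value by itself does nothing until blending is also enabled with a blend function; that is the usual reason glColor4f appears to have "no effect". A minimal sketch of the state setup, with made-up colour and alpha values (the actual quad-drawing code from the original post is elided):

```c
// Enable blending; without this, the alpha from glColor4f is ignored.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glColor4f(0.5f, 0.5f, 0.5f, 0.5f); // grey cuboid at 50% opacity (values assumed)
// ... draw the grey cuboid's quads ...

glColor4f(0.0f, 1.0f, 0.0f, 0.5f); // green cuboid at 50% opacity (values assumed)
// ... draw the green cuboid's quads ...
```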

While we are on the topic, why should we always draw transparent objects from back to front? Does this have anything to do with depth testing at all? Just wondering…

Actually it has nothing to do with depth testing… it has to do with how blending is calculated and how it is affected by what is behind it. As NeHe said, the light passes through the farthest-back one and moves through each successive one until the first one. I don’t know exactly what it all means, but check out NeHe’s site, go to the OpenGL tutorials and look at the 9th one, on blending; at the bottom I believe he talks about that.

Hmm, it’s quite simple: suppose that a solid cube is behind a transparent one.
If you draw the transparent one first, it will be blended with the black background, so the blending effect will not occur. Then, with depth testing on, the solid cube behind it won’t be drawn at all, because it fails the depth test.
If you draw the solid one first, the result is correct.
So to draw a scene with blended polygons: first draw the solid polygons, then sort the polygons with alpha < 1 (e.g. with a radix sort) and draw them back to front.

Why can’t I just draw the scene and have OpenGL figure out afterwards how light affects what, rather than doing it as I draw? Is that really slow? That’s my guess… Like, you give OpenGL an object, it maps it to the current viewport, and then if you give it a transparent object that takes up space in the viewport already occupied by object1, it takes the colour from the viewport and modifies it accordingly? This is a wild guess, but is that what it does?

Another thing: since shadows are supposed to be written by the programmer…

Let’s say object1’s shadow completely covers tiny object2, but I am not writing the code to do shadows. Would object2 then be illuminated as much as if object1 were not there?

OpenGL is an immediate-mode API, not a scene-graph API! OpenGL does not know about objects, just about geometric primitives and fragments.
Blending occurs at the fragment level (usually one pixel). Blending is a read-modify-write operation combining the data from the pipeline with the data in the colour buffer.
So yes, it uses the information already in the framebuffer for the blending. The difficulty is that the blending result therefore depends on what is already on the screen. Some effects need depth sorting to achieve the right look.

On shadows and lighting: OpenGL’s lighting calculation is per vertex and doesn’t know about anything, such as other “objects”, between a vertex and the light.
That means the lighting calculations proceed undisturbed by anything else in the scene, so yes, object2 is lit exactly as if object1 were not there.

You can enable attenuation effects, though, so that the tiny object which is farther away from the light gets lit less. But that’s no substitute for shadows.