sampling the depth buffer

I am currently working on a depth-of-field post-processing effect, which requires sampling the depth buffer to get the depth of each fragment in the framebuffer. Using GLSL on a GeForce 6800, I know I can create a depth texture and render into it, and then read it back by sampling it in a GLSL shader (calling texture2D and taking the red component).
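For reference, here's a minimal sketch of that kind of sampling in a fragment shader. The uniform names and the linearization step are my own illustration, not anything from a real implementation; it assumes a standard perspective projection and a depth texture bound to u_depthTex:

```glsl
// Fragment shader sketch: read scene depth for a depth-of-field effect.
// With a GL_DEPTH_COMPONENT texture, the depth value shows up in the
// red channel when sampled, so .r is what we want.
uniform sampler2D u_depthTex;  // depth texture (names are illustrative)
uniform sampler2D u_colorTex;  // rendered scene color
uniform float u_near;          // camera near plane
uniform float u_far;           // camera far plane

varying vec2 v_texCoord;

float linearizeDepth(float z)
{
    // Convert window-space depth in [0,1] back to an eye-space distance,
    // assuming a standard perspective projection.
    return (u_near * u_far) / (u_far - z * (u_far - u_near));
}

void main()
{
    float depth = texture2D(u_depthTex, v_texCoord).r;
    float eyeDepth = linearizeDepth(depth);
    // ...use eyeDepth to drive the circle-of-confusion / blur radius...
    gl_FragColor = texture2D(u_colorTex, v_texCoord);
}
```

One gotcha: if GL_TEXTURE_COMPARE_MODE is set to GL_COMPARE_R_TO_TEXTURE on that texture, you get shadow-comparison results instead of raw depth, so leave it at GL_NONE for this kind of use.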
However, I have a friend working in HLSL/Direct3D 9. He can render into a depth texture he created, but he cannot sample it from HLSL (it always returns 1), although he can still perform shadow-test comparisons using tex2DProj. Since this seems to be one of the new DX10 features, I was wondering how come I can do it in GLSL? Does OpenGL somehow emulate this feature? I would appreciate any help on that! Cheers!

I doubt it's a new feature in DX10, since you've been able to sample depth textures in GL for years.

So it's either:

A/ perhaps he's not using the correct terms in HLSL
B/ it wasn't exposed in D3D until D3D10

As a D3D programmer by day, I can tell you that this functionality has been around in hardware for a few years now, and has been exposed in OpenGL for just as long. However, it's never been exposed in Direct3D. Because of trends like these in D3D and the new direction of OpenGL, I'm actually planning on steering our studio away from Direct3D and into OpenGL. I think it's absolutely crazy that D3D doesn't expose this functionality. There are plenty of other useful graphics-card features not exposed in Direct3D 9 either, so don't be surprised if you come across more things like this in your career. Given past trends, I imagine D3D10 will develop similar gaps over the next few years as hardware rapidly advances.

Kevin B

It’s quite funny, as I’ve personally been thinking of doing the opposite: switching from OGL to D3D. I’m tired of waiting for extensions/drivers able to create an FBO with antialiasing, and 1001 other small things. To me, OGL has become a monster that is soon going to collapse under its own weight (hundreds of extensions). OGL 3.0 might be a step in the right direction, but given the mess that happened in the past, I’m not confident about the future; plus we’ll have to wait years to get stable, mature OGL 3.0 drivers, as usual… Sorry for being a little off-topic.


Thanks for your replies. My friend finally resorted to rendering depth values into a separate render target, which is a real shame(!). I personally think OGL 3.0 will be great, and we will definitely start seeing more games programmed with it.
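For anyone else stuck on the D3D9 side, that workaround usually means packing the float depth into the four 8-bit channels of an RGBA8 target so it survives the round trip. A sketch of the standard pack/unpack trick, written here in GLSL for consistency (the HLSL version is a direct transliteration; this is a generic technique, not my friend's actual code):

```glsl
// Pack a depth value in [0,1) into four 8-bit channels, and back.
// The constants are powers of 255, matching an RGBA8 render target.
vec4 packDepth(float depth)
{
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * depth;
    enc = fract(enc);
    // Subtract each finer channel's contribution from the coarser one
    // so the channels don't double-count precision.
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

float unpackDepth(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}
```

You write packDepth(depth) to the color target in one pass and unpackDepth(tex2D(...)) in the post-process pass. It works, but it burns a whole render target and a pass on something the hardware already has sitting in the depth buffer, hence the "real shame."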


D3D can be a real flaming pain in the breech at times, and has on many an occasion given rise to a great many fist shakings and foot stompings for yours truly. Take the need to offset texcoords or vertex positions to get texture mapping to work the way it should. GL got this right from the get-go; I’ve reimplemented a bunch of post-processing stuff and it’s much cleaner in GL (this just so happens to be a current source of fist and foot action for me). I’ve found other examples of things that just seem wrong in D3D after working with GL, like an asymmetric view volume z, and a window/viewport origin in the top left rather than bottom left, which makes sense for 2D, but seems strangely inappropriate in 3D. Anyway, I’d say you have an equal chance of being boned by any API, in one way or another, depending on your perspective and what you’re looking for.

I think with Longs Peak and Mt. Evans around the corner, things are going to look a lot better in the areas where the API has been difficult at times. From what I’ve been reading here in the threads and in the newsletters, they really seem to have things on solid footing now, simplified and streamlined. Me thinks that ultimately I don’t mind a bit of a wait if it means some real API goodness. After all, one can’t rush richness (though one can certainly try ;-)).

Happy Easter,