Has anyone gotten this to work with FBOs? I am not having any luck. Right now there aren't any usage examples, so here is what I have:

glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT32F_NV, texWidth, texHeight);

glTexImage2D(texture_target, 0, GL_DEPTH_COMPONENT32F_NV, texWidth, texHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

//init code
	glDepthRangedNV(0.0, 1.0);

I must be missing something. Do I need to set up the PFD to have a floating-point buffer?

Let me ask you, what do you need exactly?
Why do you use a renderbuffer for depth and a depth texture as a color attachment? As you remember, FBO (till now) supports only RGB and RGBA color attachments, so depth_component is not allowed there anyhow ((

? The depth buffer is used with a renderbuffer. I want to use the FP depth buffer extension. I'm not understanding what you are after. I don't have the depth rendering to a color attachment…

GL_DEPTH_BUFFER_FLOAT_MODE_NV: if this returns 0, what would be the cause? The spec says nothing about what it should return or what the value means. I got it so it doesn't crash, but right now this returns 0.

I am beginning to wonder: do I need to set up an FP pixel format with wglChoosePixelFormatARB() instead of the standard Windows ChoosePixelFormat()? I don't see much, if any, difference in shadows either way.
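In case it helps anyone reading, here is roughly how I am querying it (sketch only; `fb` stands for my framebuffer object from the snippet above):

```c
// Sketch: the flag is framebuffer state, so query it with the FBO bound.
GLint floatMode = 0;
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glGetIntegerv(GL_DEPTH_BUFFER_FLOAT_MODE_NV, &floatMode);
// 0 here would just mean the currently bound depth buffer is fixed-point,
// e.g. the default window depth buffer from ChoosePixelFormat().
```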

Okay, do you want to get a depth texture after FBO rendering? Or do you just need a depth renderbuffer? These are two different cases.

I asked what you need because you have two different use-cases mixed together in your code example, and that is naturally not allowed.

Just tell me what you want, and I'll try to help you.

I think the problem is I don't have an FP depth buffer set up with wglChoosePixelFormat. Do I need to use that instead of ChoosePixelFormat()?

I am using a depth texture that comes from FBO rendering, so I attach the depth texture to the framebuffer. And as far as I knew, you need to attach a renderbuffer to get depth buffering support for depth textures… That isn't the problem; my app doesn't crash anymore, but the IQ isn't any better with the GL_DEPTH_COMPONENT32F_NV internal format. I am assuming it's because of what I have already stated… Thanks for any input.

First of all, NV_depth_buffer_float is supported only on the G80. Unfortunately, I don't have one to test with, but I expect to get one in the near future. This extension only gives you true floating-point behaviour of the window-space depth; do you really need that in your app?

About renderbuffers and textures: there can be only one (c) "Highlander" )) If you want to get the result as a texture, you attach a texture. If you just want to use the buffer during rendering and don't care about its contents afterwards, then you attach a renderbuffer.

Example from the specification: rendering to a color texture using a depth renderbuffer:

(1) Render to 2D texture with a depth buffer
// Given: color_tex - TEXTURE_2D color texture object
//        depth_rb - GL_DEPTH renderbuffer object
//        fb - framebuffer object

// Enable render-to-texture
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
// Set up color_tex and depth_rb for render-to-texture
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
   GL_TEXTURE_2D, color_tex, 0);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
   GL_RENDERBUFFER_EXT, depth_rb);
// Check framebuffer completeness at the end of initialization.
<draw to the texture and renderbuffer>
// Re-enable rendering to the window
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, color_tex);
<draw to the window, reading from the color_tex>

Example from the specification: rendering to a depth texture with no color attachment:

(7) Render to depth texture with no color attachments
// Given: depth_tex - TEXTURE_2D depth texture object
//        fb - framebuffer object
// Enable render-to-texture
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
// Set up depth_tex for render-to-texture
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
   GL_TEXTURE_2D, depth_tex, 0);
// No color buffer to draw to or read from
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
// Check framebuffer completeness at the end of initialization.
<draw something>
// Re-enable rendering to the window
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, depth_tex);
<draw to the window, reading from the depth_tex>
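Adapting example (7) for the float extension might look like this (a sketch only, as I can't test it without the hardware; texture_target, texWidth and texHeight are assumed from your earlier post):

```c
// Hedged sketch: example (7) with a float depth texture, no renderbuffer.
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F_NV,
             texWidth, texHeight, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fb);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depth_tex, 0);
glDrawBuffer(GL_NONE);   // no color attachment
glReadBuffer(GL_NONE);
// then check glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT)
```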

I have my FBO working correctly and it returns complete. I have at most two color attachments and one depth texture in a single FBO, and it works without error. But this is getting off track from my OP.

So what's not working correctly? You can't read your depth texture back as floats?

I am under the assumption that with this extension the precision of the depth texture should be greater, and therefore there should be less shadow acne in shadow mapping? Or is this not the case?

It seems to be so, but I'm not sure.
The 24 fixed-point bits are organized so that precision is distributed uniformly across the [0…1] range, while a float's precision distribution differs from that. The specification also mentions this where it talks about depth offset. But thinking in general terms, it might be so; the precision should be greater.

Actually, as I said, I don't have a brand-new 8800GTX )) so I can't test it to be sure.
Please refer to somebody more experienced with G80 extensions; I'm afraid I can't help you ((

Thanks for the help Jackis. Being the lone wolf when it comes to owning one of these is somewhat annoying when not many have the hardware to comment on it. :frowning:

You’re welcome )
Hope somebody will help you!

BTW Jackis, you were correct about the renderbuffer: I didn't need it. But I was still using the FBO for the depth texture. IIRC someone told me that you needed an RB to get depth… Guess that was wrong. So I made that RB optional when I set up the FBO. :slight_smile: Also, my FPS has gone up without the RB attached. And all works fine now; FP depth works as well. I'm just not seeing much improvement; I guess it only helps for very, very small depth values.

Being the lone wolf when it comes to owning one of these is somewhat annoying when not many have the hardware to comment on it.
That's the price you pay for being the early bird :stuck_out_tongue: Besides, it could be that there are lots of owners, but no one is experiencing the same trouble you are.

P.S. I’ve got a wicked system built in a wish list, but I’m waiting for the end of January or so to grab it (hoping those high prices will head a hair or two south by then).


You say that you don't see much improvement. But do you see any improvement at all? I'm curious whether this float depth buffer makes any real difference for shadow-mapping precision.

As of now, if there is any, it's with small depth values… From what NVIDIA told me, it's only with very, very small numbers that you will see better results. But again, I am running GL_DEPTH_COMPONENT24 for the standard depth comparisons. I think this would be nice for small areas with a large enough shadow map, where you could eliminate shadow-map acne. The only issue is that my shadow mapping covers large outdoor areas, so I can't tell as much as I could with an indoor scene, a small object, and a large shadow map…

As you mentioned, floating-point depth buffers really only increase precision in the area close to zero. One item you may also wish to consider is that extra precision does not necessarily reduce the sort of errors that cause acne. In some cases, it can even make it worse.

Finally, if you need additional precision in the depth buffer, a floating-point depth buffer can be very useful if you use it in a non-standard way. By storing the complement of the normal Z (1-Z), you push all the extra precision near zero out to the far plane, where you typically have your Z-fighting issues. Obviously, you don't want to do this in the fragment shader; you can just change the projection matrix to scale the output Z by -1.


Or invert the depth range, but don't forget the clear depth and the compare function.

glDepthRange(1.0, 0.0);

That should have been the default setting anyway. It works equally well for fixed-point depth buffers, but it is far superior when a float depth buffer is used.
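For reference, a sketch of the full inverted-range setup (assuming the standard fixed-function depth test; the clear value and compare direction have to flip together with the range):

```c
glDepthRange(1.0, 0.0);  /* near plane maps to 1.0, far plane to 0.0 */
glClearDepth(0.0);       /* "far" is now 0.0, so clear to that */
glDepthFunc(GL_GEQUAL);  /* larger window z is now closer */
```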