FBO, not a newbie question

hey everybody,
In my application I use an FBO with several attachments. To be honest, the attachments are quite heavy, as I’m implementing some crazy deferred shading. They are not really optimized, but that’s not the point. Basically I have:
1 depth buffer
1 view-space position buffer (easier computations)
1 normal buffer
1 material buffer
1 texture-coordinate buffer (for procedural textures)
1 unsigned int buffer (some metric)
Everything works fine whenever the resolution is 1 or 2 times my window resolution. But when I go higher (like 4 or 5 times), I simply get a GL_FRAMEBUFFER_UNSUPPORTED_EXT when checking the framebuffer status. I’m not running out of GPU memory, though. Is that a driver issue?
What is the maximum size for the render targets?
thanks in advance
zqueezy

Config:
Windows 7, GTX 480 (1.8GB), OpenGL 4.1, latest glew
Yes, they do have different formats and my shader layout looks like this:


layout( location = 0 ) out vec4 			o_vViewPosition;
layout( location = 1 ) out vec3 			o_vNormal;
layout( location = 2 ) out vec4 			o_vMaterial;
layout( location = 3 ) out vec2 			o_vTexCoord;
layout( location = 4 ) out uint 			o_uiMyMetric;


If it’s not a newbie question, you’re in the wrong subforum here, I guess.

Maybe try glGetIntegerv with GL_MAX_RENDERBUFFER_SIZE?
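i.e. something like this (just a quick sketch):


GLint maxRBSize = 0;
glGetIntegerv( GL_MAX_RENDERBUFFER_SIZE, &maxRBSize );   // GL_MAX_RENDERBUFFER_SIZE_EXT with the EXT entry points
printf( "GL_MAX_RENDERBUFFER_SIZE = %d\n", maxRBSize );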

Hm… that was a good hint, but unfortunately I still get GL_FRAMEBUFFER_UNSUPPORTED_EXT, which is especially puzzling since my GL_MAX_RENDERBUFFER_SIZE is 16384!
All textures have the same size but different formats. Maybe the driver cannot cope with that for certain dimensions?

Theoretically you should be able to use arbitrary combinations of size and format without generating any errors.

Are you sure your renderbuffer size is less than or equal to 16384 * 16384? What’s the maximum size that works?

Thanks for trying to help me.
Yes, I check the size. All render targets (and also the FBO) are recreated when the application is resized.
My G-buffer is 3 times my window size.
When my app hits 1415x908 (so the G-buffer should become 4245x2724), it crashes. The last valid size (which worked perfectly) was an app size of 1352x881 => G-buffer size 4056x2643. Maybe some sizes in between work as well…
When I force my app to be 4245x2724, the window gets cropped by GLUT. But with any window size and that G-buffer size I get the same “unsupported” message. I tried this texture size with the “simple framebuffer object” demo from the NVIDIA SDK, and there it works. They, however, use only one texture, and I’m too lazy to change all their attachments and formats to mine. The fact that my app works below a certain “threshold” lets me think that my creation code is alright.
Unfortunately my codebase is quite complex, so I cannot post a simple test case (yet?).
Thanks anyway… maybe I’ll try a different driver version (currently 280.47).

Specifically, what formats are you using? What are the internal formats of the textures?

Also, what happens if you attach 5 GL_RGBA8 buffers?
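i.e., same size, but plain 8-bit color (hypothetical test code; w/h = your G-buffer size):


glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );   // for each of the 5 color attachments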

1 depth texture (I can switch between 24 and 32 bit)
1 (view-space) position texture: 4-component float (RGBA, GL_FLOAT)
1 normal texture: 3-component float (RGB, GL_FLOAT)
1 material texture: 4-component float (RGBA, GL_FLOAT)
1 tex coord texture: 2-component float (GL_RG, GL_FLOAT)
1 “metric” texture: 1 unsigned int (GL_R32UI, GL_RED_INTEGER, GL_UNSIGNED_INT)
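They’re created more or less like this (not a copy-paste from my code, just a sketch of the formats I mean, assuming 32-bit float internal formats; w/h is the G-buffer size):


glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL );   // or GL_DEPTH_COMPONENT32
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, NULL );                               // view-space position
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB32F,  w, h, 0, GL_RGB,  GL_FLOAT, NULL );                               // normals
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA32F, w, h, 0, GL_RGBA, GL_FLOAT, NULL );                               // material
glTexImage2D( GL_TEXTURE_2D, 0, GL_RG32F,   w, h, 0, GL_RG,   GL_FLOAT, NULL );                               // tex coords
glTexImage2D( GL_TEXTURE_2D, 0, GL_R32UI,   w, h, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, NULL );                 // metric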

I’m not sure if I’ll have the time to post a small-scale example… but I’ll try (though, as I said, for lots of window sizes it works without any error).

Your 4245x2724 is about 11.6 Mtexels per texture. From the above, that sounds like around 60 bytes/texel, assuming non-multisampled (1xAA) textures. So that’s around 660 MB. That’s pretty huge.
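Rough math, assuming 32-bit components (and the RGB32F normal texture may even get padded to 16 bytes/texel by the driver):


depth 4 + position 16 + normal 12 + material 16 + texcoord 8 + metric 4 = 60 bytes/texel
4245 x 2724 = 11,563,380 texels, times 60 bytes ≈ 694 MB (~660 MiB) just for the G-buffer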

When you take total GPU mem less your other needs (texture, system FB, etc.), are you sure you have 660 MB left?

One thing you might consider, if only for testing, is using smaller texture formats (fewer components, smaller components) to see if you’re hitting a memory limit: RGB10_A2 for the normals, half-float textures instead of full-float, trimming unused components, etc.
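For instance (hypothetical replacements, just to test the memory theory; w/h = G-buffer size):


glTexImage2D( GL_TEXTURE_2D, 0, GL_RGB10_A2, w, h, 0, GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, NULL );   // normals: 4 bytes/texel instead of 12-16
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA16F,  w, h, 0, GL_RGBA, GL_HALF_FLOAT, NULL );                    // position/material: 8 bytes/texel instead of 16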

Also, since you’re using textures, it seems to me the key limit here is the max texture size, not the max renderbuffer size. But GL_MAX_TEXTURE_SIZE is probably still huge on your hardware.

Also, use NVX_gpu_memory_info (thread link) to see how much GPU memory you actually do have free, if any.
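Something like this (sketch; glew exposes the NVX tokens, and the values are reported in KiB):


GLint maxTexSize = 0, totalKB = 0, availKB = 0;
glGetIntegerv( GL_MAX_TEXTURE_SIZE, &maxTexSize );
glGetIntegerv( GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalKB );     // needs NVX_gpu_memory_info
glGetIntegerv( GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availKB );
printf( "max texture size %d, total %d KiB, currently available %d KiB\n", maxTexSize, totalKB, availKB );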

Yes, I guess there is some memory problem that is hard to pin down. I create the textures before I bind them all, and they are created without error.
The problem (GL_FRAMEBUFFER_UNSUPPORTED_EXT) occurs here:


glFramebufferTexture2DEXT( ... )   // for all attached textures
glDrawBuffers( ... )
glCheckFramebufferStatusEXT( )     // <-- returns GL_FRAMEBUFFER_UNSUPPORTED_EXT here

In between I call glGetError() several times, and it never reports an error.

A fact that maybe confirms this memory theory:
if I let my program continue after the failed glCheckFramebufferStatusEXT,
it looks a little bit like this:


... // stuff
glGetError()  // ... nothing wrong!
glBegin()
... // quad for displaying my render target
glEnd()
glGetError()  // <-- GL_OUT_OF_MEMORY

I’ve never had an out-of-memory error from a glBegin/glEnd, especially since I’m basically drawing only 4 vertices. :)

I’m trying to use GLExpert to confirm this problem.
Unfortunately I get “NVAPI_INSTRUMENTATION_DISABLED => OpenGL Expert is supported, but driver instrumentation is currently disabled”, whatever that means. I’m running Windows 7.

BTW: I wrote a mini-app with the same render targets that so far runs without crashing, but it doesn’t use any other GPU memory for shaders and meshes. I’m still not sure whether the error is on my side, but I’m really starting to doubt it.