FBO spec posted

I just cruised through the document. I will have to re-read.

There are errors in the examples.
Look at example #4, where the for loop starts: it uses color_tex_array[N] instead of color_tex_array[i].

The examples following it also have the same error.
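For the record, the corrected loop would look something like this. The array name and count are made up, and a recording stub stands in for glFramebufferTexture2DEXT so the snippet runs standalone:

```c
typedef unsigned int GLenum;
typedef unsigned int GLuint;
typedef int GLint;

/* Token values from glext.h, reproduced so the sketch is self-contained. */
#define GL_FRAMEBUFFER_EXT        0x8D40
#define GL_TEXTURE_2D             0x0DE1
#define GL_COLOR_ATTACHMENT0_EXT  0x8CE0

#define NUM_ATTACHMENTS 4  /* made-up count, standing in for the spec's N */

/* Recording stub in place of glFramebufferTexture2DEXT, so we can check
 * which texture ended up on which attachment point. */
static GLuint attached[NUM_ATTACHMENTS];

static void FramebufferTexture2D_stub(GLenum target, GLenum attachment,
                                      GLenum textarget, GLuint texture,
                                      GLint level)
{
    (void)target; (void)textarget; (void)level;
    attached[attachment - GL_COLOR_ATTACHMENT0_EXT] = texture;
}

static GLuint color_tex_array[NUM_ATTACHMENTS] = { 11, 12, 13, 14 };

void attach_all_color_textures(void)
{
    for (int i = 0; i < NUM_ATTACHMENTS; i++) {
        /* The spec's example indexes with N here; it must be i. */
        FramebufferTexture2D_stub(GL_FRAMEBUFFER_EXT,
                                  GL_COLOR_ATTACHMENT0_EXT + (GLenum)i,
                                  GL_TEXTURE_2D,
                                  color_tex_array[i],
                                  0);
    }
}
```

With [N] instead of [i], every attachment point would get the same (out-of-bounds) element, which is presumably not what the spec's example intends.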

I’m guessing you guys are not experimenting with a sample implementation while you write these specs, right? It would be good, because as a bonus you could release a software renderer for us.

An important question for me: when we use glFramebufferTexture2DEXT (or the others), are the previous contents of the buffer considered undefined, or are they preserved?
What about the other buffers, like depth and stencil? Preserved or not?

Another question:
Do you think it would be an interesting feature to be able to render to 2 faces of a cubemap at the same time?

Originally posted by V-man:
An important question for me: when we use glFramebufferTexture2DEXT (or the others), are the previous contents of the buffer considered undefined, or are they preserved?
What about the other buffers, like depth and stencil? Preserved or not?

From my first pass of reading, the contents of the previously attached image, if any, are preserved.
Wouldn’t make much sense otherwise.

Got about halfway through reading the spec. Good job, guys. I eagerly await an implementation. Korval might complain about ARB slowness, but I’m pretty glad you guys are as thorough as you are.

As for rendering to multiple cube-map faces, I believe this falls (in spirit) under issue 44, which resolves as “undefined behavior.” (This is my interpretation, so take it with a grain of salt.)

Issue 44 deals with potential read/write hazards, and rendering to multiple cube-map faces is kind of a write/write hazard. Of course, rendering to different faces of a cube map guarantees you are hazard-free, but the language about valid render textures is always about texture objects, not texture targets. This is the “concern” in the issue about binding one level of a mipmap as a texture and another as a target for custom mipmap generation.

My guess is that it is technically undefined, but probably safe (like custom mipmap generation). Certainly, it seems a lot less dodgy than rendering to a bound texture to avoid ping-pong rendering in GPGPU applications. AFAIK, the latter behavior has been unofficially “blessed” by ATI and NVIDIA.

But, why do you want to render to multiple cubemap faces at once, anyway? To do useful stuff you probably need multiple post-transform vertex streams!
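If it were allowed, attaching two faces would presumably look like ordinary MRT setup, since the cube-map face targets are consecutive enum values. A sketch of that idea, with a recording stub standing in for glFramebufferTexture2DEXT (the stub and names are mine):

```c
typedef unsigned int GLenum;
typedef unsigned int GLuint;
typedef int GLint;

/* Token values from glext.h, reproduced so the sketch is self-contained. */
#define GL_FRAMEBUFFER_EXT              0x8D40
#define GL_COLOR_ATTACHMENT0_EXT        0x8CE0
#define GL_TEXTURE_CUBE_MAP_POSITIVE_X  0x8515

/* Recording stub: remembers which face target went on which attachment. */
static GLenum face_on_attachment[2];

static void FramebufferTexture2D_stub(GLenum target, GLenum attachment,
                                      GLenum textarget, GLuint texture,
                                      GLint level)
{
    (void)target; (void)texture; (void)level;
    face_on_attachment[attachment - GL_COLOR_ATTACHMENT0_EXT] = textarget;
}

/* Attach faces 'first' and 'second' of one cube-map texture to color
 * attachments 0 and 1. Face targets are consecutive enums, so face i is
 * GL_TEXTURE_CUBE_MAP_POSITIVE_X + i. */
void attach_two_faces(GLuint cube_tex, int first, int second)
{
    FramebufferTexture2D_stub(GL_FRAMEBUFFER_EXT,
                              GL_COLOR_ATTACHMENT0_EXT + 0,
                              GL_TEXTURE_CUBE_MAP_POSITIVE_X + (GLenum)first,
                              cube_tex, 0);
    FramebufferTexture2D_stub(GL_FRAMEBUFFER_EXT,
                              GL_COLOR_ATTACHMENT0_EXT + 1,
                              GL_TEXTURE_CUBE_MAP_POSITIVE_X + (GLenum)second,
                              cube_tex, 0);
}
```

Whether the spec actually blesses two faces of the same texture object at once is exactly the issue-44 question above, so treat this as a thought experiment, not advice.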

-Won

Thought I’d bring up a number of typos in the “Issues” section. Don’t know if anybody on the ARB cares.

Issue 8: “of absense of” ==> “or absence of”

Issue 37: “combersome” ==> “cumbersome”, also glDrawElement is redundant with glDraw{Array|Element}

Issue 41: “realted” ==> “related”

Issue 41, B3: (capitalization) “STENCIl” ==> “STENCIL”

Issue 55: “ont” ==> “not”

And that’s as far as I got.

More errors:
Issue 37: glMultiDrawElements and glMultiDrawArrays are missing.

Issue 28: “This parameter could be have been called <…>”
=> “This parameter could have been called <…>”

Korval, from the “Multiple Render Targets” help page from the December 2004 DX SDK update:

“All surfaces of a multiple render target should have the same width and height.”

I can’t see how anything else would make sense.
From this MSDN page, you can clearly see that the size of the depth/stencil buffer is not bound to the size of the color buffer(s); it only needs to be at least as large as the color buffer(s). While, yes, the sizes of the color buffers must all be the same, they are not tied to the size of the depth buffer.

One last error from me:

Issue 62: “Contxext” ==> “Context”

As Korval points out, a lot of things in this extension are deferred to layers or to ARB/core promotion, but after going through all the issues I realize even the decision of which issues to defer (and how to defer them) is pretty involved.

Was there really such a thing as EXT_Compromise_Buffers?

-Won

Was there really such a thing as EXT_Compromise_Buffers?
My guess is that this was just a working name after they abandoned Superbuffers. No need to waste time debating an actual name when more substantive issues are on the table.

Oh, one question about ARB_FBO. It seems like one can only bind textures and so forth to application-created framebuffers (as opposed to the default one in the context). If so, why was this put into the spec? It seems rather limiting, though I can kinda see how interacting with the default framebuffer can be somewhat… difficult to specify.

Originally posted by Korval:
Oh, one question about ARB_FBO. It seems like one can only bind textures and so forth to application-created framebuffers (as opposed to the default one in the context). If so, why was this put into the spec? It seems rather limiting, though I can kinda see how interacting with the default framebuffer can be somewhat… difficult to specify.
Hi Korval,

Note this is EXT_FBO. It was developed by the ARB, but I’m glad we didn’t rush to put the ARB stamp on it without putting some miles on the odometer. OpenGL is better served by proving extensions before carving them in stone.

On your specific question, interaction with the window-system framebuffer was a hornet’s nest. It has issues like pixel ownership test, multisample, and other stuff that would only have slowed things down.

My philosophy on this was “the simpler, the sooner” and even trying to keep things simple, there were tons of issues to work out.

Thanks -
Cass

My guess is the problem is pixel ownership. What should happen if you bind, say, a color texture to the default framebuffer, leave the depth buffer alone, and you don’t own some pixels because of overlapping windows?

Just fail the ownership test where you don’t own the pixels in some buffers? That’s counterproductive: you want the whole image in the texture, not just a part of it. On the other hand, the ownership test can’t simply pass everywhere, because in the depth buffer some pixels just don’t exist…

EDIT: Too slow… :smiley:

Korval, my mistake. I thought you were talking about multiple render targets with different sized colour buffers. Dunno enough about depth buffers to discuss them.

Just a short question about color attachment points: why didn’t you stick with the AUX buffers instead of adding these new attachment points?

The rationale is discussed in one of the issues.

Thanks, I’ll take a look at that.

When a texture object is deleted while one or more of its images is
attached to one or more framebuffer object attachment points, the
texture images are first detached from all attachment points in all
framebuffer objects and then the texture is deleted.

If a texture object is deleted while its image is attached to one or
more attachment points in the currently bound framebuffer, then it
is as if FramebufferTexture{1D|2D|3D}EXT() had been called, with a
<texture> of 0, for each attachment point to which this image was
attached in the currently bound framebuffer. In other words, this
texture image is first detached from all attachment points in the
currently bound framebuffer. Note that the texture image is
specifically not detached from any non-bound framebuffers.
Detaching the texture image from any non-bound framebuffers is the
responsibility of the application.

this is a little bit confusing…
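If I’m reading it right, deletion auto-detaches the image only from the currently bound framebuffer, and any other FBOs keep a dangling attachment until the application detaches it. A tiny model of that reading (the names and the stand-in state are mine, not GL):

```c
typedef unsigned int GLuint;

/* Stand-in for framebuffer object state: one color attachment point each.
 * fbo_a is "currently bound", fbo_b is not. */
typedef struct { GLuint color0; } FBO;

static FBO fbo_a, fbo_b;
static FBO *bound_fbo = &fbo_a;

/* Models FramebufferTexture2DEXT on a given FBO's color attachment 0. */
static void attach_color0(FBO *fbo, GLuint tex) { fbo->color0 = tex; }

/* Models DeleteTextures per the quoted spec text: the image is detached
 * from the *currently bound* framebuffer only; non-bound framebuffers
 * are left alone. */
static void delete_texture(GLuint tex)
{
    if (bound_fbo->color0 == tex)
        bound_fbo->color0 = 0;
    /* Any other FBO still referencing tex now has a dangling attachment;
     * per the spec, cleaning that up is the application's job. */
}
```

So with fbo_a bound, deleting a texture that is also attached to fbo_b cleans up fbo_a only; you have to bind fbo_b and detach (attach texture 0) yourself.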

Is there any other way to get the data out of a renderbuffer (something like glGetRenderBufferData) than attaching the renderbuffer to a framebuffer object, binding the framebuffer, calling glReadBuffer and calling glReadPixels? If not, why?

Typos:
section 4.4.3
Doing so could lead to the creation of of a “feedback loop”…

Is there any other way to get the data out of a renderbuffer (something like glGetRenderBufferData) than attaching the renderbuffer to a framebuffer object, binding the framebuffer, calling glReadBuffer and calling glReadPixels? If not, why?
No, you can’t. It’s probably the same rationale as for not having a RenderbufferImage call. See issues 9 and 10.
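For anyone keeping score, the full round trip looks like this. It’s a sketch with stand-ins for the GL entry points so it runs outside a GL context; in real code these would be glBindFramebufferEXT, glReadBuffer and glReadPixels with their actual signatures (x, y, width, height, format, type, and so on):

```c
typedef unsigned int GLenum;
typedef unsigned int GLuint;

/* Token values from glext.h, reproduced so the sketch is self-contained. */
#define GL_FRAMEBUFFER_EXT        0x8D40
#define GL_COLOR_ATTACHMENT0_EXT  0x8CE0

/* Stand-ins for GL state: one RGBA pixel's worth of renderbuffer contents,
 * the bound framebuffer, and the selected read buffer. */
static unsigned char renderbuffer_pixels[4] = { 10, 20, 30, 255 };
static GLuint bound_fbo;
static GLenum read_buffer;

static void BindFramebuffer_stub(GLenum target, GLuint fbo)
{
    (void)target;
    bound_fbo = fbo;
}

static void ReadBuffer_stub(GLenum src) { read_buffer = src; }

static void ReadPixels_stub(unsigned char *dst)
{
    /* Copies out of the "renderbuffer" only if an FBO is bound and its
     * attachment is selected as the read buffer. */
    if (bound_fbo != 0 && read_buffer == GL_COLOR_ATTACHMENT0_EXT)
        for (int i = 0; i < 4; i++)
            dst[i] = renderbuffer_pixels[i];
}

/* The only readback path the spec gives you: attach the renderbuffer to
 * an FBO (assumed already done for FBO #1 here), bind the FBO, select
 * the attachment as the read buffer, then ReadPixels. */
void read_back_renderbuffer(unsigned char out[4])
{
    BindFramebuffer_stub(GL_FRAMEBUFFER_EXT, 1);
    ReadBuffer_stub(GL_COLOR_ATTACHMENT0_EXT);
    ReadPixels_stub(out);
}
```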

Corrail, it’s Issue 54.

  • EXT_framebuffer_object

Is this meant to replace pbuffers?
Or is there some key difference between FBOs and pbuffers I’m not seeing?

I haven’t read the whole spec yet, but from early presentations I assumed they should be more versatile (e.g. detaching/attaching depth buffers; am I correct?) and faster.
I hope ATI will implement them before the end of my end-of-term examinations :)

Originally posted by supagu:
[b]- EXT_framebuffer_object

Is this meant to replace pbuffers?
Or is there some key difference between FBOs and pbuffers I’m not seeing?[/b]
See issue 2:

RESOLUTION: This extension should fully replace the pbuffer API.

Jan.

Originally posted by Korval:
No, you can’t. It’s probably the same rationale as for not having a RenderbufferImage call. See issues 9 and 10.
I see why there’s no glRenderbufferImage, but getting the data out of a renderbuffer that way seems a little too complex to me. What about GPGPU applications that need the rendered data for further processing? I think a glGetRenderbufferData function would be handy.

Originally posted by ffish:
Corrail, it’s Issue 54.
Thanks, now it’s clear.