I was using FBOs with packed Depth/Stencil and was not getting any results.
I checked the specs and didn’t see anything obvious; the code looked fine.
After a long time I figured out that the stencil write mask was set to 0, even though the spec clearly states “Initially, all bits are enabled for writing”.
So I thought this was a driver bug, but it seems that if your main framebuffer does not have a stencil buffer, the stencil write mask is set to zero (which does seem kinda reasonable, but is still a bit of a gotcha).
This is on Nvidia 162.18 drivers.
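For anyone debugging the same symptom, the effective write mask can be read back directly (a minimal sketch; assumes a current GL context and the usual GL headers, and can’t run standalone):

```c
/* Per the spec, GL_STENCIL_WRITEMASK should start with all bits enabled.
 * On the affected drivers it reads back as 0 when the window framebuffer
 * has no stencil planes, which silently disables all stencil writes. */
GLint writemask = 0;
glGetIntegerv(GL_STENCIL_WRITEMASK, &writemask);
printf("stencil writemask = 0x%X\n", writemask);
```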
Thanks for sharing that information with us. It may save some trouble.
That sounds like a bug.
The stencil ref, mask, backref, and backmask should be kept as 32-bit values when specified, and re-clamped to the available number of stencil bits on each framebuffer bind.
So you should be able to create a window context with no stencil buffer and set the ref/mask to 255; querying the ref/mask now will return 0. But then bind to an FBO with a stencil attachment, and the ref/mask (and query) should be masked to the number of bits in the attachment, without you ever re-specifying the values.
THANKYOU THANKYOU THANKYOU!!!
I spent FOUR HOURS trying to figure out why my stencil buffer wasn’t working before I came across your post.
NVidia, I really hate you right now. Please fix this stupid bug. My main window doesn’t have a depth buffer and FBO depth buffering worked fine; why should stencil buffering be any different?
Just to be clear, you can easily work around this bug by doing a:

glStencilMask(~0);

on startup once you have created your context.
Not quite: creating the context and immediately calling glStencilMask won’t work. However, calling glStencilMask after creating an FBO with a stencil buffer does work.
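In call order, the working sequence looks roughly like this (a sketch using the EXT_framebuffer_object / EXT_packed_depth_stencil entry points; the names and 512x512 size are illustrative, error checking omitted, and it needs a live GL context to run):

```c
/* 1. Create the window context as usual (no stencil planes needed there). */

/* 2. Create an FBO with a packed depth/stencil renderbuffer attached to
 *    both the depth and stencil attachment points. */
GLuint fbo, depth_stencil;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

glGenRenderbuffersEXT(1, &depth_stencil);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depth_stencil);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH24_STENCIL8_EXT, 512, 512);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_stencil);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depth_stencil);

/* 3. Only now, once an FBO with a stencil attachment exists, does resetting
 *    the write mask stick on the affected drivers. */
glStencilMask(~0u);
```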