Need some architecture / pattern advice from OpenGL gurus…
Up to now, I’ve simply ignored the fact that Intel’s OpenGL drivers are dire, so the software I write fails for people with Intel chips. Well, they made a poor choice in GPU; their problem…
But lately I can’t ignore the fact that there is a huge number of Intel GPUs out there. They may be crap, and it’s not my responsibility, but ultimately I’m turning my back on a potentially large number of users. Telling them to nag Intel is pointless; they’ve got better things to do.
So I spent four hours yesterday trying to find the “golden code path” through Intel’s FBO support (the classes I have work flawlessly on NVIDIA and ATI). I still haven’t found the magic combination of state settings, but when I do, how do I fold a bunch of “jumping through hoops” to keep the Intel driver happy into the code base?
I suspect that a stable NVIDIA/ATI driver will also be happy with whatever state it turns out I need to set to avoid the Intel bugs, but it pollutes all the code.
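One way to keep the hoop-jumping out of the main code path is to detect the driver once at startup (via `glGetString(GL_VENDOR)`) and hide every workaround behind a quirks struct, so the NVIDIA/ATI paths never see an `if (isIntel)`. A minimal sketch; the quirk names and the idea of keying off the vendor substring are my assumptions, not anything Intel documents:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Flags for driver-specific workarounds; set once at startup,
// queried wherever the code would otherwise branch per vendor.
// Both flags are hypothetical examples of Intel quirks.
struct GLQuirks {
    bool rebindFboBeforeClear = false;    // re-assert FBO + viewport before glClear
    bool avoidPackedDepthStencil = false; // extension advertised but broken
};

// Derive the quirk set from the GL_VENDOR string. Taking a plain
// std::string (the result of glGetString) keeps this testable.
GLQuirks detectQuirks(std::string vendor) {
    std::transform(vendor.begin(), vendor.end(), vendor.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    GLQuirks q;
    if (vendor.find("intel") != std::string::npos) {
        q.rebindFboBeforeClear    = true;
        q.avoidPackedDepthStencil = true;
    }
    return q;
}
```

The FBO class then consults `quirks.rebindFboBeforeClear` at the relevant spots, so the pollution is confined to one struct instead of being smeared across every render path.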
And if anyone has any top tips for using FBOs on Intel, please share!
The last major OpenGL program I wrote worked 100% fine on NVIDIA and ATI cards. On older ATI cards it actually hit one known driver bug, but they refuse to fix bugs in older drivers :s I used gDEBugger to check for errors and other API violations, and everything came back fine.
I had the same problems using FBOs on Intel cards. glClear would clear the wrong size viewport and thus produce total garbage on the screen. I had to code some pretty obscene Intel-only workarounds, but I still get support requests about problems on Intel cards.
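For what it’s worth, the usual shape of a workaround for this class of bug is to re-assert the FBO binding, viewport, and scissor immediately before every clear, rather than trusting the driver’s cached state. Below, the GL entry points are replaced with stubs that log their arguments, so the call sequence is visible (and checkable) without a live context; a real version would call `glBindFramebufferEXT`, `glViewport`, `glDisable`, and `glClear` directly. This is a guess at what keeps the driver happy, not a documented fix:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Call log standing in for a live GL context, so the workaround's
// ordering can be shown and tested without a GPU.
static std::vector<std::string> glLog;

// Stubs shaped like the real entry points, recording each call.
static void glBindFramebufferEXT(int /*target*/, unsigned fbo)
    { glLog.push_back("bind " + std::to_string(fbo)); }
static void glViewport(int, int, int w, int h)
    { glLog.push_back("viewport " + std::to_string(w) + "x" + std::to_string(h)); }
static void glDisable(const char* cap)
    { glLog.push_back(std::string("disable ") + cap); }
static void glClear(const char* mask)
    { glLog.push_back(std::string("clear ") + mask); }

// Re-assert everything that could influence the clear, every time,
// instead of assuming the state set last frame was honoured.
void clearFboPedantically(unsigned fbo, int w, int h) {
    glBindFramebufferEXT(0 /* GL_FRAMEBUFFER_EXT */, fbo);
    glViewport(0, 0, w, h);          // belt...
    glDisable("GL_SCISSOR_TEST");    // ...and braces: nothing clips the clear
    glClear("GL_COLOR_BUFFER_BIT");
}
```

It is redundant state-setting on a conforming driver, which is exactly why it belongs behind a quirk flag rather than in the common path.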
Is there some official way to complain or file driver bug reports with Intel? Because it’s a real headache. Intel represents something like 50% of the graphics market share, and we have a LOT of customers with Intel machines.
" glClear would clear the wrong size viewport and thus produced total garbage on the screen."
Having the same problem! I simply have 512x512, 256x256, and 128x128 FBOs that I use to downsample images, and after one iteration, setting the 512x512 FBO and viewport again seems to be totally ignored.
All the state-diffing debuggers show no difference between a snapshot of state at the start of the first iteration and the next, yet glClear clears a 32x32 square in the corner of the 512x512 FBO!
Please tell me you have a solution.
Apart from scissor and viewport, am I correct in saying there is nothing else that is meant to affect pixel ownership? I.e. if I bind a framebuffer, set the viewport, and disable scissor, it should clear the whole FBO. Right?
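For the record, per the GL spec glClear is clipped by the scissor test (and masked by the write masks) but is not affected by the viewport at all; the viewport only maps NDC for rasterized primitives. So on a conforming driver, a clear with scissor disabled touches the whole FBO regardless of the current viewport. The rule is small enough to write down; this sketches the spec’d behaviour, not any particular driver’s:

```cpp
#include <cassert>

struct Rect { int x, y, w, h; };

// Region a conforming glClear writes to: the whole framebuffer,
// clipped by the scissor box when GL_SCISSOR_TEST is enabled.
// Note the viewport never appears here -- it does not affect clears.
Rect clearRegion(int fbW, int fbH, bool scissorEnabled, Rect scissor) {
    if (!scissorEnabled)
        return Rect{0, 0, fbW, fbH};
    // Intersect the scissor box with the framebuffer bounds.
    int x0 = scissor.x < 0 ? 0 : scissor.x;
    int y0 = scissor.y < 0 ? 0 : scissor.y;
    int x1 = scissor.x + scissor.w; if (x1 > fbW) x1 = fbW;
    int y1 = scissor.y + scissor.h; if (y1 > fbH) y1 = fbH;
    return Rect{x0, y0, x1 > x0 ? x1 - x0 : 0, y1 > y0 ? y1 - y0 : 0};
}
```

By that rule, the 32x32 square in the corner of a 512x512 FBO with scissor disabled is a straight driver bug, not a state you forgot.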
Complaining to Intel makes no difference; they’re too big to fail, and they know it. Stuff like advertising EXT_packed_depth_stencil when it clearly is not supported does irritate me, though.
I’ve reported driver bugs to Intel before and have actually had follow-up from them (though I’m not sure a fix was ever implemented). I did it on their technical forums, and an engineer followed up within two weeks or so.
Be prepared to have code for a minimal repro case ready, of course.