Rage - what a mess

I guess that about sums up the nature of our debate. I freely admit to being a wishful thinker in this regard.

And aaah yes, the epic Barthold Lichtenbelt post! I do remember reading that when GL 3.0 came out, and much of the anger and frustration that followed. The thread is a (long) letter of frustration written by users not unlike me, who worried that the API’s leadership was unable to get things done and that the API would suffer as a result.

Your water treading analogy is apt. About the only thing that’s changed since 2008 is that we do now see more frequent additions to the core spec. This is progress, of a sort.

PS - OpenCL comes up briefly in that thread as well. This is one area where I think the ARB made an excellent decision, separating that API from OpenGL while allowing the two to intercommunicate. Microsoft did the opposite with their compute shaders, and I think they may regret that decision down the line. But of course, they actually make changes to their API from time to time, so they will have the option of correcting it later if necessary :stuck_out_tongue:

Alfonse really hit the nail squarely on the head when he noted that GL implementations are buggy at times because there is not a lot of code out there exercising the newer features of GL4, and at times GL3 too. I remember a time when gl_ClipDistance did not work on some hardware/driver combinations; firing off an e-mail with example code got it fixed by the next release.

The “cleanup” of GL toward a more object-based model is, at least in my opinion, already slowly being folded into the GL spec. The biggest piece will likely come when all the non-fixed-function-pipeline parts of EXT_direct_state_access get absorbed, in some form, into the GL specification. We can already set the values of uniforms of GLSL programs without binding the programs (though ActiveShaderProgram for program pipelines looks like a touch of a hack for a spec), and direct editing of sampler objects is in the spec too.
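Something like this already works today (a sketch; the program, uniform location, and sampler are assumed to have been created the usual way):

```c
/* A sketch of the bind-free editing already in core GL; 'prog', 'loc',
 * and 'sampler' are assumed to have been created beforehand. */
static void set_state_without_binding(GLuint prog, GLint loc, GLuint sampler)
{
    /* GL 4.1 / ARB_separate_shader_objects: set a uniform by program
     * name, no glUseProgram needed. */
    glProgramUniform1f(prog, loc, 0.5f);

    /* Sampler objects (GL 3.3) are edited directly by name as well. */
    glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
}
```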

The main stink for a lot of folks with the GL API is the bind-to-edit model for textures, framebuffers, buffer objects, and a few other object types. I bet that is going to be sorted out in the specification at the next release, or at the very least a lot of it will be.
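To make the complaint concrete, here is the difference (illustrative fragments; ‘tex’ assumed already created, and glTextureParameteriEXT is the EXT_direct_state_access entry point):

```c
/* Bind-to-edit: touching a texture's state means disturbing the binding
 * point (and restoring whatever was bound, if the caller cares). */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* EXT_direct_state_access: edit the object by name, no bind involved. */
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
```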

I freely admit, though, that I feel like GL is now just trying to keep up with D3D… we got RWTextures in 4.2 (see the sketch after the list below), but the 4.2 spec still lacks read-write access to buffer object data (there is an NVIDIA extension, NV_shader_buffer_load/store, that does this and more). I would like to see in GL4.3:

(shameless copy-paste from http://msdn.microsoft.com/en-us/library/windows/desktop/ff476342(v=vs.85).aspx)

  • Coverage as PS Input
  • Programmable Interpolation of Inputs - The pixel shader can evaluate attributes within the pixel, anywhere on the multisample grid
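For reference on the 4.2 RWTexture point above, this is roughly what GL’s image load/store looks like (a sketch, shader given as a C string; the binding point and rgba8 format are just illustrative):

```c
/* GLSL 4.20 image load/store (ARB_shader_image_load_store), GL's answer
 * to D3D11's RWTexture: read-modify-write a texture from a shader. */
static const char *fs_rw_image =
    "#version 420 core\n"
    "layout(binding = 0, rgba8) uniform image2D img;\n"
    "void main() {\n"
    "    ivec2 p = ivec2(gl_FragCoord.xy);\n"
    "    vec4 v = imageLoad(img, p);  // read\n"
    "    imageStore(img, p, v.bgra);  // write\n"
    "}\n";
```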

Other things, like reading back stencil buffer values, are still not in the spec, nor even an extension… which is odd considering that D3D10, I think, had that. Maybe the hassle is how to fit the idea of a depth-stencil texture to a sample: the current bind-to-use model already has so many enums that adding a whole family of them just to read one component seems kind of ugly. Maybe add that magic to sampler objects… but that is not nice either, since it would then act like swizzle, which is part of the texture object :stuck_out_tongue: The other bits, making command-object thingies and setting data from another thread, would be really nice too. We can make multiple GL contexts to do the sharing, but really we want a special context type that cannot render, only set “object” state. And we do now have pretty good debugging in GL with GL_ARB_debug_output…
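Hooking up GL_ARB_debug_output is about this much code (a sketch, assuming the extension’s entry points have been loaded):

```c
#include <stdio.h>

/* ARB_debug_output: the driver calls this for errors and warnings.
 * With synchronous output enabled, it fires on the offending GL call,
 * which makes breakpointing easy. */
static void APIENTRY on_gl_debug(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar *message,
                                 const GLvoid *userParam)
{
    (void)source; (void)type; (void)id;
    (void)severity; (void)length; (void)userParam;
    fprintf(stderr, "GL debug: %s\n", message);
}

static void enable_gl_debugging(void)
{
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
    glDebugMessageCallbackARB(on_gl_debug, NULL);
}
```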

Oh well. Things are getting better, methinks. It might be that they are really getting better in service of the goal of having GL do well in embedded, as that is where the money is for GL now… Windows Phone is not exactly a popular platform, really.

But just so you all know: as a general rule of thumb, GLES2 implementations are far, far buggier than GL desktop implementations. Moreover, the desktop GL API (be it core or compatibility profile) is far more pleasant to use and deal with than GLES2.

Really? Then perhaps a fresh new API on the desktop is not a good idea.

Besides, we don’t need to worry about a new API since there are no plans for it.

GLES2 is not fresh at all… it is essentially OpenGL 2 (with all the crufty bind-to-edit stuff) intersected with the core profile of OpenGL 3, minus a lot of stuff… i.e. using GLES2 is like using the core profile restricted to OpenGL 2 functionality, minus a lot of stuff. The stuff not present in GLES2 includes: hardware clipping, almost all read-back support (be it images or buffers), mapping of buffer objects, and much more limited usage modes for buffer objects. Lastly, GLES2’s texture image specification API is brain-dead. A fair amount of what one takes for granted on the desktop is either gone or only available as an extension. Take a gander at the GLES registry (http://www.khronos.org/registry/gles/) to see the list of extensions for GLES2 (and GLES1), i.e. stuff that likely should have been in the spec.
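To see what I mean about the texture API, compare (illustrative fragments; w, h, and pixels assumed):

```c
/* GLES2: 'internalformat' must be one of five unsized enums and must
 * equal 'format'; the driver picks the real storage precision for you. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Desktop GL: request exactly the storage you mean via a sized
 * internal format such as GL_RGBA8. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```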

Coverage as PS Input

We already have that. Technically, we have had it for a while, but the GLSL spec was broken: the GL 4.0 specification described the behavior of gl_SampleMaskIn, but the GLSL specification never mentioned it until GLSL 4.2.
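For the record, using it looks like this (a sketch, shader given as a C string; the divide-by-8 assumes 8x MSAA and is purely for visualization):

```c
/* gl_SampleMaskIn[] carries the coverage mask of the fragment being
 * shaded; bitCount() of it gives the number of covered samples. */
static const char *fs_coverage =
    "#version 420 core\n"
    "out vec4 color;\n"
    "void main() {\n"
    "    int covered = bitCount(gl_SampleMaskIn[0]);\n"
    "    color = vec4(vec3(float(covered) / 8.0), 1.0);\n"
    "}\n";
```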

It seems fresh in the sense that the drivers are much simpler; if vendors still produce very buggy drivers on that platform, then there is no hope for a fresh API on the desktop.

Isn’t it better that all those extensions are not in GL ES? The drivers would be simpler.

I’d say the ES driver issues probably have more to do with the sheer proliferation of different hardware architectures than anything to do with the API’s complexity. Even if you just look at ES 2.0, there is a lot of different hardware out there. Even though most iOS and Android devices are powered by PowerVR GPUs, they don’t use the same chips. They have different SOCs, so Apple and the various Android device makers have to write different drivers. The entire driver won’t be rewritten of course, but some of the higher-level components will.

And that doesn’t count non-PowerVR architectures, like Tegra and so forth. At least on the PC, you’re really only dealing with two drivers: ATI and NVIDIA (assuming you don’t care about Intel).

Also, while it is easy to just tell someone “update your drivers” on the PC, that’s generally impossible on mobile devices. They aren’t updated as regularly as PCs, so if a device/driver has a bug, it will likely keep that bug for a good long time. Maybe forever; mobiles are more like laptops than PCs.

What’s silly is this: OpenGL ES 2.0 already has a conformance test. Yet there are still plenty of driver bugs.

I’d say the ES driver issues probably have more to do with the sheer proliferation of different hardware architectures than anything to do with the API’s complexity. Even if you just look at ES 2.0, there is a lot of different hardware out there. Even though most iOS and Android devices are powered by PowerVR GPUs, they don’t use the same chips. They have different SOCs, so Apple and the various Android device makers have to write different drivers. The entire driver won’t be rewritten of course, but some of the higher-level components will.

I cannot say what Apple does, but all the other SoC makers I’ve dealt with that use someone else’s IP for the GPU more or less take the drivers directly from the GPU designers. Some will add functionality to the drivers, but the usual case is that very, very little is added, if anything. Typically, what an SoC maker needs to do is implement EGL (or something like EGL), and to that end most of the GPU makers provide entry points so one can do what is needed. The list of GPUs in the Android world is much bigger than SGX: both Qualcomm and Broadcom have their own GPU offerings… and naturally there are more: ARM’s Mali, Vivante, and MORE!
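For context, “implement EGL” means providing the window-system glue that applications drive like this (a minimal sketch with no error checking; real code must check every return value):

```c
#include <EGL/egl.h>

/* Minimal EGL bring-up for a GLES2 context; the native window handle
 * comes from the platform's windowing system. */
static EGLContext make_es2_context(EGLNativeWindowType win, EGLSurface *out_surf)
{
    static const EGLint cfg_attrs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_NONE
    };
    static const EGLint ctx_attrs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    EGLConfig cfg;
    EGLint n;
    eglChooseConfig(dpy, cfg_attrs, &cfg, 1, &n);

    *out_surf = eglCreateWindowSurface(dpy, cfg, win, NULL);
    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attrs);
    eglMakeCurrent(dpy, *out_surf, *out_surf, ctx);
    return ctx;
}
```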

The GLES2 driver breakages I have seen in released hardware:

  • buggy, unreliable GLSL compilers
  • buggy FBO behavior (see the sketch below)
  • lots of bugs in support for floating-point textures, though half-float is more likely to work
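Given the FBO item above, the only real defense is to check completeness at runtime and keep a fallback path (a GLES2 sketch):

```c
/* GLES2: after attaching images, ask whether the combination actually
 * works; even correct drivers may return GL_FRAMEBUFFER_UNSUPPORTED,
 * so a fallback path is mandatory. */
static int fbo_usable(GLuint fbo)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return status == GL_FRAMEBUFFER_COMPLETE;
}
```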

I shudder to list what I have encountered in alpha and beta hardware/driver combos. I have not yet dealt with the Freescale i.MX 6 series, but the 5 series and before were nightmarishly bad.