Official feedback on OpenGL 4.0 thread

Something like 3dsetup would indeed be a good short-term solution to provide people with modern OpenGL drivers. Too bad that Intel isn’t shipping any GL 3.x, let alone 4.x, drivers at this point :frowning:

Another thing which is VERY related to this: a lot of developers run into OpenGL driver bugs. Some implementations are buggier than others, which frustrates a lot of developers and is also one of the reasons some companies don’t use OpenGL at this point.

In order to improve OpenGL driver quality, I would urge developers who encounter problems to submit test cases to piglit, an OpenGL testing framework (Piglit - OpenGL driver testing framework). At the moment the open source X.org OpenGL drivers use it as a test bed, but nothing prevents OS X/Windows developers from using it as well.

Roderick

That’s too counter-productive, imho. Scenes have 5k+ objects visible, different textures each; fewer programs. It makes more sense to group by program, imho.
In case you meant to bind all N textures for the given mesh instance at once, I don’t think that’s viable either: those GLuint names are not optimally mappable during shader execution (they are not pointers, and should not be).

Progress!

What kind of GL app are you working on?

Yeah, it’s wishful thinking on my part. But hey, imagine an Intel spokesperson announcing that, “we are planning to ship OpenGL 3.3 support by the end of May!”

Of course, reality kicks in soon, when someone asks for the 100th time how to perform offscreen rendering (an FBO sketch follows the exchange below):

  • User trying to create an invisible window or other broken hacks: “Why do I keep getting garbage back?”
  • Linking to the OpenGL FAQ: “You are not passing the pixel ownership test. Use FBOs.”
  • “But FBOs don’t work on Intel.”
  • “Try pbuffers.”
  • “Nope, no pbuffers either.”
  • “How about a sacrifice to Kthulu?”
  • “Wha…?”
  • “Just kidding. Software rendering for you.”
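For reference, the FBO path the FAQ points to really is short; here is a minimal sketch (width, height, and pixels are placeholder names), assuming a GL 3.x context (or EXT_framebuffer_object with the EXT suffixes added):

GLuint fbo = 0, colorTex = 0;

// Color target: a plain RGBA8 texture sized to the offscreen area.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Attach it to a framebuffer object; rendering here never hits the pixel ownership test.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // fall back (pbuffers, software, Kthulu...)
}

glViewport(0, 0, width, height);
// ... draw the offscreen content ...
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window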

So much for “high performance graphics” on Intel…
Meanwhile I’ve been quite happily using SetRenderTarget with D3D9 on Intel chips going back to the 915 without a problem.

The annoying thing is that the hardware actually does support hardware accelerated offscreen rendering perfectly well.

In section 1.2.1 of the GLSL 3.3 spec (Summary of Changes from Version 1.50) it says
“Added Appendix A to describe include tree and path semantics/syntax for both the language and the API specifications.”
Neither this appendix nor any other information about it appears in the GLSL spec or the GL 3.3 spec. The related extension (ARB_shading_language_include) says:
“We decided not to put #include into OpenGL 3.3 / 4.0 yet”
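For completeness, where a driver does expose ARB_shading_language_include, usage looks roughly like this (the include name, source string, and fragShader id are illustrative):

// Register a named string; negative lengths mean null-terminated.
const GLchar *name = "/lighting.glsl";
const GLchar *body = "vec3 lambert(vec3 n, vec3 l) { return vec3(max(dot(n, l), 0.0)); }";
glNamedStringARB(GL_SHADER_INCLUDE_ARB, -1, name, -1, body);

// The shader source itself would contain:
//   #extension GL_ARB_shading_language_include : require
//   #include "/lighting.glsl"
const GLchar *searchPaths[] = { "/" };
glCompileShaderIncludeARB(fragShader, 1, searchPaths, NULL);   // compile with the given search paths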

In a word, “SWEET!” I love the new direction the ARB has taken with OpenGL! Keep it coming.

BTW, as for setting up GL 4.0: I haven’t read the spec, but I am assuming it’s no different to set up than GL 3.2, and that usage is similar to GL 3.2?

Thanks
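For what it’s worth, setup should indeed look the same as for 3.2: the usual ARB_create_context dance, just with 4.0 requested. A minimal WGL sketch, assuming wglCreateContextAttribsARB has already been loaded via a dummy context (hdc is a placeholder):

// Attributes per WGL_ARB_create_context / WGL_ARB_create_context_profile.
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0                                   // terminator
};
HGLRC ctx = wglCreateContextAttribsARB(hdc, 0, attribs);   // fails if the driver can't do 4.0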

Great step forward! I just hope drivers will implement all these features reliably. A spec conformance test suite (à la the Acid tests for browsers) would be extremely useful for this.

I know at least 6 developers at my company who want the ability to separate shader objects and to have a binary shader format. Maybe the shader subroutines will help, depending on their performance.

DSA would be a nice-to-have, but not imperative, since we wrapped all the object-binding logic in classes.

Command lists, as BarnacleJunior suggested, would also be very useful. They would allow maximum efficiency in the OpenGL draw thread, since it would only execute a compiled list of OpenGL commands; kind of like a display list for each frame or each part of a frame.
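To make the command-list idea concrete, here is a purely illustrative C++ sketch (nothing like this exists in GL today): calls are recorded once into a list and only replayed on the draw thread.

#include <functional>
#include <vector>

// Hypothetical recorded command list; the "compile" step is just capturing the calls.
struct CommandList {
    std::vector<std::function<void()>> cmds;
    void record(std::function<void()> c) { cmds.push_back(std::move(c)); }
    void execute() const { for (const auto &c : cmds) c(); }   // run on the GL thread only
};

// Usage (frame, prog, vao, indexCount are placeholders):
//   frame.record([=]{ glUseProgram(prog); });
//   frame.record([=]{ glBindVertexArray(vao); });
//   frame.record([=]{ glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0); });
//   frame.execute();   // once per frame, on the draw thread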

I agree we should look to the future now. OpenGL vs. DX9 on DX9-class hardware (most Intel integrated stuff) was clearly lost: FBO came too late, and GLSL was also a bit dodgy compared to DX9 SM3 (and even compared to the ARB program extensions).
So one should not try to fix up the past; that’s just too much legacy, and not worth the effort. But for SM4+ hardware things look different now, with both APIs very close feature-wise, and one of them able to expose that functionality on all platforms, including Win XP.

I am not sure what the mobile guys are working on, but given the lean nature of the “core” profiles, I would think that GL ES might not be needed anymore for the next-gen mobile stuff.

Out of curiosity, is there a clear benefit for the IHVs in the “link” mechanism GLSL has (vs. the DX-like individual shaders)? In theory additional optimization could be done, but is this really being made use of?

To all those requesting DSA: write wrapper classes for OpenGL resources and you have DSA. Works great when done well.

To all those requesting anything from OpenGL: write your own software renderer and you have it. Works great when done well.

The binding/state system has no benefits. It is a minor problem for IHVs and a major problem for game programmers.
It was especially awful in fixed-function days, when every routine had to look like:

glBindALotOfThings();
glSetALotOfStates();
glDoSomethingUseful();
glSetEverythingBack();

to ensure that 2 different modules wouldn’t overwrite each other’s settings. Now, with shaders (no state machine there!), VAOs, and other fancy stuff, there aren’t as many “binding places”, but binding textures (now with additional sampler objects) and UBOs, for example, is still cumbersome.

Fewer API calls == better performance == profit.
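Concretely, editing a single texture parameter today means something like the following (tex is a placeholder); a DSA entry point such as glTextureParameteriEXT from EXT_direct_state_access would make both the bind and the restore unnecessary:

GLint prevTex = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &prevTex);    // save whatever the caller had bound
glBindTexture(GL_TEXTURE_2D, tex);                 // bind only in order to edit
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glBindTexture(GL_TEXTURE_2D, prevTex);             // put it back so other modules keep working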

Randall, do you know what DSA means? The point is that…

The intent of this extension is to make it more efficient for libraries to avoid disturbing selector and latched state.
… and you suggest making a wrapper…

Glad to hear GL spec 4.0 is out.

“Functional” Drivers?

Intel graphics?

Are extensions like:

http://www.opengl.org/registry/specs/ARB/draw_buffers_blend.txt
http://www.opengl.org/registry/specs/ARB/sample_shading.txt

going to be modified to remove the ARB suffix from tokens and entry points? (Otherwise the headers will need to include all of these again, without the ARB suffix.)

Can you elaborate on “dropped”?

DX11 cards still need to be able to run DX9/DX10 software, so I don’t see how this feature could be cut from silicon unless it has simply become another programmable behavior masquerading as fixed-function behavior… or do you mean that it’s just not in the DX11 API any more?

I blame my bad memory for making me think that some additional restrictions introduced in DX10.1 meant that dual source blending was getting the shaft. :o

Well said, Aleksandar. DSA is about making an API more streamlined and EFFICIENT. Sure, if you use glGet* and push/pop before EVERY state change, you can make it work the same way, even today.

But then don’t complain about slow rendering. Multi-threading is then completely impossible for the driver to accomplish.

Jan.

The binding/state system has no benefits.

That’s not why it’s still around.

Fewer API calls == better performance == profit.

That’s not necessarily true. It can be true, but it certainly doesn’t have to be.

going to be modified to remove the ARB suffix from tokens and entry points?

They didn’t do it when ARB_geometry_shader4 was promoted to core, so I doubt they’ll start now.

Core extensions (ARB extensions without the suffix) are something of a nicety. They aren’t 100% necessary, but they’re nice to have when possible. It certainly isn’t worth rewriting an extension specification just to have them, though.

Multi-threading is then completely impossible for the driver to accomplish.

This is probably the best argument for DSA. You can’t have multithreaded rendering without it.

However, the problem is that, even if you use DSA, backwards compatibility means that you don’t have to. What then happens to multithreaded rendering in that case? Does the spec just say, “attempting to call functions X, Y, Z will cause undefined behavior when threading?”

Yes, I know what DSA means. I was not talking about using glGet* and push/pop before EVERY state change. I know that this would kill performance. I was talking about caching the most important state in the app on the CPU side (tracking binding points, etc.).

I agree that DSA would be nice and more efficient. But the reality is that we don’t have it in OpenGL 4.0.

I suggested creating a thin layer (a wrapper for OpenGL resources) which would “emulate DSA” on non-NVIDIA hardware and use the fast path (DSA) on NV hardware. I have written such an abstraction and it works well. So don’t complain. Be happy with OpenGL 4.0. It’s getting better and better.
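The shape of such a layer could be roughly the following (all names are illustrative, and GLEW is assumed only for the extension check): DSA via EXT_direct_state_access where it is exposed, app-side binding tracking otherwise.

class Texture2D {
    GLuint id;
    static GLuint s_bound2D;                       // CPU-side cache instead of glGet*
public:
    void setWrapS(GLint mode) {
        if (GLEW_EXT_direct_state_access) {        // fast path (NV and anyone else exposing it)
            glTextureParameteriEXT(id, GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, mode);
        } else {                                   // emulation path: bind only when necessary
            if (s_bound2D != id) { glBindTexture(GL_TEXTURE_2D, id); s_bound2D = id; }
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, mode);
        }
    }
};
GLuint Texture2D::s_bound2D = 0;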

Very pleasantly surprised by this OpenGL release.
Love the new stuff.

The drawing without cpu intervention is fantastic!
This saves a lot of valuable cpu-cycles.
Makes OpenGL very efficient :slight_smile:
Good to see instancing going further.
The timer query stuff is going to be really handy.
This makes it possible for programs to do a mini benchmark.
Add to this the new shader subroutine flexibility.
It’s going to be possible to write programs that might optimize themselves dynamically at runtime. :smiley: :slight_smile:
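As a sketch of that self-tuning idea (all names hypothetical; assumes prog has exactly one fragment-stage subroutine uniform): time two subroutine variants with GL_TIME_ELAPSED queries and keep whichever finishes faster.

GLuint query;
glGenQueries(1, &query);

GLuint candidates[2] = {
    glGetSubroutineIndex(prog, GL_FRAGMENT_SHADER, "shadeFast"),
    glGetSubroutineIndex(prog, GL_FRAGMENT_SHADER, "shadeFancy")
};
GLuint64 elapsedNs[2];

glUseProgram(prog);
for (int i = 0; i < 2; ++i) {
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &candidates[i]);  // select the variant
    glBeginQuery(GL_TIME_ELAPSED, query);
    drawScene();                                                     // hypothetical helper
    glEndQuery(GL_TIME_ELAPSED);
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, &elapsedNs[i]);    // blocks until the result is ready
}
// keep candidates[elapsedNs[1] < elapsedNs[0]] for subsequent frames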

The only thing that’s missing is DSA.
+1 for the DSA.

This is a very good release with a lot of nice goodies.
Khronos is really improving OpenGL very well. Kudos for that.

I’m sorry, randall, for the misunderstanding.
And, of course, I’m happy with both OpenGL and NV. :wink:

The drawing without cpu intervention is fantastic!
This saves a lot of valuable cpu-cycles.

It’s not there to save performance. What it does do is allow a shader that does transform feedback to decide how to do the rendering with the feedback data by itself.
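In other words, this is presumably the ARB_transform_feedback2 path: the GPU remembers how many vertices were captured and replays exactly that many, with no query or readback on the CPU. A rough sketch (tfo, captureProg, drawProg, and seedCount are placeholders; object and buffer setup elided):

glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, tfo);
glUseProgram(captureProg);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, seedCount);        // the GPU records how much was written
glEndTransformFeedback();

glUseProgram(drawProg);
glDrawTransformFeedback(GL_POINTS, tfo);      // draws the captured amount; no CPU round trip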

Good to see instancing going further.

I was actually rather surprised to see them put that form of instancing back in the rendering pipeline. Especially since D3D took it out in version 10 (as I understand it).

The only thing that’s missing is DSA.

*cough* shader separation *cough*.

I’m not using tessellation until I can mix and match shaders at runtime as I see fit, without having to re-link and everything.