Something like 3dsetup would indeed be a good short-term solution to provide people with modern OpenGL drivers. Too bad that Intel isn’t shipping any GL 3.x, let alone 4.x, drivers at this point.
Another thing which is VERY related to this: a lot of developers encounter OpenGL driver bugs. Some implementations are buggier than others, which frustrates a lot of developers and is one of the reasons some companies don’t use OpenGL at this point.
In order to improve OpenGL driver quality, I would urge developers who encounter problems to submit test cases to ‘piglit’, an OpenGL testing framework hosted at http://people.freedesktop.org/~nh/piglit/. So far mainly the open-source X.org OpenGL drivers use it as a test bed, but nothing prevents OS X/Windows developers from using it as well.
That’s too counter-productive, imho. Scenes have 5k+ objects visible, each with different textures, but far fewer programs. It makes more sense to group by program.
In case you meant binding all N textures for the given mesh instance at once, I don’t think that’s viable either: those GLuint names are not optimally mappable during shader execution (they are not pointers, and should not be).
In section 1.2.1 of the GLSL 3.3 spec (Summary of Changes from Version 1.50) it says
“Added Appendix A to describe include tree and path semantics/syntax for both the language and the API specifications.”
Neither this appendix nor any other information about #include appears in the GLSL spec or the GL 3.3 spec. The related extension (ARB_shading_language_include) says
“We decided not to put #include into OpenGL 3.3 / 4.0 yet”
Great step forward! I just hope drivers will implement all these features reliably. A spec conformity test suite (à la ACID tests for browsers) would be extremely useful for this.
I know at least 6 developers at my company who want the ability to separate shader objects and to have a binary shader format. Maybe shader subroutines will help, depending on their performance.
DSA would be nice to have, but not imperative, since we wrapped all the object-binding logic in classes.
Command lists, as BarnacleJunior suggested, would also be very useful. They would allow maximum efficiency in the OpenGL draw thread, since it would only execute a compiled list of OpenGL commands; kind of like a display list for each frame or each part of a frame.
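The idea can be sketched like this. Everything here is hypothetical (GL has no such API): the `CommandList` class simply records closures once on the render thread so the draw thread can replay them without any per-frame decision logic.

```cpp
#include <functional>
#include <vector>

// Hypothetical command list: record cheap closures once, replay them
// every frame. In a real renderer each closure would wrap actual GL
// entry points (glUseProgram, glBindVertexArray, glDrawElements, ...).
class CommandList {
public:
    void record(std::function<void()> cmd) {
        commands_.push_back(std::move(cmd));
    }
    // Called by the OpenGL draw thread: no branching, just replay.
    void execute() const {
        for (const auto& c : commands_) c();
    }
private:
    std::vector<std::function<void()>> commands_;
};
```

The draw thread then only walks a flat list, which is roughly what a per-frame display list would give us.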
I agree we should look to the future now. OpenGL vs. DX9 on DX9-class hardware (most Intel integrated stuff) was clearly lost: FBO came too late, and GLSL was also a bit dodgy compared to DX9 SM3 (and even compared to the ARB program extensions).
So one should not try to patch up the past; that’s just too much legacy, and not worth the effort. But for SM4+ hardware things look different now, with both APIs very close feature-wise, and one of them able to expose that functionality on all platforms, including Windows XP.
I am not sure what the mobile guys are working on, but given the lean nature of the “core” profiles, I would think that GL ES might not be needed anymore for the next-gen mobile stuff.
Out of curiosity, is there a clear benefit for the IHVs in the “link” mechanism GLSL has (vs. the DX-style individual shaders)? In theory additional optimization could be done, but is this really being made use of?
to ensure that 2 different modules won’t overwrite each other’s settings. Now with shaders (no state machine there!) and VAOs and other fancy stuff there aren’t as many “binding places,” but binding textures (now with additional sampler objects) and UBOs, for example, is still cumbersome.
DX11 cards still need to be able to run DX9/DX10 software, so I don’t see how this feature could be cut from silicon unless it has simply become another programmable behavior masquerading as FF behavior… or do you mean that it’s just not in the DX11 API any more?
I blame my bad memory for making me think that some additional restrictions introduced in DX10.1 meant that dual-source blending was getting the shaft. :o
That’s not necessarily true. It can be true, but it certainly doesn’t have to be.
Is the extension spec going to be modified to remove the ARB suffix from tokens and entry points?
They didn’t do it when ARB_geometry_shader4 was promoted to core, so I doubt they’ll start now.
Core extensions (ARB extensions without the suffix) are something of a nicety. They aren’t 100% necessary, but they’re nice to have when possible. It certainly isn’t worth rewriting an extension specification just to have them, though.
Multi-threading is then completely impossible for the driver to accomplish.
This is probably the best argument for DSA. You can’t have multithreaded rendering without it.
However, the problem is that, even if you use DSA, backwards compatibility means that you don’t have to. What then happens to multithreaded rendering in that case? Does the spec just say, “attempting to call functions X, Y, Z will cause undefined behavior when threading?”
Yes, I know what DSA means. I was not talking about using glGet* and push/pop before EVERY state change; I know that would kill performance. I was talking about caching the most important state in the app on the CPU side (tracking binding points etc.).
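A minimal sketch of that kind of client-side cache, assuming a texture-unit tracker (the class name and the injected `BindFn` are made up here so the idea runs without a GL context; in a real app `BindFn` would be `glActiveTexture` + `glBindTexture`):

```cpp
#include <cassert>
#include <unordered_map>

// Client-side state cache: remember what is bound to each texture unit
// and skip the driver call when the binding would not actually change.
template <typename BindFn>
class TextureBindingCache {
public:
    explicit TextureBindingCache(BindFn bind) : bind_(bind) {}

    void bind(unsigned unit, unsigned texture) {
        auto it = bound_.find(unit);
        if (it != bound_.end() && it->second == texture)
            return;               // redundant bind: filtered out on the CPU
        bound_[unit] = texture;
        bind_(unit, texture);     // only now hit the real GL entry point
    }

private:
    BindFn bind_;
    std::unordered_map<unsigned, unsigned> bound_;  // unit -> texture name
};
```

This avoids glGet* entirely: the app trusts its own shadow copy of the binding state instead of asking the driver.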
I agree that DSA would be nice and more efficient. But the reality is that we don’t have it in OpenGL 4.0.
I suggested creating a thin layer (a wrapper for OpenGL resources) which would “emulate DSA” for non-NVIDIA hardware and use the fast path (DSA) on NV hardware. I have written such an abstraction and it works well. So, do not complain. Be happy with OpenGL 4.0. It’s getting better and better.
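Such a layer might look roughly like this. The GL entry points are stubbed as plain functions so the example is self-contained; the names (`GLDevice`, `textureParameteri`, the DSA/bound stand-ins) are hypothetical stand-ins for the real GL and EXT_direct_state_access calls:

```cpp
// Stand-ins for GL state and entry points, for illustration only.
static unsigned g_bound2D = 0;  // the context's current 2D texture binding

static void bindTexture2D(unsigned tex)                 { g_bound2D = tex; }
static void texParameteriBound(int /*pname*/, int /*v*/) {}  // acts on g_bound2D
static void textureParameteriDSA(unsigned /*tex*/, int /*pname*/, int /*v*/) {}

struct GLDevice {
    bool hasDSA;  // e.g. detected via the GL_EXT_direct_state_access string

    // One call site for the app; the wrapper picks the path.
    void textureParameteri(unsigned tex, int pname, int value) {
        if (hasDSA) {
            textureParameteriDSA(tex, pname, value);  // fast path: no bind
        } else {
            unsigned prev = g_bound2D;   // save the current binding
            bindTexture2D(tex);
            texParameteriBound(pname, value);
            bindTexture2D(prev);         // restore, so callers never observe
        }                                // a changed binding point
    }
};
```

The key design point is that the bind/restore dance is hidden in one place, so when real DSA arrives only the wrapper changes, not the call sites.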
Very pleasantly surprised by this OpenGL release.
Love the new stuff.
Drawing without CPU intervention is fantastic!
It saves a lot of valuable CPU cycles and makes OpenGL very efficient.
Good to see instancing going further.
The timer query stuff is going to be really handy.
It makes it possible for programs to run a mini-benchmark on themselves.
Add to this the new shader subroutine flexibility.
It’s going to be possible to write programs that might optimize themselves dynamically at runtime.
The only thing that’s missing is DSA.
+1 for the DSA.
This is a very good release with a lot of nice goodies.
Khronos is doing a great job improving OpenGL. Kudos for that.
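The mini-benchmark idea mentioned above could be sketched like this. The query functions are stubbed with a fake 1 ms result so the sketch runs without a GL context, but the call pattern mirrors the real timer-query API (glGenQueries / glBeginQuery(GL_TIME_ELAPSED, …) / glEndQuery / glGetQueryObjectui64v):

```cpp
#include <cstdint>

static const unsigned TIME_ELAPSED = 0x88BF;  // value of GL_TIME_ELAPSED

// Stubs standing in for the real GL calls, for illustration only.
static void genQueries(int n, unsigned* ids)      { for (int i = 0; i < n; ++i) ids[i] = i + 1; }
static void beginQuery(unsigned /*target*/, unsigned /*id*/) {}
static void endQuery(unsigned /*target*/)         {}
static void getQueryObjectui64(unsigned /*id*/, uint64_t* result) { *result = 1000000; }  // fake 1 ms

// Measure GPU time spent in a stretch of draw calls, in nanoseconds.
static uint64_t timeDrawCallNs() {
    unsigned q = 0;
    genQueries(1, &q);
    beginQuery(TIME_ELAPSED, q);
    // ... issue the draw calls to be measured here ...
    endQuery(TIME_ELAPSED);
    uint64_t ns = 0;
    getQueryObjectui64(q, &ns);  // real code should first poll for
    return ns;                   // query-result availability
}
```

With results like these, a program could pick between shader variants (or subroutines) at runtime based on which one actually runs faster on the user’s GPU.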