I just want to say thank you to the people who made the GL 4.1 specification, and that it happened so quickly. A major thank you for putting separate shader objects in! The following is not a request for features, just for something in the spec:
There are a lot of shader stages running around now; I would love to see a diagram of this “pipeline” in the GL specification, or maybe on the reference card? Something like what is in GL_ARB_tessellation_shader.txt in the answer to Issue (1).
If we use transform feedback or glBindAttribLocation (is it really a problem?) and maybe a couple of other linking-related cases… we can’t use separate shaders, or only with a big workaround. As I am at SIGGRAPH I haven’t had time to take a deep look at everything, so I might have missed something here.
I don’t see any unpleasant interactions with either of these. The only potential problem is if you want to use glCreateShaderProgram all the time. Since it goes directly from source strings to a linked program object, there’s no chance to call any of the pre-link settings functions. However, separate shader objects don’t require that you use glCreateShaderProgram; it’s simply a convenience function for those whose needs are simple enough to allow them to use it.
It would however be great if there were a version of glCreateShaderProgram that didn’t perform compiling or linking: one that returned some kind of object you could call the pre-link settings functions on, and then another function to do the compiling and linking.
Either that, or set up transform feedback parameters in the shader source somehow.
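For reference, here is a minimal sketch (not taken from the spec; the shader source, attribute and varying names are placeholders, and error checking is omitted) of the manual path that still works with separate shader objects: create the program yourself, mark it separable, apply the pre-link settings, then compile and link.

```c
#include <GL/glew.h>   /* assuming a loader such as GLEW */

/* Sketch: build a separable vertex-stage program while still using
 * pre-link settings such as glBindAttribLocation and
 * glTransformFeedbackVaryings, which the one-shot convenience
 * function skips. */
GLuint build_separable_vs(const GLchar *vs_source)
{
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &vs_source, NULL);
    glCompileShader(shader);

    GLuint program = glCreateProgram();
    /* Allow the program to be bound to a stage of a program pipeline. */
    glProgramParameteri(program, GL_PROGRAM_SEPARABLE, GL_TRUE);

    /* Pre-link settings remain available on this path. */
    glBindAttribLocation(program, 0, "position");
    const GLchar *varyings[] = { "out_position" };
    glTransformFeedbackVaryings(program, 1, varyings, GL_INTERLEAVED_ATTRIBS);

    glAttachShader(program, shader);
    glLinkProgram(program);
    glDetachShader(program, shader);
    glDeleteShader(shader);
    return program;
}
```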
One other thing I wonder: would it be reasonable to release a project with just the shader binaries? For example: if I build an OpenGL 3.3 application with GLSL 3.3 shaders, could we just ship one binary for AMD and one for nVidia? Would these still work over time… with newer drivers (and updated GLSL compilers)?
If not, is the use case more: build once when we install the software, or rebuild when the shader binary load fails for any reason (updated drivers, updated graphics card, …)?
Yes! That is the kind of diagram I am looking for… this looks familiar though; was it in a PowerPoint or something when GL4 was first released (back in March)?
Though it is not exactly what I am begging for, it is so close that chances are I am just getting greedy:
What I would like, and the difference I am asking for is really, really tiny, is this [so tiny I feel almost ashamed to ask]:
For each shader stage:
explicit arrows (that PDF has this in the long line broken into 4 bits on pages 2 and 3), but with the difference that the arrows are marked as in, out, in patch, out patch, etc. The other bit (and that PDF has this too to some extent) is “something” showing what happens when one of the optional shaders is not part of a GLSL program. Being so shader-oriented, a diagram without the compatibility pipeline too.
I’d imagine the picture I am after might take more pages though, and one can make a pretty strong case that what I am asking for is just a tiny, tiny (epsilon) difference from what you just gave. The core of what I am begging for is some text for the arrows, the text being what one writes in GLSL (and to a lesser extent GL)…
The latter. The binary shader approach in GL 4.1 is not a distribution format. It enables you to cache a compiled shader for re-loading at a later time on the same machine. OpenGL is free to deny that request for any reason, in which case you would need to resubmit source to compile the shader (and then you could re-query and re-save the binary).
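As an illustration of that cache-and-fall-back pattern, here is a minimal sketch assuming GL 4.1 / ARB_get_program_binary. build_program_from_source and save_binary_to_cache are hypothetical helpers, and error handling is kept to the link-status check.

```c
#include <stdlib.h>
#include <GL/glew.h>   /* assuming a loader such as GLEW */

/* Hypothetical helpers, not part of any real API. */
extern GLuint build_program_from_source(void);
extern void save_binary_to_cache(const void *data, GLsizei size, GLenum format);

/* Sketch: try to reuse a previously saved program binary; if the driver
 * rejects it (updated driver, different GPU, ...), rebuild from source
 * and re-save the fresh binary. */
GLuint load_program(const void *binary, GLsizei length, GLenum format)
{
    GLuint program = glCreateProgram();
    glProgramBinary(program, format, binary, length);

    GLint status = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status != GL_TRUE) {
        glDeleteProgram(program);
        program = build_program_from_source();

        GLint size = 0;
        glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &size);
        void *buffer = malloc(size);
        GLenum new_format = 0;
        glGetProgramBinary(program, size, NULL, &new_format, buffer);
        save_binary_to_cache(buffer, size, new_format);
        free(buffer);
    }
    return program;
}
```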
Great work, with separate shaders and binary shaders finally around! Good to see it matching DX11 now; I guess the only major thing missing is threaded resource manipulation.
I am building GLEW myself from the SVN. It lacks the 4.1 core functions, and I had to disable ARB_cl_event, as well as manually tweak the debug output callback. I’m fine with that, since I only care about the debug functionality.
Also, is there a way to find out whether a context was created with the debug bit set?
It seems that glGetIntegerv(GL_CONTEXT_FLAGS) will only return GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT (0x01), which has a different value than WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB (0x02), so I cannot test against WGL_CONTEXT_DEBUG_BIT_ARB (0x01).
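To make the problem concrete, here is a small sketch of the only query that works at this point; the debug bit simply has no counterpart in GL_CONTEXT_FLAGS in GL 4.1, and the WGL bit values collide with the GL ones.

```c
/* Sketch: querying the context flags from the GL side. In GL 4.1 only the
 * forward-compatible bit is defined for GL_CONTEXT_FLAGS. */
GLint flags = 0;
glGetIntegerv(GL_CONTEXT_FLAGS, &flags);
if (flags & GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT) {
    /* forward-compatible context */
}
/* Note: WGL_CONTEXT_DEBUG_BIT_ARB is 0x0001, the same value as
 * GL_CONTEXT_FLAG_FORWARD_COMPATIBLE_BIT, so testing the WGL debug bit
 * against GL_CONTEXT_FLAGS would report a false positive rather than a
 * debug context. */
```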
I am currently hacking Qt to create a debug context; it uses GL_CONTEXT_FLAGS to check whether the created context matches what was requested.
I think this might also be useful in cases where a middleware/library has some debugging facilities but doesn’t control context creation.
Yes, I have shown those slides in the past and tweaked them a bit. If you want to give it a shot and improve upon them, I’ll be happy to mail you the PPT file. The whole rasterization stage and below still needs to be added as well.
And the new reference pages are also a good thing. Good job!
But now I can see another valuable feature (apart from, of course, including DSA in core, or maybe even removing all pre-DSA functions).
Maybe the reference pages should be extended with (perhaps somehow standardised) debug_output errors? It would be much easier than relying only on INVALID_VALUE and the other error enums.