I’m using 197.15, and I’m not sure whether the following is expected behavior.
I’m calling glGetActiveUniformBlockiv with GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES to get the list of active uniform indices in a uniform block, then passing that list to glGetActiveUniformsiv to query each uniform’s type and offset. The issue is that the glGetActiveUniformsiv call fails with GL_INVALID_VALUE if one of the uniforms in the block isn’t referenced in the shader.
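For reference, the two-step query described above looks roughly like this. This is a sketch only, assuming `prog` is a linked program object and `blockIndex` is a valid uniform block index; allocation and error checking are omitted:

```c
GLint count = 0;
glGetActiveUniformBlockiv(prog, blockIndex,
                          GL_UNIFORM_BLOCK_ACTIVE_UNIFORMS, &count);

/* indices, types, offsets: caller-allocated arrays of count GLints */
glGetActiveUniformBlockiv(prog, blockIndex,
                          GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices);

/* Querying properties of those indices is where the reported
   GL_INVALID_VALUE error occurs for unreferenced uniforms. */
glGetActiveUniformsiv(prog, count, (const GLuint *)indices,
                      GL_UNIFORM_TYPE, types);
glGetActiveUniformsiv(prog, count, (const GLuint *)indices,
                      GL_UNIFORM_OFFSET, offsets);
```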
That’s contrary to the spec. Because you gave it a “shared” layout, the uniforms are all automatically considered active. Otherwise sharing wouldn’t work.
That’s correct. It has already been fixed and will appear in our next OpenGL driver release.
As far as I can tell, an implementation is not required to report unreferenced uniforms in “shared” uniform blocks as active. It is required to allocate buffer storage for unreferenced uniforms because the same block layout may be used by other shaders where those uniforms are active. NVIDIA’s drivers currently treat all such uniforms as active, but that isn’t required by the spec.
Regardless, there still was a driver bug in this case, as the driver was assigning and reporting an active uniform index for the unreferenced uniform but then rejecting that index in glGetActiveUniformsiv().
It would not be a bug if a driver handled such uniforms by returning GL_INVALID_INDEX from glGetUniformIndices() and not enumerating them in glGetActiveUniformBlockiv(…, GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, …).
Any hope of having the “precise” qualifier outside of the EXT_gpu_shader5 extension? It’s not a GPU feature, only a compiler feature…
i.e. not only on Fermi and Cypress GPUs; I want it on GT200, for example…
It’s a real problem, since double-precision emulation on D3D10-class GPUs using float-float approaches gets broken by the NVIDIA compiler’s optimizations!
Example of code that gets optimized:
I’ve started to try some stuff out in 3.3 but have fallen at the starting blocks: I can’t seem to create a 3.3 context.
I’m working on a Win7 64bit PC, with an NVIDIA GeForce 9600GT and 197.45 drivers. I can set up 3.0, 3.1 and 3.2 contexts without any problems.
After calling wglCreateContext, glGetString(GL_VERSION) reports only 3.2, and if I try to create a 3.3 context with wglCreateContextAttribsARB it returns NULL.
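For reference, a minimal sketch of requesting a 3.3 context via WGL_ARB_create_context. This assumes `hdc` is a valid device context and the `wglCreateContextAttribsARB` pointer has already been fetched with wglGetProcAddress from a temporary context:

```c
const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    0  /* list terminator */
};

/* second argument NULL: no share context */
HGLRC rc = wglCreateContextAttribsARB(hdc, NULL, attribs);
if (rc == NULL) {
    /* GetLastError() returning ERROR_INVALID_VERSION_ARB means the
       driver cannot create a context of the requested version, which
       matches the behavior described (driver tops out at 3.2). */
}
```

A NULL return here with 3.0/3.1/3.2 succeeding typically means the installed driver simply does not expose 3.3 on that GPU/driver combination yet.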