NVIDIA releases OpenGL 3.3 drivers

I’m using 197.15, and I’m not sure if the following is expected or not.

I’m calling glGetActiveUniformBlockiv with GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES to get the list of active uniforms in a uniform block, then passing that list to glGetActiveUniformsiv to get each uniform’s type and offset. The issue is that the glGetActiveUniformsiv calls fail with GL_INVALID_VALUE if one of the uniforms in the block isn’t referenced in the shader. The block is declared as:

layout(shared) uniform PerMaterial
{
   vec3 ambientColour;
   vec3 diffuseColour;
   vec3 specularColour;
   float specPower;
};
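
For reference, the query sequence I’m using looks roughly like this (a minimal sketch; “prog” is the linked program and all error handling is omitted):

void queryPerMaterial(GLuint prog)
{
    GLuint blockIndex = glGetUniformBlockIndex(prog, "PerMaterial");

    GLint numUniforms = 0;
    glGetActiveUniformBlockiv(prog, blockIndex,
                              GL_UNIFORM_BLOCK_ACTIVE_UNIFORMS, &numUniforms);

    GLint indices[16];              /* assumes numUniforms <= 16 */
    glGetActiveUniformBlockiv(prog, blockIndex,
                              GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, indices);

    GLint types[16], offsets[16];
    glGetActiveUniformsiv(prog, numUniforms, (const GLuint *)indices,
                          GL_UNIFORM_TYPE, types);
    /* this is the call that fails with GL_INVALID_VALUE when one of
       the block's uniforms isn't referenced in the shader */
    glGetActiveUniformsiv(prog, numUniforms, (const GLuint *)indices,
                          GL_UNIFORM_OFFSET, offsets);
}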

Is this expected behaviour?

Thanks & Regards
elFarto

The issue is that the glGetActiveUniformsiv calls fail with GL_INVALID_VALUE if one of the uniforms in the block isn’t referenced in the shader.

That’s contrary to the spec. Because you gave it a “shared” layout, the uniforms are all automatically considered active. Otherwise sharing wouldn’t work.

Good, that’s what I thought should happen.

I’ve had it confirmed that this works as expected on ATI cards.

Regards
elFarto

This is a known bug, see here:
http://www.opengl.org/discussion_boards/…true#Post274302

That’s correct. It has already been fixed and will appear in our next OpenGL driver release.

As far as I can tell, an implementation is not required to report unreferenced uniforms in “shared” uniform blocks as active. It is required to allocate buffer storage for unreferenced uniforms because the same block layout may be used by other shaders where those uniforms are active. NVIDIA’s drivers currently treat all such uniforms as active, but that isn’t required by the spec.

Regardless, there still was a driver bug in this case, as the driver was assigning and reporting an active uniform index for the unreferenced uniform but then rejecting that index in glGetActiveUniformsiv().

It would not be a bug if a driver handled such uniforms by returning GL_INVALID_INDEX from glGetUniformIndices() and not enumerating them in glGetActiveUniformBlockiv(…, GL_UNIFORM_BLOCK_ACTIVE_UNIFORM_INDICES, …).
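
So portable code should look uniforms up by name and be prepared to skip GL_INVALID_INDEX, along these lines (an illustrative sketch, not code from our driver; “prog” is your linked program and the names come from the block above):

const GLchar *names[4] = { "ambientColour", "diffuseColour",
                           "specularColour", "specPower" };
GLuint indices[4];
GLint  offsets[4] = { -1, -1, -1, -1 };

glGetUniformIndices(prog, 4, names, indices);
for (int i = 0; i < 4; ++i) {
    if (indices[i] == GL_INVALID_INDEX)
        continue;   /* legal for an unreferenced uniform in a shared
                       block: it may simply not be active in this program */
    glGetActiveUniformsiv(prog, 1, &indices[i],
                          GL_UNIFORM_OFFSET, &offsets[i]);
}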

I would really love to see it in the next OpenGL driver release.

Any hope of getting the “precise” qualifier outside of the ARB_gpu_shader5 extension? It’s not a GPU feature, only a compiler feature, i.e. I’d like it not only on Fermi and Cypress GPUs but also on a GT200, for example. As it stands, double-precision emulation on D3D10-class GPUs using float-float approaches gets broken because the NVIDIA compiler optimizes (reassociates and contracts) the carefully ordered expressions. Example of code that gets optimized:

vec2 dblsgl_add (vec2 x, vec2 y)
{
    precise vec2 z;
    float t1, t2, e;

    // two-sum on the high parts: t1 is the rounded sum, and the
    // (y.y - e) + (x.y - (t1 - e)) term recovers the rounding error
    t1 = x.y + y.y;
    e = t1 - x.y;
    t2 = ((y.y - e) + (x.y - (t1 - e))) + x.x + y.x;

    // renormalise into a high/low pair
    z.y = e = t1 + t2;
    z.x = t2 - (e - t1);
    return z;
}

vec2 dblsgl_mul (vec2 x, vec2 y)
{
    precise vec2 z;
    float up, vp, u1, u2, v1, v2, mh, ml;

    // Dekker-style split of each high part into upper/lower halves
    // (4097 = 2^12 + 1 splits a 24-bit float mantissa)
    up = x.y * 4097.0;
    u1 = (x.y - up) + up;
    u2 = x.y - u1;
    vp = y.y * 4097.0;
    v1 = (y.y - vp) + vp;
    v2 = y.y - v1;

    //mh = __fmul_rn(x.y, y.y);
    mh = x.y * y.y;
    // exact rounding error of the high-part product
    ml = (((u1 * v1 - mh) + u1 * v2) + u2 * v1) + u2 * v2;
    //ml = (__fmul_rn(x.y, y.x) + __fmul_rn(x.x, y.y)) + ml;
    ml = (x.y * y.x + x.x * y.y) + ml;

    // renormalise into a high/low pair
    z.y = up = mh + ml;
    z.x = (mh - up) + ml;
    return z;
}
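
For what it’s worth, the add is built on the classic two-sum error-free transformation. Here it is in plain C (a sketch that assumes strict IEEE float evaluation, e.g. compiled without -ffast-math and with -ffp-contract=off, which is exactly the per-expression guarantee “precise” is supposed to give on the GPU):

#include <stdio.h>

/* Two-sum: returns s = round(a + b) and the exact rounding error err,
   so that a + b == s + err. Breaks if the compiler reassociates or
   contracts these expressions. */
static void two_sum(float a, float b, float *s, float *err)
{
    float t1 = a + b;
    float e  = t1 - a;
    *err = (b - e) + (a - (t1 - e));
    *s   = t1;
}

int main(void)
{
    float s, err;
    two_sum(1.0f, 1e-10f, &s, &err);
    /* s is 1.0; err recovers the 1e-10 that was lost in rounding */
    printf("s = %.9g, err = %.9g\n", s, err);
    return 0;
}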

Seriously, you don’t need to post this everywhere. The GL suggestions forum was fine.

ok sorry

Hi,

I’ve started to try some stuff out in 3.3 but have fallen at the first hurdle: I can’t seem to create a 3.3 context :(

I’m working on a Windows 7 64-bit PC with an NVIDIA GeForce 9600GT and 197.45 drivers. I can set up 3.0, 3.1 and 3.2 contexts without any problems.

After calling wglCreateContext, glGetString(GL_VERSION) reports only a 3.2 version, and if I try to create a 3.3 context with wglCreateContextAttribsARB it returns NULL.
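
For reference, the attribute list I’m passing looks roughly like this (a sketch of what I’m doing, with the rest of my setup code trimmed; hdc is my window’s device context):

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,   /* 0, 1 or 2 here all work */
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};
HGLRC rc = wglCreateContextAttribsARB(hdc, NULL, attribs);
/* rc comes back NULL only when I ask for 3.3 */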

Any ideas what could be wrong?

James, you need driver 197.44 (not 197.45). You can get that here:

http://developer.nvidia.com/object/opengl_driver.html

Regards,
Barthold
(with my NVIDIA hat on)

That seems to have done the job, thank you very much :)