Migrating from XP to Win7x64 - UBO error

Hi,

After migrating from XP x86 to Windows 7 x64 (NVIDIA driver 197.45), my application is no longer working correctly.
When creating the vertex shader, I get the following error log during compilation:


Vertex info
-----------
0(36) : error C5058: no buffers available for bindable uniform 
....
0(36) : error C5058: no buffers available for bindable uniform

The same error line is repeated exactly 245 times.
Here is my shader code:


#version 150   
uniform samplerCube heightmap; // Height value
in      vec2        mPosition; // Vertex position                 
in      vec3        mWeights;  // Vertex weights
        
out     vec3        fNormal;   // Vertex normal and texcoord

// Whole planet specific data
// (shared across all sections of program)
layout(std140) 
uniform Planet
        {
        mat4  matMVP;     // Model-View-Projection matrix for planet     
        vec3  correction; // Unit sphere center correction           
        uint  level;      // Tessellation switch-off sector level
        float r;          // Planet radius
        float multiplier; // Planet height multiplier
        } planet;

// Sector specific data array
layout(std140) 
uniform Sectors 
        {
        vec4 v0;          // [v0.nx][v0.ny][v0.nz][ lvl ]
        vec4 v1;          // [v1.nx][v1.ny][v1.nz][-----]
        vec4 v2;          // [v2.nx][v2.ny][v2.nz][-----]
        uint lvl;
        } sectors[256];
       
void main(void)                            
{      
// Corner positions, normals, texcoords
vec3  n0 = sectors[gl_InstanceID].v0.xyz;
vec3  n1 = sectors[gl_InstanceID].v1.xyz;
vec3  n2 = sectors[gl_InstanceID].v2.xyz;

// Sector level of subdivision
uint lvl = sectors[gl_InstanceID].lvl; //v0.z;

// Read weights for interpolation
float w0 = mWeights.x; 
float w1 = mWeights.y;
float w2 = mWeights.z;

// Linearly interpolate the normal (also used as the cubemap texcoord)
fNormal  = normalize(w0*n0 + w1*n1 + w2*n2);

// Sampling terrain height in vertex position
float height   = texture(heightmap,fNormal).x * planet.multiplier;
vec3 elevation = fNormal.xyz * height; 

vec4 base_position;

if (lvl < planet.level)
    base_position = vec4(fNormal.xyz, 1.0);
else
    {
    mat4 sectorMatrix;
   
    vec3 rb = vec3(n2 - n1); 
    vec3 rc = vec3(n1 - n0); 

    sectorMatrix[0] = vec4(rc, 0.0);
    sectorMatrix[1] = vec4(rb, 0.0);
    sectorMatrix[2] = vec4(cross(rb,rc),  0.0);
    sectorMatrix[3] = vec4(n0, 1.0); 

    base_position = sectorMatrix * vec4(mPosition.x, mPosition.y, 0.0, 1.0);
    }

gl_Position = planet.matMVP * (base_position + vec4(elevation - planet.correction, 0.0));
}

I tried shrinking the sectors array, e.g. to 128, 10, even 1, but I always get this error.

Have any of you run into similar problems?
Any clues as to what could have changed between these drivers?
(Or maybe something is wrong in the shader?)

Karol

Interesting. The shader looks OK to me. This may be a driver bug. Try different driver versions.

When I pasted this section into the fragment shader:


// Sector specific data array
layout(std140) 
uniform Sectors 
        {
        vec4 v0;          // [v0.nx][v0.ny][v0.nz][ lvl ]
        vec4 v1;          // [v1.nx][v1.ny][v1.nz][-----]
        vec4 v2;          // [v2.nx][v2.ny][v2.nz][-----]
        uint lvl;
        } sectors[256];

I got an identical error log for the fragment shader.
It looks like the driver interprets this declaration:


...
} sectors[256];

NOT as a uniform buffer containing an array, but as an array of uniform buffers.
The error log mentioned earlier appears at program link time (shader object compilation returns no errors).
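
If the program does link (it does with a small enough array), a quick query makes the misinterpretation visible: every array element shows up as its own active uniform block. A minimal C sketch, assuming a linked program object `program` and whatever extension loader you already use:


#include <stdio.h>
#include <GL/glew.h> /* or whichever loader you use */

void dump_uniform_blocks(GLuint program)
{
    GLint maxBlocks, activeBlocks;

    /* Per-stage limit on uniform buffer bindings (minimum 12 in GL 3.2). */
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_BLOCKS, &maxBlocks);
    printf("GL_MAX_VERTEX_UNIFORM_BLOCKS = %d\n", maxBlocks);

    /* Under the array-of-blocks interpretation, the linker reports one
       active block per element: "Sectors[0]", "Sectors[1]", ... */
    glGetProgramiv(program, GL_ACTIVE_UNIFORM_BLOCKS, &activeBlocks);
    for (GLint i = 0; i < activeBlocks; ++i) {
        char name[64];
        glGetActiveUniformBlockName(program, (GLuint)i, sizeof(name), NULL, name);
        printf("block %d: %s\n", i, name);
    }
}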

I have also tested the 197.44 developer drivers with an OpenGL 3.3 core context (previously 3.2 core), and the results are the same.

Could you upload your application file?

Yes, that is exactly the problem. From the GLSL 4.00 spec (p. 41):

For uniform blocks declared as an array, each individual array element corresponds to a separate buffer
object backing one instance of the block. As the array size indicates the number of buffer objects needed,
uniform block array declarations must specify an array size. Any integral expression can be used to index
a uniform block array, as per section 4.1.9 “Arrays”.

To get an array of structures in a single buffer, you need to do something like:


struct Sector
        {
        vec4 v0;          // [v0.nx][v0.ny][v0.nz][ lvl ]
        vec4 v1;          // [v1.nx][v1.ny][v1.nz][-----]
        vec4 v2;          // [v2.nx][v2.ny][v2.nz][-----]
        uint lvl;
        };

layout(std140) 
uniform Sectors 
        {
        Sector sectors[256];
        };
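
On the application side, std140 then gives each array element a fixed 64-byte stride: three vec4s, then the uint, with the struct size rounded up to a multiple of 16. A matching C layout would be roughly as follows (a sketch; the explicit padding is the important part):


typedef struct {
    float        v0[4];   /* offset  0 */
    float        v1[4];   /* offset 16 */
    float        v2[4];   /* offset 32 */
    unsigned int lvl;     /* offset 48 */
    unsigned int pad[3];  /* element rounded up to a 64-byte stride */
} Sector;                 /* sizeof(Sector) == 64, the std140 array stride */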

I’m pretty sure that some earlier pre-4.0 NVIDIA drivers incorrectly interpreted this construct as an array of structures within a single buffer, which is presumably what you wanted here. I ran into this problem when exercising indexing into an array of buffers, which is a new feature in OpenGL 4.0. In OpenGL 3.3, you could declare an array of buffers, but you had to use a constant integer expression to index into it.

I’m sorry that this bug caused you problems.

EDIT: Removed the incorrect cut-and-pasted layout() qualifier on the struct declaration in my new example.

Hi,

Yes, people on the NVIDIA site also confirmed that :).

by Chris Dodd:

Yes, this declaration does declare an array of uniform buffers (and not an array in a single uniform buffer) – earlier drivers had a ‘bug’ here in that if the array was too big (more than the max 12 uniform buffers or whatever it was), they would treat it as an array in single buffer, but that violates the GL spec.

If you want an array in a single buffer, you need to define this as an array of structs all inside a single buffer with no index on the buffer itself.

My solution is now:


...
// Sector specific data array
layout(std140) 
uniform Sectors 
        {
        struct Descriptor
               {
               vec4 v0;          // [v0.nx][v0.ny][v0.nz][ lvl ]
               vec4 v1;          // [v1.nx][v1.ny][v1.nz][v2.tx]
               vec4 v2;          // [v2.nx][v2.ny][v2.nz][v2.ty]
               vec4 t0t1;        // [v0.tx][v0.ty][v1.tx][v1.ty]
               uint lvl;
               } tab[256];
        } sectors;
       
void main(void)                            
{      
// Corner positions and normals
vec3  n0 = sectors.tab[gl_InstanceID].v0.xyz;
vec3  n1 = sectors.tab[gl_InstanceID].v1.xyz;
vec3  n2 = sectors.tab[gl_InstanceID].v2.xyz;
...

And now it works. It really makes sense to me now; I always wondered why an array of structs had to be declared that way (the old one). The problem I still see is that this method of instancing is shown in a lot of tutorials in the old form, and people think it is correct :p. For completeness, the host-side setup I use to feed the block is sketched below.
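
This is a C sketch, not a drop-in implementation: the binding point and usage hint are arbitrary, and the 80-byte stride follows from std140 (four vec4s plus a uint, rounded up to a multiple of 16):


typedef struct {
    float        v0[4], v1[4], v2[4], t0t1[4]; /* offsets 0, 16, 32, 48 */
    unsigned int lvl;                          /* offset 64 */
    unsigned int pad[3];                       /* stride rounded up to 80 bytes */
} Descriptor;

Descriptor tab[256];
/* ... fill tab ... */

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(tab), tab, GL_DYNAMIC_DRAW);

/* Attach the buffer to binding point 0 and point the block at it. */
GLuint blockIndex = glGetUniformBlockIndex(program, "Sectors");
glUniformBlockBinding(program, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

One portability caveat: later GLSL specs explicitly say that structure definitions cannot be nested inside an interface block, so although this driver accepts the embedded Descriptor definition, declaring the struct outside the block (as in the earlier post) is the more conformant form.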

Thanks a lot :).
