I’ve got a vertex shader that calculates an index into an array of vec3s as follows:
int index = int( grayValue*255.0 );
vec3 color = colorLUT[ index ];
Where grayValue is a normalized float and colorLUT is the vec3 array.
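For reference, here is a CPU-side sketch of that lookup index, assuming a 256-entry LUT. It also shows a clamp, since `int(grayValue * 255.0)` truncates and any grayValue slightly outside [0, 1] would read out of bounds:

```c
#include <assert.h>

/* CPU-side reference of the shader's index calculation (hypothetical
   helper, assuming a 256-entry LUT). */
static int lut_index(float grayValue) {
    int index = (int)(grayValue * 255.0f); /* same truncation as GLSL int() */
    if (index < 0)   index = 0;            /* clamp: out-of-range reads are */
    if (index > 255) index = 255;          /* undefined in a shader         */
    return index;
}
```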
When compiling and running on ATI, I get the error “available number of constants exceeded” and the OpenGL implementation falls back to software rendering on the CPU.
Does anyone understand why this is happening, and how to work around it?
Just to make this clear, when I do the following, I don’t get the error:
vec3 color = colorLUT[ 0 ];
My guess is that in this case the compiler ignores array elements 1 through N and doesn’t emit them in the resulting assembly. When the index is computed at runtime it can’t make that simplifying assumption, so it includes every array element and exceeds the constant limit (duh!)
But how can I work around this? I know that on nvidia I can use a texture fetch for the lookup. But what can I do on ATI?
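The texture-fetch idea amounts to packing the 256 vec3 entries into a 256×1 texture and addressing them by coordinate instead of by constant index. A minimal CPU-side sketch of the coordinate math (in real code the table would be uploaded with glTexImage2D and sampled in the shader; the half-texel offset keeps a nearest-neighbour fetch centred on the intended entry):

```c
#include <assert.h>

/* Hypothetical helpers mapping a LUT index to a texture coordinate and
   back, for a 256-texel-wide 1D table. */
static float index_to_texcoord(int index) {
    /* centre of texel `index` in a 256-texel-wide texture */
    return ((float)index + 0.5f) / 256.0f;
}

static int texcoord_to_texel(float s) {
    /* which texel a nearest filter resolves `s` to */
    int t = (int)(s * 256.0f);
    if (t < 0)   t = 0;
    if (t > 255) t = 255;
    return t;
}
```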
How large is the table in question?
The LUT is 256 colors, for a total of 768 floats. This likely exceeds the value of GL_MAX_VERTEX_UNIFORM_COMPONENTS_ARB.
I guess this post is just me banging my head against the hardware & software limitations of ATI’s “Smart Shader 2.0”. I ran into the 1024 ALU instruction limit trying to calculate the color in the shader, so I tried a lookup table using a texture fetch (which is unsupported in the vertex shader on this hardware – so I didn’t get very far) and then using an array.
I have since tried moving the texture fetch to the fragment shader, while calculating the index in the vertex shader and passing it on as a varying variable. This strategy worked but produced weird artifacts, which is perhaps a topic for a different post.
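One plausible cause of those artifacts (an assumption, not a diagnosis): a varying is linearly interpolated across the triangle, so the fragment shader ends up fetching with a blended index, which is not the same as blending the fetched colors when the LUT is nonlinear. A toy single-channel illustration with a hypothetical 4-entry table:

```c
#include <assert.h>

/* Hypothetical nonlinear 1-channel LUT for illustration. */
static float lut[4] = { 0.0f, 0.1f, 0.9f, 1.0f };

/* Index computed per vertex, interpolated as a varying, fetched per fragment. */
static float lookup_interpolated_index(int a, int b, float t) {
    float idx = (1.0f - t) * (float)a + t * (float)b; /* varying interpolation */
    return lut[(int)(idx + 0.5f)];                    /* then a single fetch   */
}

/* Color fetched per vertex, then interpolated. */
static float interpolate_lookups(int a, int b, float t) {
    return (1.0f - t) * lut[a] + t * lut[b];
}
```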
I guess I’m hoping there is some magic way to work around the limitations of “Smart Shader 2.0”.
Well, on the R300 you only have 192 vertex shader constants (192 * vec4). On R420 you have 256. Your best option is to simply reduce the size of the table if possible.
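To put numbers on it: if each array element is padded out to a full vec4 constant register (common for uniform arrays), the 256-entry vec3 table alone needs 256 registers, before the driver reserves any slots for built-in state such as matrices. A back-of-the-envelope sketch, where the `reserved` count is an assumption since the exact number is implementation-dependent:

```c
#include <assert.h>

/* One vec4 register per padded vec3 entry (assumed packing). */
static int lut_registers(int entries) {
    return entries;
}

/* Does the table fit alongside `reserved` driver-used registers? */
static int fits(int entries, int limit, int reserved) {
    return lut_registers(entries) <= limit - reserved;
}
```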