Hi, I am just learning OpenGL ES 2.0, and I am trying to use an array of color values for my vertex colors. Rather than floats, I wanted to use unsigned chars for the RGB values to save memory.
It seems to work but the shader compile is producing this warning:
WARNING: 0:25: Overflow in implicit constant conversion, minimum range for lowp float is (-2,2)
Here is the shader code I am using. I assume the divide by 255 is producing the warning? What is the correct way to do this? Surely passing floats for colors is not a better option?
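(The original shader wasn't preserved in this thread, but from the discussion the problematic line is roughly this shape. Everything here is a sketch; only `color`, `colorVarying`, and `nDotVP` are named in the thread, and the other attribute/uniform names are guesses.)

```glsl
attribute vec4 position;
attribute vec3 normal;
attribute lowp vec4 color;        // fed unsigned bytes via glVertexAttribPointer

varying lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;
uniform vec3 lightDirection;

void main()
{
    // simple diffuse term
    float nDotVP = max(0.0, dot(normal, normalize(lightDirection)));

    // the line that triggers the warning: 255.0 cannot be
    // represented in a lowp expression (minimum range is -2..2)
    colorVarying = color / 255.0 * nDotVP;

    gl_Position = modelViewProjectionMatrix * position;
}
```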
First thing I want to say: don’t worry about optimization until you have it working.
That aside, shaders aren’t really my strong point, but from what I can see, the line with the colorVarying assignment looks a bit messy.
You’re passing in attribute lowp vec4 color. vec4 is a vector of four floats, and if it’s lowp, the range is -2 to 2. I’m pretty sure that when you pass it unsigned bytes via glVertexAttribPointer (make sure you use parameter GL_UNSIGNED_BYTE), it’s already converted to lowp float.
You divide this small float by 255.0 (definitely not a lowp float), then multiply by nDotVP, which, even if its value is probably small enough, you defined as float, not lowp float.
I think if you drop the division and make sure nDotVP is a lowp float, it will work.
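Assuming the assignment was something like `colorVarying = color / 255.0 * nDotVP;`, the suggested fix would look like this sketch (names other than `color`, `colorVarying`, and `nDotVP` are placeholders):

```glsl
lowp float nDotVP = max(0.0, dot(normal, normalize(lightDirection)));

// no /255.0 -- the attribute data already arrives as 0.0..1.0
// when the driver normalizes it, and every operand stays lowp
colorVarying = color * nDotVP;
```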
It works, except that it gives me that warning. Yes I am passing unsigned chars via the attrib pointer call, but how do I specify that in the shader? I was thinking that the ‘lowp’ would indicate bytes instead of floats, but maybe that is wrong.
Because the RGB values range from 0 to 255, I’ve got to divide by 255 to get them into the 0.0 - 1.0 range that the frag shader wants. Without the divide, I end up with color values in the range 0.0 to 255.0, which comes out as solid white because anything over 1.0 is clamped to 1.0.
Is that theoretical, or have you actually tested it? In my experience, OpenGL converts it before it ever gets to the shader; that’s why you have to specify GL_UNSIGNED_BYTE for glVertexAttribPointer, otherwise it will convert incorrectly. In your case, it’s converting from normalized unsigned byte/char (8 bits, 0x00 to 0xFF) to lowp vec4/float (8 bits, 0.0f to 1.0f).
You say that if you don’t divide, you’ll have values up to 255.0, but if your GLSL compiler is taking the hint (which it should if you’re using OpenGL ES 2.0), you can only fit values from -2.0f to 2.0f anyway.
EDIT: I just remembered something that sounds like it would cause your problem. Are you calling glVertexAttribPointer with the normalized parameter set to GL_FALSE? If so, the driver passes the raw byte values straight through; set it to GL_TRUE so they arrive in the shader already scaled to 0.0 - 1.0.