Shading Edge Anomaly

Posted below are my two shaders, which handle all of the coloring. The UV texture coordinates contain a color value that I use to index into a 1D texture. If anyone can help me understand why I am getting rainbow edging around my drawing, it would be greatly appreciated.

My radius of involvement is 10.0. The “rainbow edging” means that at the edge of these vertices there is a stripe of rainbow coloring that I cannot seem to get rid of.

Thanks to anyone and everyone who lends a hand to help out on these forums.

http://s592.photobucket.com/albums/tt8/lacabos/?action=view&current=phantomline.png

Vertex Shader//
/////////////
varying vec3 normal;
varying vec4 pos;

uniform vec3 MyPoint[5]; // ** CHANGE ARRAY SIZE TO ELECTRODES ** //

uniform float activationSize; // Activation Modulation from 0.0 being no Activity 1.0 being full radial distance

varying float dist; // Compute Distance from Electrode point and Pass Value to Fragment Shader

varying float radialdistance;

attribute short nodelocations[5];

float distancefinder(vec3 Point[5]) // ** CHANGE ARRAY SIZE TO ELECTRODES ** // change to float return type
{
int i;
float tempDist;

for (i=0;i<5;i++) // Change < to the Size of Electrodes //
{
tempDist = (distance(Point[i],gl_Vertex));
tempDist /= radialdistance;
if(tempDist <= 1.0)
{
return(tempDist);} // works

}
return (radialdistance);
}

void main()
{

radialdistance = 45.5; // Change to Determine Removal of Small Spotting keep consitent with Model.cpp contribution Var

normal = gl_NormalMatrix * gl_Normal;
gl_Position = ftransform();
pos = gl_ModelViewMatrix * gl_Vertex;

gl_TexCoord[0].x = gl_MultiTexCoord0;
if (gl_TexCoord[0].x <= 1.0)
{
if (distancefinder(MyPoint) >= 1.0)
{
gl_TexCoord[0].x = radialdistance;
} // Works
}

}

Fragment Shader//
////

varying vec3 normal;
varying vec4 pos;

varying float dist;

uniform float activation; // color range between 0 - 1

uniform sampler1D texturecolors;

varying float radialdistance;

void main() {

vec4 color;

if (gl_TexCoord[0].x > 1.0 && gl_TexCoord[0].x <= 5) // To Fade to White
{
color[0] = 1.0;
color[1] = 1.0;
color[2] = 1.0;
color[3] = 0.0;
}

else if (gl_TexCoord[0].x <= 1.0) // gl_TexCoord[0].x @ 1.0 is 100%,
{
vec4 colormap = tex1D(texturecolors, ((gl_TexCoord[0].x)*activation));
color[0] = colormap[0];
color[1] = colormap[1];
color[2] = colormap[2];
color[3] = 0.0;
}
// ABOVE IF STATE gl_TexCoord[0].x >= 0.280 && gl_TexCoord[0].x <= 1.0
else
{ color = gl_FrontMaterial.diffuse; } // Comment to Make Brain Black

vec4 matspec = gl_FrontMaterial.specular;
float shininess = gl_FrontMaterial.shininess;
vec4 lightspec = gl_LightSource[0].specular;
vec4 lpos = gl_LightSource[0].position;
vec4 s = -normalize(pos-lpos);

vec3 light = s.xyz;
vec3 n = normalize(normal);
vec3 r = -reflect(light, n);
r = normalize(r);
vec3 v = -pos.xyz;
v = normalize(v);

vec4 diffuse = color * max(0.0, dot(n, s.xyz)) * gl_LightSource[0].diffuse;
vec4 specular;
if (shininess != 0.0) {
specular = lightspec * matspec * pow(max(0.0, dot(r, v)), shininess);
} else {
specular = vec4(0.250, 0.250, 0.250, 0.0);
}

gl_FragColor = diffuse + specular;
}

I’d debug the pixels: draw into 4 RGBA32f render-targets at once, use glReadPixels() to fetch.

gl_FragData[0] = diffuse+specular;
gl_FragData[1] = gl_TexCoord[0];
gl_FragData[2] = color;
gl_FragData[3] = vec4(dist,radialdistance,0,0);

Can’t get to your image right now (firewall), but your description makes me think of this:

Might want to make sure that’s not it.

Thanks for the replies.

Ilian Dinev - By using glReadPixels(), will I be able to determine which variable is coloring incorrectly? I haven’t debugged pixels before, so I apologize if this is a simple question.

Dark Photon - I’m pretty sure this could be the error. The picture appears similar to mine, and it would make sense that extrapolation would produce small banding around the coloring. I’m not sure how to fix it, though, since my shader does not work when I add #version 120. Do you know how I can enable version 1.20 so I can try the centroid approach?

Thank you both; your answers have already helped a tremendous amount.

The idea is to identify which part of the pixel shader or which of its inputs is wrong. When you look at the result of the ReadPixels image, you will be able to see which intermediary values are wrong and are the cause of the final (wrong) result.

If you don’t have the necessary framework in your app for multiple fragdata outputs, you can also simply draw one at a time in the regular framebuffer and watch the result in your rendering window. Just output some intermediary shading results in gl_FragColor and see where they do not match what you expect.
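For example, swapping in one of these lines (a sketch using the varyings from the shaders posted above) turns the window into a grayscale plot of that one value:

```glsl
// Replace the final gl_FragColor line of the fragment shader with one
// of these to visualize a single intermediate value as grayscale:
gl_FragColor = vec4(gl_TexCoord[0].x);    // the interpolated texcoord
// gl_FragColor = vec4(dist);             // the varying distance
// gl_FragColor = color;                  // the pre-lighting color
```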

Just did some debugging with the pixel shader. Thanks for the previous tip on what ReadPixels() was intended to do. The error looks very similar to the centroid link provided previously. The data being read from my variables is correct, but no matter what color I apply, the outside edge of all the pixels ends up in a complete spectrum of my 1D texture lookup table.

Could this be the error? I added an if statement to check the value of the data, thinking the high values at the edge could be changed to a different color to isolate the error. To my surprise, the outer edge of the tiny band that is wrong was changed, but all the colors that are drawn still appear in the rainbow band.

Any ideas? Thanks for the information thus far it has been very helpful.

I moved the color lookup from the texture table out of the fragment shader and into the vertex shader.
Now it is no longer a rainbow stripe, but a faint gray/white one. I attempted to change the color table so the highest values were all white; this made the line slightly fainter, or produced no change at all.

Any ideas or suggestions will be very appreciated.

Picture of Rendering - http://s592.photobucket.com/albums/tt8/lacabos/?action=view&current=faintline.jpg

Try using tex1DLOD(,0); Derivatives might be causing problems.

Ilian,

Attempted to use tex1DLOD(,0), but to no avail; the shader will not compile. Using the cgc compiler in cmd, the error is that the identifier “tex1DLOD” is undefined.

Any further ideas? I must be making a small error using the texture, possibly when I am initializing my 1D texture array, but I do not know where or what.

Thanks for the posts everyone.

If you did that literally, be aware the function name is texture1DLod( sampler, coord, lod ). See the specs here.

Photon,

I tried the correct spec; I wasn’t aware it was texture1DLod. It still did not compile when I added the correct call.

BUT, I did resolve the issue on my own. When the coord I passed in was used in the tex1D function to reference the texture coordinate, undefined results apparently occur if the coord argument is 0. I had no idea 0 was not “mapped”, but I added:

if (dist == 0.0)
{ dist = 0.002; }

Works perfect.

Additionally, I tried compiling this GLSL shader/OpenGL program on an ATI Radeon HD 3400 in a Lenovo T400 and it would not work. The GLSL shader breaks down when I use:

dist = gl_MultiTexCoord0;

and when I use

tex1D(texture,coord);

Any clues?

This sounds like maybe pulling in border pixel values. Try:

glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

and pull out that dist fudge.

Recall that texcoords actually run to texel edges, while data is conceptually stored at texel centers. That is, your data doesn’t run from texcoord 0…1; it runs from 0.5/N…(N-0.5)/N (or, if you’re only using M cells out of a texture of size N, from 0.5/N…(M-0.5)/N). IIRC, anything outside this range pulls in the border for GL_CLAMP, unless you set GL_CLAMP_TO_EDGE.

Photon,

Unbelievable! Works 100% with your suggestion in changing the Parameter.

The dist fudge was pretty hacky, but it felt good just to see it the way it SHOULD look.

Thank you again for helping me out; it makes learning a new language much easier and less frustrating to get a useful, intelligent answer from an experienced developer.

Do you have any ideas about why this shader won’t compile on a T400 or T500, or any pointers on vertex attributes? I’m trying to map a set of points (1, 2, 3, 4) to each vertex so that I can pass a uniform array containing all these points’ “activations” and have the vertex “know” which ones are used.

I think your shader won’t compile on ATi because of the uniform arrays:


for(i=0;i<5;i++){ // a "for" loop often crashes on ati
   tempDist = (distance(Point[i],gl_Vertex)); // a Point[i] is problematic

Ilian,

I checked which lines were the ones that wouldn’t compile on the ATi cards, and it ended up being these two:

gl_TexCoord[0].x = gl_MultiTexCoord0;

and

colors = tex1D(texturecolors, (dist*activationSize));

Any clues?
I didn’t use the function with the uniform array, but you’re saying that will give compilation errors too?

Arrays in general are flaky on ATi drivers, so gl_TexCoord[0] might be problematic even though it’s a standard thing.

tex1D is a Cg keyword that just happens to be supported in nVidia’s GLSL compiler (which is also a Cg compiler). The correct function is “texture1D”.

When you get shaders which don’t compile, try to look at the output of the shader info log. It often contains information indicating the errors in the shader (like a C compiler).

For this particular line,
gl_TexCoord[0].x is a float (single element of a vec4)
gl_MultiTexCoord0 is a vec4
You can’t directly assign a vec4 to a float. Try “gl_MultiTexCoord0.x” instead.
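Putting the two fixes from this thread together, the offending lines would become (a sketch against the shaders posted above):

```glsl
// Vertex shader: take the x component explicitly (vec4 -> float)
gl_TexCoord[0].x = gl_MultiTexCoord0.x;

// Fragment shader: texture1D is the GLSL name; tex1D is Cg-only
vec4 colormap = texture1D(texturecolors, gl_TexCoord[0].x * activation);
```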

Ilian and bertgp,

Both suggestions are correct. When I changed both, the shader ALMOST compiles perfectly. The mistakes you pointed out were real, but the ATi compiler does not like the attribute float values[ ].

Error message in InfoLog is:

ERROR: 0:13: ‘attribute float’ : cannot declare arrays of this qualifier

Is using attribute still correct? The code compiles on my nVidia card fine.

Thanks again for the helpful suggestions. Once I fix this attribute error my program will be done! Except for getting it to work in Borland, but one step at a time.

From the GLSL 1.20 spec p.23:

Attribute variables cannot be declared as arrays or structures

There’s your problem.
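One common workaround, since vec4 attributes are legal where float arrays are not, is to pack up to four per-vertex values into a single vec4 attribute (a sketch; “nodeValues” is a made-up name):

```glsl
// Instead of "attribute float values[4];" (disallowed by the spec),
// pack the four per-vertex values into one vec4 attribute:
attribute vec4 nodeValues;

void main()
{
    // nodeValues.x .. nodeValues.w stand in for values[0] .. values[3]
    float first = nodeValues.x;
    gl_Position = ftransform();
}
```

On the C side this becomes a single glVertexAttribPointer call with size 4.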

void VertexAttribIPointerEXT(uint index, int size, enum type,
sizei stride, const void *pointer);

Is there no way to use this function to pass a pointer to a list of integers as a vertex attribute?