Procedural shading and drawing the Pixar ball

I followed the Red Book 9th edition code, which has the complete code for the vertex shader and the fragment shader, so I typed them in and they seem to be OK, except for a uniform mat3 NormalMatrix that is not used by either shader. It is not needed anyway, since both the MV and MVP matrices are passed and MV is used to compute the normals of the sphere.

I wrote the main app with the same initial values supplied by the Red Book, but didn’t get the correct result: I got a ball with lighting, but the star, stripe, and base colors are messed up. So I looked closer at the initial values and suspected that something is wrong with the plane equations defining the star. The planes defining the star were written simply as the x and y coordinates of the 5 vertices of a regular pentagon inscribed in a unit circle in the x-y plane, with z = 0 and the same constant w = 0.2 for all planes. So I changed the initial values of the planes: assuming all planes are parallel to the z-axis, I took the cross product of the z-axis with the vector between two non-consecutive pentagon vertices to get each plane’s normal, and then wrote the a, b, c, d of the plane equation from that normal and one of the two points. Well, the image changed but is still messed up. I cannot think of anything else, and would appreciate help from anyone who got it working…
Here is the code used to compute the new initial values of HalfSpace:

vec3 v[5] = {{1.0, 0.0, 0.0}, {0.309016994, 0.951056516, 0.0},
	{-0.809016994, 0.587785252, 0.0} , {-0.809016994, -0.587785252, 0.0},
	{0.309016994, -0.951056516, 0.0}};
vec4 HalfSpace[5];
int pj2;
for (int i=0; i<5; i++) {
	int j1 = (i%2==0)?i/2:(i-1)/2;
	int j2 = (i%2==0)?(j1+2):(pj2+1);
	pj2 = j2;
	HalfSpace[i] = vec4(v[j1][1]-v[j2][1], v[j1][0]-v[j2][0], 0.0, (v[j1][1]-v[j2][1])*v[j1][0] + (v[j1][0]-v[j2][0])*v[j1][1]);
}

I’m not sure if I get you right… If you have a sphere (a 3D entity), you’ve already got the normal direction as the vector from the center to the point; just normalize it. As for an xy-plane (all z = 0), the normal vector is (0,0,0)->(0,0,1). You can use the inverse-transpose of the MV matrix to transform this normal (if memory serves me right).

edit: Remember that a coordinate and a vector usually differ. A vector is a DIRECTION and a LENGTH. You express the direction as a coordinate, WITH RESPECT TO (0,0,0). Matrices work on vectors: if you put a coordinate in, the matrix will treat it as a vector from (0,0,0). So for the normal of a sphere vertex you have to subtract the center from the vertex to get a valid expression for the direction. That’s a straightforward operation, though.
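In GLM terms it could look something like this (just a sketch, assuming the app builds its matrices with GLM; the function name is mine, not from the Red Book):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_inverse.hpp>   // glm::inverseTranspose

// Normal matrix = inverse-transpose of the upper-left 3x3 of the model-view
// matrix; use it to take an object-space normal (vertex minus sphere center)
// into eye space.
glm::vec3 eyeSpaceNormal(const glm::mat4& MV,
                         const glm::vec3& vertex,
                         const glm::vec3& center)
{
	glm::mat3 normalMatrix = glm::inverseTranspose(glm::mat3(MV));
	return glm::normalize(normalMatrix * (vertex - center));
}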

I know that’s standard graphics-textbook stuff, but for a sphere centered at the origin that just rotates, MV is fine for computing normals since the model transform is basically just a rotation, and that’s what was done in the Red Book code. Besides, the normals affect lighting, and I don’t have an issue with the lighting. It is the star color that does not compute correctly: the “in or out” test that should color the star red does not seem to work properly, and I am not sure of the math.

Well, I figured it out. It is much simpler than I thought. In case anyone else is interested: you simply take the slopes of the lines connecting non-consecutive vertices and get the slopes of the lines perpendicular to them (the product of the slopes of two perpendicular lines is -1). That gives the a, b, c (the normal) of each plane equation; then you set d (the constant term) to how far from the origin (i.e. how big) the star is. Here is the code:

vec3 v[5] = {{(float)(1.0), (float)(0.0), 0.0f}, {(float)(0.309016994), (float)(0.951056516), 0.0f},
	{(float)(-0.809016994), (float)(0.587785252), 0.0f} , {(float)(-0.809016994), (float)(-0.587785252), 0.0f},
	{(float)(0.309016994), (float)(-0.951056516), 0.0f}};
vector<vec4> HalfSpace(5);
int pj1;
for (int i=0; i<5; i++) {
	int j2 = (i%2==0)?i/2:(i-1)/2;
	int j1 = (i%2==0)?(j2+2):(pj1+1);
	pj1 = j1;
	//HalfSpace[i] = vec4(v[j2][1]*v[j1][2]-v[j1][1]*v[j2][2], v[j1][0]*v[j2][2]-v[j2][0]*v[j1][2], v[j2][0]*v[j1][1]-v[j1][0]*v[j2][1], 0.0);
	HalfSpace[i] = vec4(v[j2][1]-v[j1][1], v[j1][0]-v[j2][0], 0, 0.05);
	//HalfSpace[i] = normalize(HalfSpace[i]);
}

Well, I finally got the program to work. The tricky part is to keep the origin on the positive side of all the planes. Here is the code for computing them:

vec3 v[5] = {{(float)(1.0), 0.0f, 0.0f}, {(float)(0.309016994), (float)(0.951056516), 0.0f},
	{(float)(-0.809016994), (float)(0.587785252), 0.0f}, {(float)(-0.809016994), (float)(-0.587785252), 0.0f},
	{(float)(0.309016994), (float)(-0.951056516), 0.0f}};
vec4 HalfSpace[5];
float scale = 0.5;   // how far from the origin (how big) the star is
int pj1;
int ks = 1;
for (int i = 0; i < 5; i++) {
	// pick the i-th pair of non-consecutive pentagon vertices:
	// (0,2), (0,3), (1,3), (1,4), (2,4)
	int j2 = i >> 1;
	int j1 = ((i & 1) == 0) ? (j2 + 2) : (pj1 + 1);
	pj1 = j1;
	ks = (~ks) + 1;   // two's-complement negation: flip the sign each iteration
	// (a, b) is perpendicular to the edge v[j1]-v[j2]; the alternating sign keeps
	// the origin on the positive side, and d makes the plane pass through the
	// vertices scaled by 'scale'.
	HalfSpace[i] = (ks > 0) ? vec4((v[j1][1]-v[j2][1]), (v[j2][0]-v[j1][0]), 0.0, (v[j1][0]*scale*(v[j2][1]-v[j1][1]) + v[j1][1]*scale*(v[j1][0]-v[j2][0])))
	                        : vec4((v[j2][1]-v[j1][1]), (v[j1][0]-v[j2][0]), 0.0, (v[j1][0]*scale*(v[j1][1]-v[j2][1]) + v[j1][1]*scale*(v[j2][0]-v[j1][0])));
}
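To sanity-check the planes on the CPU, I test a point the way I understand the shader’s in-or-out logic: a point belongs to the star when it lies on the positive side of at least four of the five half-spaces (the pentagon core is inside all five, the star tips inside exactly four). This is just my own check with GLM, not code from the Red Book:

#include <glm/glm.hpp>

// True when p (a point in the plane of the star) is part of the star,
// i.e. on the positive side of at least 4 of the 5 half-space planes.
bool insideStar(const glm::vec3& p, const glm::vec4 HalfSpace[5])
{
	int positive = 0;
	for (int i = 0; i < 5; ++i)
		if (glm::dot(glm::vec4(p, 1.0f), HalfSpace[i]) > 0.0f)
			++positive;
	return positive >= 4;
}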

Also, another bug in the code was in computing the normal: it should be done by passing the transpose of the inverse of the model-view matrix and multiplying the sphere coordinates minus the center by it to get the transformed normal. Other than that, the fragment shader code in the Red Book is fine…
Well, I thought I’d share my experience in case another novice needs it…

Well, I still have a question for the gurus. I had to generate the triangle strips that make up the sphere with an angle of 0.2 degrees between latitudes and longitudes to get a satisfactory-looking star without jaggies or other artifacts. This generates a very large number of vertices, which consumes memory and causes a noticeable delay at startup.
My question is: can I get a satisfactory-looking star while generating a coarser sphere with 20 degrees between latitudes and longitudes, by handling things per fragment in the pixel shader? I know there is gl_FragCoord, which consists of (x, y, z, 1/w), but I can’t quite work out how it relates to the model coordinates. That is, given gl_FragCoord, how can I deduce the corresponding point on the sphere? Multiply by the inverse of MVP?

A fragment shader cannot change the position of a fragment. It’s possible to use techniques such as parallax mapping to make geometry appear more complex than it really is, but you can’t change the geometry’s silhouette.

gl_FragCoord is the fragment’s position in window coordinates. You can convert window coordinates back to NDC by applying the inverse of the viewport and depth range transformations. You can convert NDC to clip space by dividing by 1/w (i.e. by gl_FragCoord.w, which holds 1/w). You can convert clip space back to any other space by applying the inverse of whatever matrix you used to transform that space to clip space. But if you actually want clip-space coordinates in the fragment shader, you’re better off just storing the values in a user-defined output variable in the vertex shader (as well as in gl_Position) and reading the corresponding input variable in the fragment shader.

Thanks, I found just the right code on Khronos.org. It does what you described. I am including it here for reference for any other novice:

vec4 viewport = vec4(0.0, 0.0, 500.0, 500.0);   // x, y, width, height
vec4 ndcPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
	(gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
vec4 clipPos = ndcPos / gl_FragCoord.w;         // gl_FragCoord.w holds 1/w_clip
vec4 pShade = inverse(MVPMatrix) * clipPos;     // back to object coordinates

My viewport, which I didn’t pass as a uniform since I don’t handle resizing and this is just a test program, is the entire window with width = height = 500. Also, as stated on Khronos, it is a bad idea to compute the inverse of the projection matrix in the fragment shader, but again, this is just toy test code.
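The usual fix, once it is more than a toy, is to invert the matrix once per frame on the CPU and upload the result as a uniform. A rough sketch with GLM (invMVPMatrix is a name I made up here, not something from the Red Book shaders):

#include <GL/glew.h>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Compute inverse(MVP) on the CPU and hand it to the shader, instead of
// calling inverse() per fragment. Assumes an already-linked program that
// declares "uniform mat4 invMVPMatrix;".
void uploadInverseMVP(GLuint program, const glm::mat4& MVP)
{
	glm::mat4 invMVP = glm::inverse(MVP);
	glUseProgram(program);
	GLint loc = glGetUniformLocation(program, "invMVPMatrix");
	glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(invMVP));
}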

As for the first part of your answer: what I meant is that as the latitude/longitude increments get larger, the vertices of the triangle strips (and of the fans at the poles) end up fairly far apart, and I am not sure how the rasterizer computes the colors in between. If I set the vertices of some strip to the base color based on gl_Position, then any part of the star falling inside that strip will not be colored red. Worse yet, if some vertices of the strip get the base color and the remaining one or two get the star color (red), then I presume the default is interpolation and I will get blurring… My idea is that by working from fragment coordinates one can avoid such artifacts.

In the last part of my answer I was referring to the fixed-function pipeline. With programmable shaders I set the color based on the fragment coordinate rather than the vertex coordinate.

The usual way to do that is to assign texture coordinates to vertices then perform a texture lookup based upon those.
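For a lat/long sphere, generating those coordinates is straightforward. A sketch of the vertex generation (my own helper, not Red Book code): each vertex gets (s, t) from its longitude/latitude, so the star and stripe pattern can come from a texture lookup (or from math on (s, t) in the fragment shader) instead of from a very fine tessellation.

#include <cmath>
#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 position; glm::vec2 texCoord; };

// Build the sphere vertex grid; index it into triangle strips as usual.
std::vector<Vertex> buildSphereVertices(float radius, int stacks, int slices)
{
	std::vector<Vertex> verts;
	const float PI = 3.14159265358979f;
	for (int i = 0; i <= stacks; ++i) {                 // latitude: 0..pi
		float t   = float(i) / float(stacks);
		float phi = t * PI;
		for (int j = 0; j <= slices; ++j) {             // longitude: 0..2*pi
			float s     = float(j) / float(slices);
			float theta = s * 2.0f * PI;
			glm::vec3 p(radius * std::sin(phi) * std::cos(theta),
			            radius * std::sin(phi) * std::sin(theta),
			            radius * std::cos(phi));
			verts.push_back({p, glm::vec2(s, t)});
		}
	}
	return verts;
}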