# Procedural shading and drawing the Pixar ball

I followed the Red Book 9th edition code, which gives the complete vertex and fragment shaders, so I typed them in and they seem to be OK, except for a uniform mat3 holding the NormalMatrix that is not used by either shader. It is not needed anyway, since both the MV and the MVP matrices are passed and the MV is used to compute the normals of the sphere.

I then wrote the main app with the same initial values supplied by the Red Book, but didn’t get the correct result: I got a ball with lighting, but the star, stripe, and base colors are messed up. So I looked closer at the initial values and suspected that something is wrong with the plane equations defining the star. The planes were written simply as the x and y coordinates of the 5 points of a regular pentagon inscribed in a unit circle in the x-y plane, with z = 0 and the same w (constant) of 0.2 for all planes. So I changed the initial values of the planes: assuming all planes are parallel to the z-axis, I computed the cross product of the z-axis and the edge through 2 non-consecutive pentagon points to get the normal of the plane, and then derived the a, b, c, d of the plane equation from that normal and one of the 2 points. The image changed but is still messed up. I cannot think of anything else; I would appreciate help from anyone who got it working…
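For reference, that plane construction can be sketched in plain C++ with float arrays instead of GLM; `planeThrough` is my own helper name, not anything from the Red Book. The cross product of the z-axis (0,0,1) with the in-plane edge (ex,ey,0) gives the normal (-ey, ex, 0), and d is then chosen so the plane passes through one of the two points:

```cpp
#include <cassert>
#include <cmath>

// Plane a*x + b*y + c*z + d = 0 through points p and q, parallel to the z-axis.
struct Plane { float a, b, c, d; };

Plane planeThrough(const float p[3], const float q[3]) {
    // edge direction in the x-y plane
    float ex = q[0] - p[0], ey = q[1] - p[1];
    // cross((0,0,1), (ex,ey,0)) = (-ey, ex, 0)
    Plane pl;
    pl.a = -ey;
    pl.b =  ex;
    pl.c = 0.0f;
    pl.d = -(pl.a * p[0] + pl.b * p[1]);  // force the plane through p
    return pl;
}
```

Both p and q then satisfy a*x + b*y + d = 0, which is easy to verify with two of the pentagon vertices.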
Here is the code used to compute the new initial values of HalfSpace:

``````
vec3 v[5] = {{1.0, 0.0, 0.0}, {0.309016994, 0.951056516, 0.0},
    {-0.809016994, 0.587785252, 0.0}, {-0.809016994, -0.587785252, 0.0},
    {0.309016994, -0.951056516, 0.0}};
vec4 HalfSpace[5];
int pj2;
for (int i = 0; i < 5; i++) {
    int j1 = (i % 2 == 0) ? i / 2 : (i - 1) / 2;
    int j2 = (i % 2 == 0) ? (j1 + 2) : (pj2 + 1);
    pj2 = j2;
    // normal (a, b) is perpendicular to the edge from v[j1] to v[j2];
    // d places the plane through v[j1]
    HalfSpace[i] = vec4(v[j1].y - v[j2].y, v[j2].x - v[j1].x, 0.0,
        (v[j2].y - v[j1].y) * v[j1].x + (v[j1].x - v[j2].x) * v[j1].y);
}
``````

I’m not sure if I get you right… If you have a sphere (a 3D entity) you’ve already got the normal direction as the vector from the center to the point; just normalize it. As for an xy-plane (all z = 0), the normal vector is (0,0,0)->(0,0,1). You can use the inverse-transpose of the MV matrix to transform this normal (if memory serves me right).

edit: Remember that a coordinate and a vector usually differ. A vector is a DIRECTION and a LENGTH. You express the direction as a coordinate WITH RESPECT TO (0,0,0). Matrices work on vectors: if you put a coordinate in, the matrix will treat it as a vector from (0,0,0). As for the normal of a sphere vertex, you have to subtract the center from the vertex to get a valid expression for the direction. That’s a straightforward operation, though.
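That point/vector distinction shows up directly in homogeneous coordinates: a point transforms with w = 1 (so translation applies), a direction with w = 0 (translation drops out). A tiny illustration with a translation-only 4×4 matrix, in plain C++ (the helper is mine):

```cpp
#include <cassert>

// Multiply a column vector (x, y, z, w) by a row-major 4x4 matrix
// (translation sits in the fourth column).
void mul4(const float m[16], const float v[4], float out[4]) {
    for (int r = 0; r < 4; r++) {
        out[r] = 0.0f;
        for (int c = 0; c < 4; c++) out[r] += m[r * 4 + c] * v[c];
    }
}
```

With a translate-by-(5,0,0) matrix, the point (1,0,0,1) lands at (6,0,0,1) while the direction (1,0,0,0) is unchanged.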

I know that’s standard graphics textbook stuff, but for a sphere centered at the origin that rotates, MV is fine for computing normals (it is basically the identity matrix), and that’s what was done in the Red Book code. Besides, the normals affect lighting, and I don’t have an issue with the lighting. It is just the star color that does not compute correctly: the “in or out” red coloring of the star does not seem to work properly, and I am not sure of the math.
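For context on that “in or out” test: the half-space idea is to evaluate each plane equation at the fragment’s model-space position and count how many planes the point lies on the positive side of; the star color then keys off that count. Here is a rough plain-C++ mock of the counting step (the real test lives in the fragment shader, and the plane set in the usage below is an arbitrary example of mine, not the pentagon planes):

```cpp
#include <cassert>

// Count how many of the five half-space planes (a, b, c, d) the point
// (x, y, z) lies on the positive side of: a*x + b*y + c*z + d >= 0.
int insideCount(const float planes[5][4], float x, float y, float z) {
    int count = 0;
    for (int i = 0; i < 5; i++) {
        float dist = planes[i][0] * x + planes[i][1] * y
                   + planes[i][2] * z + planes[i][3];
        if (dist >= 0.0f) count++;
    }
    return count;
}
```

A point inside all five half-spaces (like the origin, once the signs are set up as discussed later in this thread) gets the full count; stepping outside one plane drops the count by one.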

Well, I figured it out. It is much simpler than I thought. In case anyone else is interested: you simply get the slopes of the lines connecting non-consecutive vertices, then the slopes of the lines perpendicular to them (the product of the slopes of two perpendicular lines is -1). That gives a, b, c (the normal) of each plane equation; then set d (the constant) to how far from the origin (i.e. how big) the star is. Here is the code:

``````
vec3 v[5] = {{1.0f, 0.0f, 0.0f}, {0.309016994f, 0.951056516f, 0.0f},
    {-0.809016994f, 0.587785252f, 0.0f}, {-0.809016994f, -0.587785252f, 0.0f},
    {0.309016994f, -0.951056516f, 0.0f}};
vector<vec4> HalfSpace(5);
int pj1;
for (int i = 0; i < 5; i++) {
    int j2 = (i % 2 == 0) ? i / 2 : (i - 1) / 2;
    int j1 = (i % 2 == 0) ? (j2 + 2) : (pj1 + 1);
    pj1 = j1;
    // (a, b) is perpendicular to the edge from v[j1] to v[j2]; d sets the star size
    HalfSpace[i] = vec4(v[j2].y - v[j1].y, v[j1].x - v[j2].x, 0.0, 0.05);
}
``````

Well, I finally got the program to work. The tricky part is to keep the origin on the positive side of all planes. Here is the code for computing the planes:

``````
vec3 v[5] = {{1.0f, 0.0f, 0.0f}, {0.309016994f, 0.951056516f, 0.0f},
    {-0.809016994f, 0.587785252f, 0.0f}, {-0.809016994f, -0.587785252f, 0.0f},
    {0.309016994f, -0.951056516f, 0.0f}};
vec4 HalfSpace[5];
float scale = 0.5;
int pj1;
int ks = 1;
for (int i = 0; i < 5; i++) {
    int j2 = i >> 1;
    int j1 = ((i & 1) == 0) ? (j2 + 2) : (pj1 + 1);
    pj1 = j1;
    ks = -ks;   // flip the normal's sign so the origin stays on the positive side
    HalfSpace[i] = (ks > 0)
        ? vec4(v[j1].y - v[j2].y, v[j2].x - v[j1].x, 0.0,
               v[j1].x * scale * (v[j2].y - v[j1].y) + v[j1].y * scale * (v[j1].x - v[j2].x))
        : vec4(v[j2].y - v[j1].y, v[j1].x - v[j2].x, 0.0,
               v[j1].x * scale * (v[j1].y - v[j2].y) + v[j1].y * scale * (v[j2].x - v[j1].x));
}
``````

Another bug in my code was the normal computation: you need to pass the transpose of the inverse of the model-view matrix, and multiply (sphere coordinates − origin) by it to get the transformed normal. Other than that, the fragment shader code in the Red Book is fine…
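That normal-matrix fix in formula form is normalMatrix = transpose(inverse(MV)), restricted to the upper 3×3 (GLM provides this as `inverseTranspose`). A plain-C++ sketch of computing it directly for a 3×3 matrix, with my own helper name, using the fact that inverse = transpose(cofactor)/det, so transpose(inverse) = cofactor/det:

```cpp
#include <cassert>
#include <cmath>

// Transpose of the inverse of a row-major 3x3 matrix, i.e. the normal matrix.
// Returns false if the matrix is singular.
bool normalMatrix(const float m[9], float out[9]) {
    // cofactor matrix
    float c[9] = {
        m[4]*m[8] - m[5]*m[7], m[5]*m[6] - m[3]*m[8], m[3]*m[7] - m[4]*m[6],
        m[2]*m[7] - m[1]*m[8], m[0]*m[8] - m[2]*m[6], m[1]*m[6] - m[0]*m[7],
        m[1]*m[5] - m[2]*m[4], m[2]*m[3] - m[0]*m[5], m[0]*m[4] - m[1]*m[3]
    };
    float det = m[0]*c[0] + m[1]*c[1] + m[2]*c[2];
    if (std::fabs(det) < 1e-12f) return false;
    // transpose(inverse) = transpose(transpose(cofactor)/det) = cofactor/det
    for (int i = 0; i < 9; i++) out[i] = c[i] / det;
    return true;
}
```

For a pure rotation this returns the rotation itself, which is why the origin-centered rotating sphere got away with plain MV; a non-uniform scale like diag(1, 2, 4) is where the two differ.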
Well, I thought I’d share my experience just in case another novice needs it…

Well, I still have a question for the gurus. I had to generate the triangle strips that constitute the sphere with an angle of 0.2 degrees between latitudes and longitudes to get a satisfactory-looking star without jaggies or other artifacts. This generates a huge number of vertices, which consumes memory and causes a noticeable delay at startup.
My question is: can I generate a satisfactory-looking star from a decent sphere with 20 degrees between latitudes and longitudes by handling the fragment coordinates in the pixel shader? I know there is gl_FragCoord, which consists of (x, y, z, 1/w), but I can’t quite get straight how it is related to the model coordinates. That is, given gl_FragCoord, how can I deduce the corresponding sphere point? Multiply by the inverse of MVP?

A fragment shader cannot change the position of a fragment. It’s possible to use techniques such as parallax mapping to make geometry appear more complex than it really is, but you can’t change the geometry’s silhouette.

gl_FragCoord is the fragment’s position in window coordinates. You can convert window coordinates back to NDC by applying the inverse of the viewport and depth range transformations. You can convert NDC to clip space by dividing by 1/w (i.e. by `gl_FragCoord.w`). You can convert clip space back to any other space by applying the inverse of whatever matrix you used to transform that space to clip space. But if you actually want clip-space coordinates in the fragment shader, you’re better off just storing the values in a user-defined output variable (as well as in `gl_Position`) and reading the corresponding input variable in the fragment shader.

Thanks, I found just the right code on Khronos.org. It does what you described. I am including it here for reference for any other novice:

``````
vec4 viewport = vec4(0.0, 0.0, 500.0, 500.0);   // better passed in as a uniform
vec4 ndcPos, clipPos;
ndcPos.xy = ((2.0 * gl_FragCoord.xy) - (2.0 * viewport.xy)) / (viewport.zw) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) /
           (gl_DepthRange.far - gl_DepthRange.near);
ndcPos.w = 1.0;
clipPos = ndcPos / gl_FragCoord.w;
``````
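The remaining step for the original question is then modelPos = inverse(MVP) * clipPos. The window → NDC → clip part of that math can be sanity-checked on the CPU; here is a plain-C++ version of it (my own helper name, and note that dividing by gl_FragCoord.w = 1/w_clip is what multiplies the NDC values back up by w_clip):

```cpp
#include <cassert>
#include <cmath>

// f: gl_FragCoord-style values (window x, y, z and w = 1/w_clip);
// vp: viewport (x, y, width, height); depthNear/depthFar: glDepthRange values.
// Writes the reconstructed clip-space (x, y, z) into out[0..2].
void windowToClip(const float f[4], const float vp[4],
                  float depthNear, float depthFar, float out[3]) {
    float ndcX = (2.0f * (f[0] - vp[0])) / vp[2] - 1.0f;
    float ndcY = (2.0f * (f[1] - vp[1])) / vp[3] - 1.0f;
    float ndcZ = (2.0f * f[2] - depthNear - depthFar) / (depthFar - depthNear);
    out[0] = ndcX / f[3];   // dividing by 1/w_clip multiplies by w_clip
    out[1] = ndcY / f[3];
    out[2] = ndcZ / f[3];
}
```

For example, with a 500×500 viewport, the default depth range [0, 1], and an identity MVP, the clip/model point (0.25, -0.5, 0.1, 1) rasterizes to window (312.5, 125, 0.55) with w = 1, and the function recovers (0.25, -0.5, 0.1).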