Specular per-pixel lighting problem

I am using Blinn’s formula to do specular lighting with ARB_VP and ARB_FP.
For every vertex (in ARB_VP):
1) I compute the light vector (L) and normalize it.
2) I compute the view vector (V) and normalize it.
3) I compute H = L + V, normalize H and store it in texcoord[2].
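
In ARB assembly, that vertex-program part looks roughly like this (a sketch; the object-space light and eye positions in program.env[0] and program.env[1], and the texcoord slot for the normal, are placeholders for however you pass them in):

!!ARBvp1.0
# Sketch of steps 1-3 above; env registers and texcoord slots are assumptions
PARAM mvp[4] = { state.matrix.mvp };
TEMP L, V, H, len;
# clip-space position
DP4 result.position.x, mvp[0], vertex.position;
DP4 result.position.y, mvp[1], vertex.position;
DP4 result.position.z, mvp[2], vertex.position;
DP4 result.position.w, mvp[3], vertex.position;
# 1) light vector, normalized
SUB L, program.env[0], vertex.position;
DP3 len.w, L, L;
RSQ len.w, len.w;
MUL L, L, len.w;
# 2) view vector, normalized
SUB V, program.env[1], vertex.position;
DP3 len.w, V, V;
RSQ len.w, len.w;
MUL V, V, len.w;
# 3) H = L + V, normalized and stored in texcoord[2]
ADD H, L, V;
DP3 len.w, H, H;
RSQ len.w, len.w;
MUL result.texcoord[2], H, len.w;
# pass the normal through for the fragment program (slot is an assumption)
MOV result.texcoord[1], vertex.normal;
END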

For every fragment (in ARB_FP):
1) I get the interpolated texcoord[2], normalize it and store it in H.
2) I get the interpolated normal (N) from another texcoord and proceed with the usual computations.

Now, for a reasonably large surface, the specular highlight is completely wrong. But if I don’t normalize the vectors in the vertex program and just keep the renormalization of H in the fragment program, the results look OK!

What do you think is happening?

If one interpolates a normalized vector, as opposed to a non-normalized vector, the two results may be very different (that is, they point in different directions).

So don’t normalize your half-vector in the VP! Only normalize it in the FP.
A normalized vector usually behaves very differently in most computations. Therefore one usually normalizes a vector just before the computation that needs a normalized vector. The interpolation of the half-vector does not NEED normalized input; it can interpolate anything. However, the results will be different (and incorrect with a normalized half-vector).

Hope that helps.
Jan.

Surprise, surprise!

As long as you keep the output of the vertex shader linear (with respect to eye space), you will not see any tessellation artifacts.

So move the normalization from the vertex shader into the pixel shader, if you can afford it.

Maybe I’m completely on the wrong track, but I think the opposite is true.
What you are basically saying is the following: if you interpolate between normalized vectors, you get wrong results, while interpolating between the non-normalized ones gives a correct result.

Take the following example. You have two vectors with directions perpendicular to each other. Now make one vector twice as long as the other. This results in a very different vector compared to making it four times as long, for example.
We all agree that there can be only one right solution, so you have to use the normalized ones for interpolation.

Now on to the original question: if you have two normals that face in almost opposite directions, linearly interpolating between them results in a very short vector, since the two normals’ components almost cancel each other out.
If this vector gets small enough, the numerical errors present in the interpolation will probably produce a zero component somewhere in your normal.
When renormalizing, this component just stays zero and you get a wrong normal.
I don’t know if it helps, but maybe you can scale all your normals in the vertex program (as long as you scale them all by the same factor, you’ll still get proper results).

Regards,

Jan

Calculate in Vertex shader:

L* = light position - vertex position, unnormalized
V* = eye position - vertex position, unnormalized
N* = vertex normal (carry through)

store these in texture coords
L* and V* are linear in eye space, so they will be free of artifacts when interpolated

Calculate in pixel shader:

L = normalized( L* )
V = normalized( V* )
N = normalized( N* )
H = normalized( L + V )

This way you get 100% tessellation-agnostic per-pixel lighting. If you compute the reflection vector for env-mapping the same way, per fragment from N and V, you get 100% tessellation-agnostic reflection too.

The trick is to do the non-linear math in the FP (expensive, though).
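
In ARB_fragment_program terms, the per-fragment part could look like this (just a sketch; the texcoord slots for L*, V*, N* and the specular exponent in program.env[0].w are assumptions):

!!ARBfp1.0
TEMP L, V, N, H, len, spec;
# L = normalized(L*)
DP3 len.w, fragment.texcoord[0], fragment.texcoord[0];
RSQ len.w, len.w;
MUL L, fragment.texcoord[0], len.w;
# V = normalized(V*)
DP3 len.w, fragment.texcoord[1], fragment.texcoord[1];
RSQ len.w, len.w;
MUL V, fragment.texcoord[1], len.w;
# N = normalized(N*)
DP3 len.w, fragment.texcoord[2], fragment.texcoord[2];
RSQ len.w, len.w;
MUL N, fragment.texcoord[2], len.w;
# H = normalized(L + V)
ADD H, L, V;
DP3 len.w, H, H;
RSQ len.w, len.w;
MUL H, H, len.w;
# specular term = saturate(N.H) ^ exponent
DP3_SAT spec.w, N, H;
POW spec.w, spec.w, program.env[0].w;
MOV result.color, spec.w;
END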

Originally posted by Jan2000:
[b]If one interpolates a normalized vector, as opposed to a non-normalized vector, the two results may be very different (that is, they point in different directions).

So don’t normalize your half-vector in the VP! Only normalize it in the FP.
A normalized vector usually behaves very differently in most computations. Therefore one usually normalizes a vector just before the computation that needs a normalized vector. The interpolation of the half-vector does not NEED normalized input; it can interpolate anything. However, the results will be different (and incorrect with a normalized half-vector).

Hope that helps.
Jan.[/b]
That’s absolutely right. If you normalize H in your vertex program, the specular highlight will depend on the geometry (triangle shape), so never, ever do this. I think even Carmack got this wrong at some point, as he stated in a .plan some time ago:
“Per-pixel reflection vector calculations for specular, instead of an
interpolated half-angle. The only remaining effect that has any visual
dependency on the underlying geometry is the shape of the specular highlight.
Ideally, you want the same final image for a surface regardless of if it is
two giant triangles, or a mesh of 1024 triangles. This will not be true if
any calculation done at a vertex involves anything other than linear math
operations. The specular half-angle calculation involves normalizations, so
the interpolation across triangles on a surface will be dependent on exactly
where the vertexes are located. The most visible end result of this is that
on large, flat, shiny surfaces where you expect a clean highlight circle
moving across it, you wind up with a highlight that distorts into an L shape
around the triangulation line.”

Oh, and renormalizing N in the fragment program also gives slightly better bump mapping, but it results in very unpleasant aliasing. This could be resolved by building the mipmaps of the bump map correctly yourself, but I haven’t tested this yet…

OK, I’ll sum things up.
I want to use, if possible, an interpolated half-angle and not compute it for each fragment.

The problem is this:
In the vertex shader, I compute H = L + V.
If I normalize L and V, I get wrong results.
If L and V are unnormalized, I get correct results.
In both circumstances, if I normalize H, I get wrong results.
In the fragment program, I normalize H with math instructions.
I can understand why I shouldn’t normalize H in the vertex shader (and instead do it in the FP), but:

Why should L and V be unnormalized when computing H for each vertex?

Every document I’ve read says the opposite, and even to me it doesn’t sound right.
Can someone who has implemented per-pixel specular lighting and got it right describe what he has done?

Well, I DID implement this, and I DID get it right.

jeickmann: What you describe is an artificial situation. The three normals that get calculated at the vertices and then interpolated are all computed with THE SAME FORMULA. Therefore they can be interpolated. What you describe would be completely different vectors, “picked at random”. Of course that will cause problems. Take a vector and its negative and try to interpolate them. Normalized or not, it will fail. You have to play by the rules.

Why should L and V be unnormalized when computing H for each vertex?

They shouldn’t be. As I already said, you mustn’t normalize the half-vector, but of course you have to normalize the view and the light vector. If you don’t, your half-vector will point in unpredictable directions, resulting in wrong specular highlights.

If you do this but still get wrong results, then there is some other error in your code.
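
In the vertex program sketched earlier in the thread, that just means keeping the normalization of L and V but replacing the final normalization of H with the raw sum (a sketch):

# H = L + V with L and V already normalized; H deliberately NOT normalized here
ADD result.texcoord[2], L, V;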

Maybe you have some normal in [0,1] range and forgot to expand it into [-1,1] range? It’s easy to forget that. All normals in normal maps are stored in [0,1] range.
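
In an ARB fragment program the expansion is a single MAD after the fetch (a sketch; the normal map on texture unit 0 is an assumption):

TEMP N;
TEX N, fragment.texcoord[0], texture[0], 2D;   # normal stored in [0,1]
MAD N, N, 2.0, -1.0;                           # expand to [-1,1]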

Jan.

Originally posted by Jan2000:
[b]They shouldn’t be. As I already said, you mustn’t normalize the half-vector, but of course you have to normalize the view and the light vector. If you don’t, your half-vector will point in unpredictable directions, resulting in wrong specular highlights.
If you do this but still get wrong results, then there is some other error in your code.
Jan.[/b]
I’ve thought of that, and checked my code for errors, but I can’t find anything. The thing is, with higher tessellation the results are correct. Are you sure the light and view vectors have to be normalized? As I read in the Carmack .plan posted above, we must not use anything but linear calculations in the VP:
“This will not be true if
any calculation done at a vertex involves anything other than linear math
operations.”

And he states that calculating the half vector requires “normalizations” (plural), not “normalization”. The plural must refer to the normalizations of the light and view vectors.

Carmack is arguing against using the half-angle vector at all for specular. Instead, he uses the reflection vector; that is, he uses Phong specular rather than Blinn. These are two different methods for producing specular illumination.

Originally posted by Korval:
Carmack is arguing against using the half-angle vector at all for specular. Instead, he uses the reflection vector; that is, he uses Phong specular rather than Blinn. These are two different methods for producing specular illumination.
Sure. But he said that “The only remaining effect that has any visual dependency on the underlying geometry is the shape of the specular highlight” when using the half-angle vector, which is NOT true as far as I can tell (or am I getting something wrong?). That’s what’s strange about it.
You can find the .plan I’m talking about at Blue's News .plans. He also states that he will probably allow using the light reflection vector instead of the half-angle vector as some kind of high-quality rendering option. The default will still be specular lighting with the H vector (it would be a mess to do on cards below NV3x/R3xx).

He also argues against ANY non-linear math done in vertex programs.
Think about it: if it’s wrong to interpolate the normalized half-angle, isn’t it also wrong to interpolate the sum of two normalized vectors? I mean, if we interpolate the normalized light and view vectors using two texcoords and calculate the half-angle in the FP, then we will get wrong results.

I did a little test: I used a scene with a large surface (like a floor) and a single light. I computed the half-angle using three approaches and made the FP output the half-angle as a color, in order to get a visual picture of the situation.

1) Compute L and V in the VP (unnormalized), interpolate them, and in the FP compute H = L + V and normalize it. This is definitely correct.
2) Compute L and V in the VP (unnormalized), compute H = L + V, interpolate it, and normalize it in the FP.
3) Compute L and V in the VP, normalize them, compute H = L + V, interpolate it, and normalize it in the FP.

I got three images with strange tones of red, blue and yellow, which represent the direction of the half-angle.

As I said, (1) is correct. Now, (2) produces an image that is similar to (1). Not identical, but similar.
On the other hand, (3) produces an image that is nothing like (1) or (2).

I’m sure my code is right, because it is so simple. Just a large quad, no bump mapping; all the work is done in the shaders as I described above.
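
For reference, the visualization pass boils down to something like this (a sketch; H arriving in texcoord[2] is an assumption):

!!ARBfp1.0
TEMP H, len;
# renormalize the interpolated half-angle
DP3 len.w, fragment.texcoord[2], fragment.texcoord[2];
RSQ len.w, len.w;
MUL H, fragment.texcoord[2], len.w;
# remap [-1,1] to [0,1] so the direction shows up as a color
MAD result.color, H, 0.5, 0.5;
END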

Originally posted by mikeman:
[b]He also argues against ANY non-linear math done in vertex programs.
Think about it: if it’s wrong to interpolate the normalized half-angle, isn’t it also wrong to interpolate the sum of two normalized vectors? I mean, if we interpolate the normalized light and view vectors using two texcoords and calculate the half-angle in the FP, then we will get wrong results.

I did a little test: I used a scene with a large surface (like a floor) and a single light. I computed the half-angle using three approaches and made the FP output the half-angle as a color, in order to get a visual picture of the situation.

1) Compute L and V in the VP (unnormalized), interpolate them, and in the FP compute H = L + V and normalize it. This is definitely correct.
2) Compute L and V in the VP (unnormalized), compute H = L + V, interpolate it, and normalize it in the FP.
3) Compute L and V in the VP, normalize them, compute H = L + V, interpolate it, and normalize it in the FP.

I got three images with strange tones of red, blue and yellow, which represent the direction of the half-angle.

As I said, (1) is correct. Now, (2) produces an image that is similar to (1). Not identical, but similar.
On the other hand, (3) produces an image that is nothing like (1) or (2).

I’m sure my code is right, because it is so simple. Just a large quad, no bump mapping; all the work is done in the shaders as I described above.[/b]
These are the same results I get (except for #1, which I haven’t tried yet). I think the difference between (1) and (2) may be due to different precision between the FP and the texture coord/VP interpolation. But (1) should be slower, because you are calculating H at every pixel, whereas in (2) and (3) it is calculated only at every vertex. Maybe you could post some pics of the three tests with a real specular highlight calculation and a black diffuse color. That would show the differences as they would (well, almost) appear in a real application.

Originally posted by Jens Scheddin:
But he said that “The only remaining effect that has any visual dependency on the underlying geometry is the shape of the specular highlight” when using the half-angle vector, which is NOT true as far as I can tell (or am I getting something wrong?).
I took that to mean that some materials are isotropic while others are anisotropic. Using Blinn or Phong leads to an isotropic BRDF, while using something like Cook-Torrance leads to an anisotropic BRDF.

Originally posted by PK:
[quote]Originally posted by Jens Scheddin:
But he said that “The only remaining effect that has any visual dependency on the underlying geometry is the shape of the specular highlight” when using the half-angle vector, which is NOT true as far as I can tell (or am I getting something wrong?).
I took that to mean that some materials are isotropic while others are anisotropic. Using Blinn or Phong leads to an isotropic BRDF, while using something like Cook-Torrance leads to an anisotropic BRDF.
[/QUOTE]I currently don’t know exactly what the difference between isotropic and anisotropic lighting is. I only remember some early DirectX 8 shader demos showing anisotropic lighting, but I never looked into what’s behind it. Maybe you could explain for a dumb member? :smiley:

mikeman: Maybe you should post your code. There is nothing left that I can think of.

However, it is absolutely logical that you have to normalize the view and light vectors. Take a piece of paper and draw it: it will become very clear why the one results in defined (and consistent) behaviour and the other does not.
If the view or light vector has a different length (which happens when the viewer or the light moves), the half-vector will change, although it shouldn’t, because only the DIRECTION matters, not the length. For example, with L = (1,0,0) and V = (0,2,0), the raw sum points along (1,2,0), although the true half-angle direction is along (1,1,0). After normalization it is always correct.

Jan.

Originally posted by Jens Scheddin:
[quote]Originally posted by PK:
I took that to mean that some materials are isotropic while others are anisotropic. Using Blinn or Phong leads to an isotropic BRDF, while using something like Cook-Torrance leads to an anisotropic BRDF.
I currently don’t know exactly what the difference between isotropic and anisotropic lighting is. I only remember some early DirectX 8 shader demos showing anisotropic lighting, but I never looked into what’s behind it. Maybe you could explain for a dumb member? :smiley:
[/QUOTE]Some materials have a rough surface, some have a smooth surface.

Blinn and Phong are better at modeling smooth surfaces because they give you a perfectly round specular highlight. Whenever you see a cosine term, always think circle. Ask yourself where the circle is.

Originally posted by Jan2000:
However, it is absolutely logical that you have to normalize the view and light vectors. Take a piece of paper and draw it: it will become very clear why the one results in defined (and consistent) behaviour and the other does not.
If the view or light vector has a different length (which happens when the viewer or the light moves), the half-vector will change, although it shouldn’t, because only the DIRECTION matters, not the length. After normalization it is always correct.

Jan2000, you’re absolutely right on that. The fact that I get correct results with (2) is a coincidence.
All I was trying to do was find out whether I could calculate the half-angle only per-vertex and still have a perfectly smooth highlight with very low tessellation.
I now think we can’t have that. Carmack states that computing the half-angle requires normalizations (of L and V), and that’s why the highlight will have the wrong shape on large surfaces:

“The specular half-angle calculation involves normalizations, so
the interpolation across triangles on a surface will be dependent on exactly
where the vertexes are located. The most visible end result of this is that
on large, flat, shiny surfaces where you expect a clean highlight circle
moving across it, you wind up with a highlight that distorts into an L shape
around the triangulation line.”

If this is right, then the specular highlights in your application won’t look good on large flat surfaces made up of just two triangles.
The only thing we can do to get a perfect specular highlight at any tessellation is interpolate L and V (unnormalized) and calculate H per fragment.
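
In the VP, that amounts to nothing more than two SUBs (a sketch; the env registers are placeholders):

SUB result.texcoord[1], program.env[0], vertex.position;   # L, unnormalized
SUB result.texcoord[2], program.env[1], vertex.position;   # V, unnormalized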

Although I don’t understand why Carmack is proposing to calculate the reflection vector per fragment instead of interpolating a half-angle. Calculating the half-angle per fragment works just fine and is much cheaper.

Originally posted by Jens Scheddin:
# Calculate light vector (L)
SUB lightVec, lightPos, iPos;
# Calculate half-angle vector (H)
SUB halfVec, viewPos, iPos;
ADD halfVec, lightVec, halfVec;

When you’re calculating the half-angle, you MUST normalize L and V in order to get correct results.
What you (and I) are doing works in some cases where L and V have similar lengths. If, for example, L is long and V is much shorter, you end up with H ≈ L, which of course is wrong (with L = (10,0,0) and V = (0,1,0), the raw sum (10,1,0) points almost exactly along L). The thing is, normalizing L and V in the VP is even more wrong(!!), since, as it turns out, doing non-linear math in the VP is a deadly sin (if you want your lighting to be tessellation-independent).