I’m using tangent-space bump mapping and am adding some fake specular highlights from a light map, like this:
float specular = texture2D(lightMap, reflect(eyeVecTSN, normalTS).xy * .5 + .5).r;
I get texture coordinates in the correct range and they vary pretty well in the large, but there are small artifacts that get worse when I zoom in. See these pictures:
The normal map seems OK, and the artifacts don’t appear to be per-texel either. I’ve normalized everything just to be sure (it didn’t help much).
As you can see, the coords are correct locally, but there are blocks with different offsets. I’ve never encountered anything like it. I suspect numerical issues, but I don’t know enough about it to figure out if and/or what is wrong.
This is on a GeForce 7800, with full float precision in the shader (no difference from half).
All help welcome!
Just in case: make sure you have texture filtering enabled.
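For reference, a minimal sketch of what “filtering enabled” means at the GL level (this assumes a bound 2D texture with mipmaps present; lightMapTex is a hypothetical texture handle):

```c
/* bilinear magnification, trilinear minification */
glBindTexture(GL_TEXTURE_2D, lightMapTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
```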
Blocky artifacts can also occur if you combine floats that are in very different ranges. An example:
float a = 1000.0;
float b = 0.01;
float c = 1000.1;
float d = 0.1;

float x = (a + b) - (c + d);
/* bad result (assuming ~3 significant digits of precision): */
/* x = 1000.01 - 1000.2 */
/* x = 100?.?? - 100?.?  - precision lost after 3 digits */
/* x = 0.0 */

x = (a - c) + (b - d);
/* good result: */
/* x = -0.1 + (-0.09) */
/* x = -0.100?? + (-0.0900??) - precision lost after 3 digits */
/* x = -0.19 */
Review your code for calculating eyeVecTSN and normalTS.
The eye vector should not be normalised at the vertex level, only at the pixel level.
Might I ask why the eye vector should not be normalized? Is there a good and detailed doc about this somewhere?
I had some weirdness in there (very different lengths on tangents vs. normals was one thing), but that’s not all of it.
It’s better, but not perfect. Thanks for the help, k_szczech.
Originally posted by Lord crc:
Might I ask why the eye vector should not be normalized?
If you normalize the eye vector in the vertex shader, you get incorrect interpolation because important information is lost.
In the following diagram, A and B are two vertices of a polygon (the values below are their x coordinates). E represents a point slightly above the polygon (at a height of, say, 0.1) with an x coordinate of zero.

A         E    B
-3        0    1
If you interpolate between the vectors A->E = (3, 0.1) and B->E = (-1, 0.1), the interpolated value at the point with zero x coordinate (interpolation factor 0.75) is (0, 0.1), which correctly points upwards towards E. If you instead normalize both vectors before the interpolation, you interpolate from (0.99944490, 0.0333148) to (-0.99503719, 0.099503), and the interpolated value for the same interpolation factor (and thus the same spot on the polygon) is (-0.4964166675, 0.082955). That direction is certainly wrong: it has a nonzero x component, so it does not point towards E.
The error only becomes apparent when the eye point is very close to the polygon. You should get a perfect specular highlight no matter how far away you are.
Nice. That helped with those artifacts. I still get the jittery stuff though.
Could that be because I have an insufficiently tesselated mesh? (Various reasons for this at the moment)
when you say jittery stuff, what do you mean?
if you’re saying the whole specular reflection moves randomly about as the viewpoint moves, then maybe your object is too big or placed at too great a distance from the origin and you’re seeing the limits of single precision floats.
Sorry, I meant that not normalizing the eye vector helped with the artifacts that appear when going very close to the polygons. That seldom occurs in my app, however; I only took that screenshot to better show the artifacts I’m really trying to fix.
What I did mean was the “checkered” sawtooth artifacts. The highlight texturing in the pictures I’ve posted above isn’t smooth, but rather jagged.
It seems as if the texture coordinates are making jumps at irregular intervals. These jumps are aligned with the UV mapping direction, but they do not align with texel value changes (plus, I do have bilinear enabled).
My object is roughly 0.2 units in size, with near/far planes at 0.001/0.5. (I’m dealing with a rather small scale here, but it’s not that tiny, IMHO.)
that is really small.
try scaling everything up 100 times and see what happens.
Didn’t help much.
However, all I could do was put a transform on top of everything, which essentially means the clip-space coordinates come out equivalent, and since I’m seeing errors at the fragment level I find it hard to believe it would fix anything.
(I.e. it’s floating point, so whether I’m at a scale of 1e2 or 1e-2 shouldn’t matter; as long as I have enough significant digits I should get 0…1 when interpolating across the polys.)
I could also hack the vertex program to scale things up, but that’s equivalent to changing the modelview matrix.
Certainly a weird specular highlight you’ve got there. You say you’ve checked your texture filtering parameters?
Other than that, haven’t a clue, sorry. Good luck.
Truly. The jaggies don’t match texels / texture resolution, so I doubt there’s filtering involved.
Ok. Thanks for helping out anyway.
Thanks for the normalization explanation.
Are you using uncompressed textures? I.e. not DXT1/3/5.
Have you tried using 16-bit/channel textures? Keep in mind that most hardware interpolates 8-bit/channel textures at 8-bit precision, so even if you don’t have any more precise data, you could get better quality by upscaling it to a 16-bit/channel texture.
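A sketch of the suggested upload, assuming the normal-map data has been widened to unsigned 16 bits per channel (data16, w and h are hypothetical):

```c
/* GL_RGBA16 requests 16 bits per channel; on hardware that supports it,
   filtering is then done at higher precision as well */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, w, h, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, data16);
```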
Hm. I can’t get any improvement by changing the texture format (will try harder soon). However, I think you’re right that I’m seeing these interpolation artifacts.
In particular, if I render gl_FragColor = fwidth(texture2D(normalMap, uv)) * 20.0, I get lines (indicating steps in the texture data) that match the discontinuities I’m seeing.
(Doing this confirms that I have linear interpolation and no compression, btw.)
Also, if I generate the normal procedurally (sin/cos noise based on uv), I don’t get any artifacts.