mirrors and wrong projection (or something)

Hi, I’ve used an FBO and oblique frustum culling to render mirrors in a scene.
I render the scene from the mirror’s point of view and then project it onto a mesh (usually 2 triangles forming a plane) to get a mirror.

Usually it works pretty well; in fact, when all 4 vertices of the polygon are visible everything is OK, as you can see in this screenshot.

Mirror Screenshot 1

The problem arises when some vertices of the mirror are not visible: the texture gets clamped (as expected, since I specify GL_CLAMP_TO_EDGE) and I get this

Mirror Screenshot 2

Has anybody experienced the same problem? Perhaps GL_CLAMP_TO_EDGE is not the correct mode? Or should I tessellate the mesh I project the texture onto a bit more?

Thanks in advance,

Toni

I have seen something like this before.
I am willing to bet it has to do with how you are projecting the texture and not taking into account the “w” coordinate. (I.e., are you just setting it to 1 in a vertex program, or are you using the “w” from the position?)

Can you describe how you are projecting the texture?

Of course,

this is the vertex program:

  
struct TexProjMatricesParam
{
    float4x4 TexProjMatrix0;
};

struct TexCoordConnector
{
    float4 texCoord0 : TEXCOORD0; // projective texture coordinate
};

struct SamplerContainer
{
    sampler2D diffuseSampler : TEXUNIT0;
};

// vertex shader inputs
struct appin
{
    float4 Position : POSITION;
    float4 Normal   : NORMAL;
};

// vertex shader outputs
struct vertout
{
    float4 HPosition : POSITION;
    TexCoordConnector TexCoords;
};

vertout main(appin IN,
             uniform TexProjMatricesParam TexProjMatrices,
             uniform float4x4 mvp)
{
    vertout OUT;

    // clip-space position for rasterization
    OUT.HPosition = mul(mvp, IN.Position);

    // vertex position as seen by the mirror camera, used as the
    // projective texture coordinate
    OUT.TexCoords.texCoord0 = mul(TexProjMatrices.TexProjMatrix0, IN.Position);

    return OUT;
} // main

As you can see, nothing really unusual.
The matrix used for projecting is the modelview of the mirror camera * the oblique projection matrix, scaled and translated by 0.5 on each axis.
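For reference, the scale-and-translate-by-0.5 part can be written as a constant matrix; a minimal Cg sketch under my assumptions about Toni’s conventions (column vectors, as in mul(matrix, vector); the names Bias, ObliqueProjection and MirrorModelView are mine, not from the thread):

const float4x4 Bias = float4x4(0.5, 0.0, 0.0, 0.5,   // x' = 0.5*x + 0.5*w
                               0.0, 0.5, 0.0, 0.5,   // y' = 0.5*y + 0.5*w
                               0.0, 0.0, 0.5, 0.5,   // z' = 0.5*z + 0.5*w
                               0.0, 0.0, 0.0, 1.0);  // w' = w

// After the perspective divide this remaps [-1, 1] to [0, 1]. The full
// texture projection matrix would then be composed once on the CPU:
// TexProjMatrix0 = Bias * ObliqueProjection * MirrorModelView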

Thanks in advance

Just an idea to narrow down the problem: could you save the contents of the (source texture) FBO to a file when the display is wrong? Just to see whether the error is already present at that point.

Another thought that struck me… could the non-visible vertices that define the “mirror” be outside the viewport by about the same number of pixels that the clamping is displaying?

As for tessellating the surface you project the texture onto, I don’t see how this could help. If anything, bringing back the dark ages where we clipped in software could be a way to make really, really sure. :slight_smile:

I experienced the exact same problem while doing refraction for a water shader I was writing. The idea was to grab the already-rendered opaque objects in the scene from the frame buffer and project them in a manner similar to what you are using. I used to get the exact same artifacts on nVidia hardware, and a black portion instead of these artifacts on ATI hardware! You would also notice the error creep in as you move away from the mirror (while keeping the 4 vertices within the frustum; I can’t explain the sort of error, but I am sure you will notice it once you try it yourself :slight_smile: ).

As it turns out, there were a couple of things I was doing wrong. Firstly, you have to do a perspective divide by the w-coordinate. Secondly (the more important one), after the perspective divide you get the coordinates in the [-1, 1] range; you have to scale and bias them into the [0, 1] range yourself by doing the following:

texCoord0 = (texCoord0 + 1.0) * 0.5;
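Putting both steps together in a fragment program, a minimal sketch (assuming, as in Toni’s vertex program, that TEXCOORD0 carries the still-undivided projective coordinate and the mirror texture sits on TEXUNIT0):

float4 main(float4 texCoord0 : TEXCOORD0,
            uniform sampler2D diffuseSampler : TEXUNIT0) : COLOR
{
    // perspective divide, per fragment
    float2 uv = texCoord0.xy / texCoord0.w;

    // scale and bias from [-1, 1] into [0, 1]
    uv = (uv + 1.0) * 0.5;

    return tex2D(diffuseSampler, uv);
}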

I hope this fixes your problem, because it fixed mine. Do send us a screenshot after the fix :slight_smile: .

Originally posted by Zulfiqar Malik:
Firstly, you have to do a perspective divide by the w-coordinate.
That’s spot on.

I’d like to add that the division is to be performed per fragment, not per vertex. You’ll have to implement it in your fragment shader.
In ARB_fragment_program you’d use TXP instead of TEX, and you’d be done. I’m sure GLSL has this capability too, I just don’t know the syntax right now – it should say something to the effect of “… divides the s,t,r components of the texcoord by its q component before using the result to look up the texture”.

This “special” mode of texture lookup exists exactly for this case.
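In Cg the equivalent of TXP is tex2Dproj; a minimal fragment program sketch, reusing the names from Toni’s vertex program:

// tex2Dproj divides the texcoord by its last component before sampling,
// i.e. the per-fragment perspective divide described above.
float4 main(float4 texCoord0 : TEXCOORD0,
            uniform sampler2D diffuseSampler : TEXUNIT0) : COLOR
{
    return tex2Dproj(diffuseSampler, texCoord0);
}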

Originally posted by Zulfiqar Malik:
Secondly (the more important one), after the perspective divide you get the coordinates in the [-1, 1] range; you have to scale and bias them into the [0, 1] range yourself <…>
Not sure why that would be required. The result can only be negative if either a texcoord component or w (but not both) is negative, and both should be positive anyway.

Btw:

Originally posted by Zulfiqar Malik:
I used to get the exact same artifacts on nVidia hardware, and a black portion instead of these artifacts on ATI hardware!
This points to a disagreement about how CLAMP_TO_BORDER should work. I’d put the blame on NVIDIA in this case, since the unwanted image looks like it uses CLAMP_TO_EDGE.

Originally posted by tamlin:
[b]Another thought that struck me… could the non-visible vertices that define the “mirror” be outside the viewport by about the same number of pixels that the clamping is displaying?[/b]
I’m starting to think that is what’s happening, tamlin, as I can’t find any other explanation for this.

Originally posted by zeckensack:
[quote]Originally posted by Zulfiqar Malik:
Secondly (the more important one), after the perspective divide you get the coordinates in the [-1, 1] range; you have to scale and bias them into the [0, 1] range yourself <…>

Not sure why that would be required. The result can only be negative if either a texcoord component or w (but not both) is negative, and both should be positive anyway.[/quote]
If you study the pipeline, you’ll realize that after the perspective divide you get a 3D cube in the [-1, 1] range, which means the resulting texCoord.xyz will be in the [-1, 1] range. The reason I required the scaling and bias to bring it into the [0, 1] range (and hence thought that Toni might need it as well) was that I used texCoord.xy to look up the texel to put on the refracting surface. Obviously, using negative texture coordinates gave me wrong lookups, and that’s why I had to scale and bias to get them into the proper [0, 1] range for texel lookups.

Hope that helps clear things up, zeckensack.

Originally posted by zeckensack:
[b]That’s spot on.

I’d like to add that the division is to be performed per fragment, not per vertex. You’ll have to implement it in your fragment shader.
In ARB_fragment_program you’d use TXP instead of TEX, and you’d be done. I’m sure GLSL has this capability too, I just don’t know the syntax right now – it should say something to the effect of “… divides the s,t,r components of the texcoord by its q component before using the result to look up the texture”.

This “special” mode of texture lookup exists exactly for this case.[/b]

Well, as I stated above, I pass the ModelView*Projection of the mirror camera to the vertex program, so multiplying it by IN.Position I get the vertex (which is in world space) in post-projective space as seen from the mirror camera, am I right?
Then I bias and scale it to have it in [0, 1] space.
Then in the fragment program I simply call

Originally posted by zeckensack:
[quote]Originally posted by Zulfiqar Malik:
I used to get the exact same artifacts on nVidia hardware, and a black portion instead of these artifacts on ATI hardware!

This points to a disagreement about how CLAMP_TO_BORDER should work. I’d put the blame on NVIDIA in this case, since the unwanted image looks like it uses CLAMP_TO_EDGE.[/quote]

Yeah, I use clamp to edge :slight_smile:

Originally posted by Zulfiqar Malik:
[quote]If you study the pipeline, you’ll realize that after the perspective divide you get a 3D cube in the [-1, 1] range, which means the resulting texCoord.xyz will be in the [-1, 1] range.[/quote]
Ahhh, of course.
gluPerspective puts the target frustum center at (0, 0, 0) … I see now.
Maybe the correction should be done by the texture matrix. glFrustum instead of gluPerspective might do the trick. Scale and bias per fragment would be much more expensive – it forces you onto the “dependent read” path.
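A Cg sketch of that route (the names biasedTexProj, projectTexCoord and sampleMirror are mine, not from the thread; biasedTexProj would be the CPU-side product Bias * ObliqueProjection * MirrorModelView from above):

// Vertex side: the scale/bias is folded into the matrix on the CPU, so the
// texcoord is passed on undivided (still homogeneous).
float4 projectTexCoord(float4x4 biasedTexProj, float4 objectPos)
{
    return mul(biasedTexProj, objectPos);
}

// Fragment side: tex2Dproj performs the per-fragment divide in the texture
// unit, which keeps the lookup off the dependent-read path.
float4 sampleMirror(sampler2D mirrorTex, float4 projCoord)
{
    return tex2Dproj(mirrorTex, projCoord);
}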

Originally posted by toni:
[quote]Well, as I stated above, I pass the ModelView*Projection of the mirror camera to the vertex program, so multiplying it by IN.Position I get the vertex (which is in world space) in post-projective space as seen from the mirror camera, am I right?
Then I bias and scale it to have it in [0, 1] space.
Then in the fragment program I simply call[/quote]

Try doing the perspective divide yourself, and THEN scaling and biasing into the [0, 1] range as I mentioned above.

I’ve had an idea that at first sounds totally weird, but… well, I don’t have any other that explains this behaviour, so perhaps someone who knows more than me can enlighten me :slight_smile:

I have the polygon I am going to project the texture onto. Given a vertex V that is outside the mirror’s projection frustum, I multiply MVPmirror*V and get some coordinates, theoretically outside the [-1, 1] cube in post-projective space. Then I bias and scale them to have them in [0, 1] space, but as these coordinates are outside the frustum they end up in some range greater than 1 or less than 0.
I pass those coordinates to the fragment program using one of the interpolators, and if (and here comes the speculation) for some reason those coordinates are clamped to the [0, 1] range BEFORE I can use them to do the texture fetch, that would explain why I see what I see.

If I remember correctly I can put whatever values I want in the TEXCOORD0 semantic, but… now I’m starting to doubt that :slight_smile:

Toni


Originally posted by toni:
[b]I’ve had an idea that at first sounds totally weird, but… well, I don’t have any other that explains this behaviour, so perhaps someone who knows more than me can enlighten me :slight_smile:

I have the polygon I am going to project the texture onto. Given a vertex V that is outside the mirror’s projection frustum, I multiply MVPmirror*V and get some coordinates, theoretically outside the [-1, 1] cube in post-projective space. Then I bias and scale them to have them in [0, 1] space.
I pass those coordinates to the fragment program using one of the interpolators, and if (and here comes the speculation) for some reason those coordinates are clamped to the [0, 1] range BEFORE I can use them to do the texture fetch, that would explain why I see what I see.

If I remember correctly I can put whatever values I want in the TEXCOORD0 semantic, but… now I’m starting to doubt that :slight_smile: [/b]

Okay, here is how interpolators work. You specify something per-vertex, and the interpolator will give you per-fragment values that are linearly interpolated (taking perspective into account) between successive vertices. So if your value never goes out of the [0, 1] range at the vertices, there is no way on earth the interpolator is going to give you a value beyond this range. Get it? You can be sure that your interpolators are working correctly because they sit much deeper down the pipeline and are not dependent on the shading language you choose, so they should behave the same in Cg, GLSL and HLSL on a given piece of hardware.
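A sketch of the point (perspective correction omitted for brevity; the helper name is mine):

// A varying at a fragment is a convex combination of the per-vertex values,
// so along an edge the result never leaves [min(a,b), max(a,b)].
float4 interpolateVarying(float4 a, float4 b, float t) // t in [0, 1]
{
    return lerp(a, b, t);
}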

Well, problem solved.
It was my fault, not my program’s fault.
I use some objects as RTT objects: in the culling phase, if I “see” one of these objects I set up another culling query and another render query. The RTT2D objects have a quad placed somewhere; I reflect the current modelview matrix about that plane and use oblique frustum culling to cull all the geometry behind the mirror.

OK. I had put the RTT object (and hence the plane) slightly in front of the actual mirror, so it didn’t project correctly, because the actual mirror’s vertices were outside the projection of the RTT object (due to the oblique frustum culling).

Anyway, I know it is a complete mess, but thanks to everyone for your replies. I’m posting 2 screenshots here showing how the mirrors look now :slight_smile:

Thanks again,

Toni

Mirror 1
Mirror 2