I need to implement screen space reflections.

As a first step, I want my fragment shader to draw one reflected, red ray.

The ray should begin at world position (100, 0, 100). For debugging, this world position is drawn as a green square; you can see it in the screenshot.

The ray needs to be the imagined ray from the camera to world position (100, 0, 100) (the green square), reflected off the ground at that point.
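
For what it's worth, here is how I understand the reflection is supposed to behave, as a minimal CPU-side sketch (Python; the camera position is made up, and I'm assuming a flat floor whose normal points straight up):

```python
# Sketch of GLSL's reflect(I, N) = I - 2 * dot(N, I) * N, with N unit length.
# Camera position below is a made-up example, not my real one.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(x / length for x in v)

def reflect(i, n):
    d = 2.0 * dot(n, i)
    return tuple(ix - d * nx for ix, nx in zip(i, n))

cam = (100.0, 50.0, 0.0)          # hypothetical camera position
square = (100.0, 0.0, 100.0)      # the green square, the ray origin
incoming = normalize(tuple(s - c for s, c in zip(square, cam)))
ground_normal = (0.0, 1.0, 0.0)   # flat ground: normal points straight up
reflected = reflect(incoming, ground_normal)
print(incoming)    # ≈ (0.0, -0.447, 0.894) — heading down toward the square
print(reflected)   # ≈ (0.0, +0.447, 0.894) — bounced back up off the ground
```

The y component simply flips sign while x and z are preserved, which is what I'd expect from a mirror-like floor.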

The camera's x and z values are marked by a big yellow square to verify that the camera position is transferred correctly into the shader via a uniform vector; this seems to work.

My fragment shader, listed below, recalculates the very same reflected ray for each fragment and checks whether the current fragment is close to the ray. If it is, the fragment is drawn in red; otherwise the fragment is read from a previously rendered screen-buffer texture.

Here’s a screenshot of how this looks:

[ATTACH=CONFIG]1278[/ATTACH]

Here’s my fragment shader code:

```glsl
in vec3 in_pos;
in vec3 norm_wc;
in vec4 pos_wc;
out vec4 out_col;

uniform mat4 proj;
uniform mat4 view;
uniform mat4 model;
uniform vec3 cam_pos;
uniform int comb;
uniform sampler2D screenbuffer_tex;

void main() {
    vec3 CamPos = cam_pos;                         // the camera position
    vec3 RayPosOriginal = vec3(100.0, 0.0, 100.0); // the green square

    // the ray from camera to green square (sign flippable via 'comb')
    vec3 CamToFragmentDir = ((comb % 16) >= 8 ? +1.0 : -1.0) * normalize(RayPosOriginal - CamPos);

    // the normal at the green square, selectable via 'comb'
    vec3 GroundNormal;
    if ((comb % 8) == 0)      GroundNormal = normalize(vec3(-1.0, -1.0, -1.0));
    else if ((comb % 8) == 1) GroundNormal = normalize(vec3(+1.0, -1.0, -1.0));
    else if ((comb % 8) == 2) GroundNormal = normalize(vec3(-1.0, +1.0, -1.0));
    else if ((comb % 8) == 3) GroundNormal = normalize(vec3(-1.0, -1.0, +1.0));
    else if ((comb % 8) == 4) GroundNormal = normalize(vec3(+1.0, +1.0, -1.0));
    else if ((comb % 8) == 5) GroundNormal = normalize(vec3(+1.0, -1.0, +1.0));
    else if ((comb % 8) == 6) GroundNormal = normalize(vec3(-1.0, +1.0, +1.0));
    else                      GroundNormal = normalize(vec3(+1.0, +1.0, +1.0));

    vec3 ReflectionDir = normalize(reflect(CamToFragmentDir, GroundNormal));

    vec3 RayPosCurrent;         // in world coordinates
    vec4 RayPosCurrentOnScreen; // in screen coordinates
    mat4 PVM = proj * view * model;
    float RedFactor = 0.0;      // black at the ray origin, pure red at full length

    for (float RayLength = 0.0; RayLength <= 500.0; RayLength++)
    {
        RayPosCurrent = RayPosOriginal + ReflectionDir * RayLength;
        RayPosCurrentOnScreen = PVM * vec4(RayPosCurrent, 1.0);
        // perspective divide, then map NDC [-1, 1] to [0, 1]
        RayPosCurrentOnScreen.x = RayPosCurrentOnScreen.x * 0.5 / RayPosCurrentOnScreen.w + 0.5;
        RayPosCurrentOnScreen.y = RayPosCurrentOnScreen.y * 0.5 / RayPosCurrentOnScreen.w + 0.5;
        if (abs(RayPosCurrentOnScreen.x - (gl_FragCoord.x / 800.0)) < 0.005 &&
            abs(RayPosCurrentOnScreen.y - (gl_FragCoord.y / 600.0)) < 0.005)
            RedFactor = RayLength;
    }

    if (RedFactor > 0.0)
        out_col = vec4(RedFactor / 512.0, 0.0, 0.0, 1.0); // fragment is on the ray => draw red
    else
        out_col = texture(screenbuffer_tex, vec2(gl_FragCoord.x / 800.0, gl_FragCoord.y / 600.0)); // don't draw the ray here

    if (pos_wc.x > 90.0 && pos_wc.x < 110.0 &&
        pos_wc.z > 90.0 && pos_wc.z < 110.0)
        out_col = vec4(0.0, 1.0, 0.0, 1.0); // draw the green square, the origin of the ray

    if (abs(pos_wc.x - CamPos.x) < 50.0 &&
        abs(pos_wc.z - CamPos.z) < 50.0)
        out_col = vec4(1.0, 1.0, 0.0, 1.0); // mark the camera pos in yellow
}
```
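
To convince myself the screen mapping inside the loop is right, I checked the NDC math on the CPU (Python; the perspective matrix and the view-space point below are made up, not my actual data):

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    # standard OpenGL-style perspective projection matrix (row-major here)
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mul(m, v):
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

proj = perspective(60.0, 800.0 / 600.0, 0.1, 1000.0)
point_view = (1.0, 2.0, -10.0, 1.0)  # made-up point, already in view space
clip = mul(proj, point_view)

# The shader computes x * 0.5 / w + 0.5, which is the same as
# (x / w) * 0.5 + 0.5: perspective divide, then map NDC [-1, 1] to [0, 1].
sx_shader = clip[0] * 0.5 / clip[3] + 0.5
sx_ndc = (clip[0] / clip[3]) * 0.5 + 0.5
print(sx_shader, sx_ndc)  # both ≈ 0.565, i.e. slightly right of screen center
```

So the mapping to [0, 1] screen space itself should match what gl_FragCoord divided by the viewport size gives, as long as the point is in front of the camera (w > 0 after projection).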

My problem is that the ray is never reflected correctly. In my despair I implemented a hack: the uniform ‘comb’[ination], an integer value that can be increased per key press. But for none of the ‘comb’ values between 0 and 15 is the ray drawn correctly: it either points in the wrong direction or begins at an odd position, depending on the ‘comb’ value.

My question: can anyone see an obvious error in the shader code?

Thanks!

PS: I know this code is a nasty hack, but I’m a beginner; clean code will come later in this project, once it works at all.