Hello folks.

I am working on a GPU raycaster for my thesis and it works pretty well. First I used the multipass algorithm where you render the front and back faces of a cube in order to get the starting position and the ray direction. But now I have implemented the single-pass method, where you should be able to derive the direction vector

from the camera position and the interpolated vertex position. But it looks kind of strange, and when I move the camera it kind of skews, and every article I have read says that it is so easy. So the question is: does any of you have a solution to this problem?

Thanks in advance, Trier

It’s hard to say what your problem is without knowing more details. Normally it shouldn’t be a problem:

vertex shader:

```
varying vec3 direction;
uniform vec2 tan_FOV; // tangent of half the field of view, along x and y

void main()
{
    // gl_Vertex is a corner of a full-screen quad in [-1,1]^2; scale by
    // tan(FOV/2) and point down -z to get the view-space ray direction.
    // w = 0, so it is transformed as a direction (translation ignored).
    vec4 edge = gl_ModelViewMatrix * vec4(tan_FOV * gl_Vertex.xy, -1.0, 0.0);
    direction = edge.xyz;

    // The quad is already in clip space, so pass it through untouched.
    gl_Position = gl_Vertex;
}
```

This vertex shader assumes that you draw a single quad with vertices from (-1,-1) to (1,1), and with the tangent of half the FOV along x and y in the tan_FOV uniform.

In the fragment shader, you just normalize the direction, pass the camera position in another uniform, and you have your ray.
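As a minimal sketch, the matching fragment shader could look like this (the uniform name `camera_position` is an assumption, not something from the original post; the ray-marching loop itself is left out):

```
varying vec3 direction;
uniform vec3 camera_position; // assumed uniform name for the eye position

void main()
{
    // The interpolated direction is not unit length after rasterization,
    // so renormalize it per fragment.
    vec3 ray_dir = normalize(direction);
    vec3 ray_origin = camera_position;

    // ... march along ray_origin + t * ray_dir through the volume here ...

    // Debug output: visualize the direction to check for the skewing issue.
    gl_FragColor = vec4(ray_dir * 0.5 + 0.5, 1.0);
}
```

Rendering the remapped direction like this is a quick way to see whether the skew comes from the ray setup or from the volume sampling: the color gradient should stay glued to the screen corners as the camera rotates.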

Thanks dude.

I appreciate the response, this is really a nice forum.

Problem solved