Help with ray casting - near plane

Hi guys

I'm implementing the ray casting algorithm, but I'm running into serious trouble.

My basic idea was:

1. Compute the nearest and farthest vertices of the bounding box to determine the maximum distance for the ray traversal.
2. Send the min and max values to the shader.

1. In the shader, I recover the camera position, then calculate the ray direction as

vec3 dir = -gl_TexCoord[0].xyz;

and advance a position along the ray.

2. Composite the final color.

My problem is how to render the image plane. I do:


glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0);
glVertex2f(-1.0, -1.0);
glTexCoord2f(1.0, 0.0);
glVertex2f(1.0, -1.0);
glTexCoord2f(1.0, 1.0);
glVertex2f(1.0, 1.0);
glTexCoord2f(0.0, 1.0);
glVertex2f(-1.0, 1.0);
glEnd();

  • My viewport is 512x512

but I don't see any image. Also, how should I render a case like endoscopy, e.g. when the camera crosses the near plane?


One option is not to use texture coordinates. If you just draw a quad over the entire screen, that will get you into the fragment program. There you need a camera position set from the CPU. You can then use gl_FragCoord, which tells you what pixel you are at, to determine the direction of the ray. Then you just intersect the box inside your fragment shader.

You could try this method.
Worked for me.


Hi Toneburst

I read Peter's tutorial and tried to follow the process. I am working with GLSL and create a texture2D as a buffer for the final rendered image.
I estimate the first sample with gl_TexCoord[0].xyz and calculate the ray direction as
direction = normalize(gl_TexCoord[0].xyz);
The algorithm is similar to the one in Real-Time Volume Graphics.

Next, I call a function

render_buffer_to_screen(); // draws a quad with texture2D coordinates

to show the result. I see one square (red and blurred), and when I move the camera position it behaves like a single slice.

I would expect the texture2D (framebuffer) to behave like an image that is updated as the camera moves.

What can I do ?


Thanks all

Everything works now.

OpenGL and GLSL are the perfect combination: ray casting is easy to do without any buffer, rendering only the front faces of the cube.

Consider rendering only the back faces of the cube. When only the front faces are rendered, no fragments (and thus no rays) are generated when the viewer enters the cube.
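The camera-inside case can also be handled numerically: when the eye is inside the box, the entry distance along the ray is negative, so the ray should start at the eye (distance 0) rather than at a front face. A minimal illustration (this helper is my own, not code from the thread):

```cpp
#include <algorithm>

// Clamp the box entry distance so the ray never starts behind the eye.
// Outside the volume, tNear > 0 and is used as-is; inside the volume,
// tNear < 0 and the march starts at the eye position instead.
float rayStartDistance(float tNear) {
    return std::max(tNear, 0.0f);
}
```

In shader terms this is the same idea as using the back faces to generate fragments: the ray always gets a valid start point even when the front faces are clipped by the near plane.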


I found I didn't need a separate front-faces texture, either. I simply applied the raycasting shader directly to the cube's front faces, passing a varying with the interpolated screen-space coordinates from the vertex to the fragment shader, and using the previously rendered back-faces texture for the ray end positions.

