Volume rendering, ray casting

I found two example shaders for volume rendering using
ray casting. I have a question about getting the ray start position.
Example A:

// find the right place to lookup in the backside buffer
float2 texc = ((IN.Pos.xy / IN.Pos.w) + 1) / 2; 
// the start position of the ray is 
float4 start = IN.TexCoord; 

Example B:

vec3 rayStart = texture2D(RayStart, gl_TexCoord[0].st).xyz;
vec3 rayEnd   = texture2D(RayEnd, gl_TexCoord[0].st).xyz;

My question is: why do they compute
((IN.Pos.xy / IN.Pos.w) + 1) / 2
in example A but not in example B?



Hi lobbel,

I had some fun with the same code, but rendering a surface equation dynamically, rather than a static volume texture.

There are some clips and more screenshots on my blog, if you’re interested.

I wonder if you’ve put any thought into optimisation of the basic algorithm?
I have some ideas, but I’m not sure how practical they are.



In the first code, the texture coordinates are computed from the fragment’s clip-space position, which needs to be perspective-divided and then remapped from the [-1, 1] interval to [0, 1].
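To see what that line is doing numerically: the divide by w takes the clip-space position to normalized device coordinates in [-1, 1], and the (+1)/2 remaps those to [0, 1] so they can address the backside buffer as texture coordinates. A minimal sketch of the same math in plain Python (the input values are made up for illustration):

```python
def clip_to_texcoord(x, y, w):
    """Same math as ((IN.Pos.xy / IN.Pos.w) + 1) / 2 in example A:
    perspective divide, then remap NDC [-1, 1] to texture space [0, 1]."""
    ndc_x, ndc_y = x / w, y / w                 # perspective divide -> [-1, 1]
    return ((ndc_x + 1) / 2, (ndc_y + 1) / 2)   # remap -> [0, 1]

# A fragment at the centre of the screen maps to texcoord (0.5, 0.5):
print(clip_to_texcoord(0.0, 0.0, 1.0))   # (0.5, 0.5)
# A fragment at the top-right clip corner maps to (1.0, 1.0):
print(clip_to_texcoord(2.0, 2.0, 2.0))   # (1.0, 1.0)
```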

In the second code, he seems to fetch a 3D world position (I’m not sure here, since I don’t know the implementation details). I assume a 32-bit or 16-bit floating-point texture is used, so there is no need to transform the fetched data.
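For completeness: once rayStart and rayEnd already hold positions in the same space (e.g. [0, 1] volume coordinates), the per-fragment ray setup is just a subtraction, which is why example B needs no extra transform. A sketch in plain Python (the function name and step size are illustrative, not from either example):

```python
import math

def ray_setup(ray_start, ray_end, step_size=0.01):
    """Build the marching step from fetched start/end positions.
    Both inputs are assumed to already be in [0, 1] volume space,
    so no remapping is required before subtracting."""
    direction = [e - s for s, e in zip(ray_start, ray_end)]
    length = math.sqrt(sum(d * d for d in direction))
    n_steps = max(1, int(length / step_size))
    step = [d / n_steps for d in direction]      # one increment per sample
    return step, n_steps

# March the full cube diagonal: length = sqrt(3) ~ 1.732 -> 17 steps of 0.1
step, n = ray_setup((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), step_size=0.1)
```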

Have a look at this thread on raycasting, lobbel:


It doesn’t talk about the OpenGL setup, but the last lot of vertex/fragment shader code worked for me, in the end. You’d need to swap the surface code for lookups into a 3D texture, but the rest of the code should work without too much tweaking.

I should add that the shader runs on a backface-culled cube (the front faces), using the back faces of the cube, rendered to a texture, as the RayEnd input.

Hope that helps a bit.