 I’m trying to make a shader to calculate a 3D gradient vector for a set of 3 images from a continuous video stream. The idea is to represent the video frames as a 3D volume using raycasting, as in this example:
http://vimeo.com/8096416

The idea is that in the next version, this shader will be used to calculate a normal for lighting purposes. The normal’s XYZ will be written into the RGB channels of each Z-axis slice of the texture that’s input to the raycast shader, with intensity in the A channel.

In the code below, I’ve used 3 texture inputs, from the previous, current and next frames. I’m attempting to calculate the normal’s X and Y values by subtracting neighbouring texels along the X and Y axes in the current frame’s texture, and the Z value by doing the same with the values at the current coordinates in the previous and next frames’ textures. Here’s the code:

```
uniform sampler2D PreviousFrame, CurrentFrame, NextFrame;
const float texel = 0.01;

void main()
{
    vec3 s0, s1, norm;

    // Sample either side of the current texel on each axis:
    // X and Y from the current frame, Z from the previous/next frames
    s0.x = texture2D(CurrentFrame,  gl_TexCoord[0].xy + vec2(-texel,  0.0)).r;
    s0.y = texture2D(CurrentFrame,  gl_TexCoord[0].xy + vec2( 0.0, -texel)).r;
    s0.z = texture2D(PreviousFrame, gl_TexCoord[0].xy).r;

    s1.x = texture2D(CurrentFrame,  gl_TexCoord[0].xy + vec2( texel,  0.0)).r;
    s1.y = texture2D(CurrentFrame,  gl_TexCoord[0].xy + vec2( 0.0,  texel)).r;
    s1.z = texture2D(NextFrame,     gl_TexCoord[0].xy).r;

    // Central-difference gradient, normalized, then remapped from [-1, 1] to [0, 1]
    norm = normalize(s1 - s0);
    norm = 0.5 * norm + 0.5;

    // Intensity goes in the alpha channel
    float alpha = texture2D(CurrentFrame, gl_TexCoord[0].xy).r;

    gl_FragColor = vec4(norm, alpha);
}
```

The problem is, I seem to be getting a lot of black pixels in the resulting texture. Am I right in assuming this is going to be a problem when trying to use the normal to light my volume render, and if so, can anyone suggest any way to fix the problem?

Thanks a lot,

I’m wondering if it’s a divide-by-zero issue with the normalize() function. If any value is exactly zero, wouldn’t that screw up the normalization?

Anyone any ideas?

a|x

Walk back through your formulas, and send various components of dependent expressions down gl_FragColor.r. See what’s not right.
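As a minimal sketch of that kind of debugging, assuming the same inputs as the shader above: strip the shader down to one intermediate term and write it out as greyscale, so you can eyeball which stage goes wrong.

```
// Debug variant: visualize a single intermediate term as greyscale.
// Here, the raw X difference before normalize().
uniform sampler2D CurrentFrame;
const float texel = 0.01;

void main()
{
    float sx0 = texture2D(CurrentFrame, gl_TexCoord[0].xy + vec2(-texel, 0.0)).r;
    float sx1 = texture2D(CurrentFrame, gl_TexCoord[0].xy + vec2( texel, 0.0)).r;

    // Remap [-1, 1] to [0, 1] so negative differences show up too
    float d = 0.5 * (sx1 - sx0) + 0.5;
    gl_FragColor = vec4(vec3(d), 1.0);
}
```

Swap in each term (Y difference, Z difference, the normalized result) one at a time.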

Check out isinf() and isnan(). Or just dot the vector with itself and test for 0.
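For the dot-with-itself test, one possible guard, replacing the `normalize()` line in the shader above (a sketch; the fallback direction is an arbitrary choice):

```
vec3 g = s1 - s0;
// normalize() of a zero-length vector is undefined, so test the
// squared length first and fall back to a fixed direction.
vec3 norm = (dot(g, g) > 0.0) ? normalize(g) : vec3(0.0, 0.0, 1.0);
norm = 0.5 * norm + 0.5;
```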

Thank you, those sound like good suggestions. I’ll give them a go, thank you!

a|x

You’re computing the gradient there, which can indeed be the zero vector, and which should in general not be confused with the normal vector of a surface.

For a heightfield h(x, y) you find the surface normal by taking the cross product between dh/dx and dh/dy. I’m a bit at a loss at exactly which surface you’re trying to compute the normal for, could you perhaps elaborate?
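For reference, the heightfield case could be sketched like this, assuming a height texture `Height` (hypothetical name) sampled with the same texel spacing as above:

```
uniform sampler2D Height;
const float texel = 0.01;

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    float hl = texture2D(Height, uv + vec2(-texel, 0.0)).r;
    float hr = texture2D(Height, uv + vec2( texel, 0.0)).r;
    float hd = texture2D(Height, uv + vec2(0.0, -texel)).r;
    float hu = texture2D(Height, uv + vec2(0.0,  texel)).r;

    // Tangent vectors along x and y; their cross product is the surface normal
    vec3 tx = vec3(2.0 * texel, 0.0, hr - hl);
    vec3 ty = vec3(0.0, 2.0 * texel, hu - hd);
    vec3 n  = normalize(cross(tx, ty));

    gl_FragColor = vec4(0.5 * n + 0.5, 1.0);
}
```

This never degenerates the way the gradient can, because the tangents always have non-zero x/y components.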

Hi Lord crc,

it’s a bit obscure. It’s not a heightfield. Each frame above will become a z-axis slice in a rolling 3D texture buffer (well, actually a 2D texture atlas, but it works in the same way), which will be input to a simple raycast shader. I was hoping to be able to calculate gradients using another rolling buffer of three frames from a live video feed, and use these gradients in the raycast shader for simple lighting, rather than having to calculate gradients per ray-step. The idea ultimately is to represent a live video stream as a 3D surface, with per-voxel lighting. The vimeo link above shows a similar setup, but without normal calculation or lighting.

Hope this makes some kind of sense…

Right, took me a few moments, but I get your approach. You’re assuming the current point is part of a level set and compute the gradient of the point, which is of course normal to the level set.

The problem is, as you’ve noticed, that this only works for varying density functions. If you have a region with uniform density, then there is no defined normal inside that region from the above approach.

One workaround could be to store the normal from the previous iteration and use this if the current normal is undefined. Another alternative could perhaps be to use the divergence at the point as the transparency, so that points with no divergence (and hence no defined normal) would be invisible.
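Both workarounds could be sketched together like this, assuming the previous pass’s output is available as a texture `PreviousNormals` (hypothetical name), and slotting into the shader from the first post:

```
vec3 g = s1 - s0;
vec3 norm;
if (dot(g, g) > 0.0) {
    norm = 0.5 * normalize(g) + 0.5;
} else {
    // Gradient undefined (uniform density): reuse the normal
    // written on the previous iteration.
    norm = texture2D(PreviousNormals, gl_TexCoord[0].xy).rgb;
}

// Use the gradient magnitude as alpha, so regions with no
// defined normal become transparent.
float alpha = length(g);
gl_FragColor = vec4(norm, alpha);
```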