Calculate Normal From Position Texture

Hi guys- long time since I posted on here.

I’m working on a simple raycasting volume dataset renderer, and am trying to work out the best way to calculate normals for the isosurface. This

http://vimeo.com/7793893

clip made me think that I could perhaps use a variation of the same method to apply lighting to the raycast surface as a post-process. My raycaster so far uses the technique outlined by Peter Trier in this post on his blog (but rewritten in GLSL, obviously)

http://cg.alexandra.dk/2009/04/28/gpu-raycasting-tutorial/

– which I understand is a fairly standard way of doing it. My idea was, rather than simply accumulating colour and opacity as Peter’s setup does, to render the position of the first intersection point of each ray with a given isovalue into a texture (encoded into the R, G and B channels), then calculate the normal and apply lighting in a post-processing 2D shader.
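In case it helps to make the idea concrete, here’s a minimal sketch of what that first pass might look like. All the names (VolumeData, IsoValue, StepSize, rayStart, rayDir) are illustrative assumptions, not from Peter Trier’s shader:

```glsl
// First pass sketch: march the ray and write the first isosurface
// hit position into the colour output. Uniform/varying names here
// are illustrative, not from the original tutorial shader.
uniform sampler3D VolumeData;
uniform float IsoValue;
uniform float StepSize;

varying vec3 rayStart;   // entry point into the volume, in texture space
varying vec3 rayDir;     // normalised ray direction

void main()
{
    vec3 pos = rayStart;
    for (int i = 0; i < 512; i++) {
        if (texture3D(VolumeData, pos).a >= IsoValue) {
            // Encode the hit position (already in 0..1 texture space)
            // into RGB; alpha = 1.0 flags a valid hit.
            gl_FragColor = vec4(pos, 1.0);
            return;
        }
        pos += rayDir * StepSize;
    }
    gl_FragColor = vec4(0.0); // no hit: alpha 0 lets the second pass skip this pixel
}
```

The second pass would then read this texture back, reconstruct a normal per pixel, and do the lighting there.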

Some questions:

  1. Does this sound like a good idea in principle, or would I be better off doing the lighting in the raycasting shader itself?

  2. Would I have to do any kind of transformation on the ray-position before writing the values to the texture? I’m not planning to combine the volume render with any other 3D objects, so it’s not important that light-positions etc. match to an existing scene.
    I’d ideally like to be able to rotate the rendering while the light remains in a static location, but I can potentially recalculate the light’s position based on rotation values outside the shader, if this will be more efficient than applying transformations per-pixel in the shader.

  3. I know this is reallllly simple maths, but can someone tell me how to extract a usable normal from a ‘position texture’ like the one discussed above?

Cheers guys,

a|x
http://machinesdontcare.wordpress.com

An isosurface is defined as f(x,y,z) = constant, so the normal to that surface is the gradient of f(x,y,z), which is the direction of maximum ‘flow’ (if f(x,y,z) represents the ‘fluid density’, that is). So, in theory, a way to extract a normal is to calculate grad(f), which is simply (∂f/∂x)i + (∂f/∂y)j + (∂f/∂z)k. The derivatives can be calculated by taking discrete samples from the 3D texture (which acts as a sampler for the f function, obviously), and interpolating with some algorithm – linear, cubic or other. “Numerical Recipes in C” has some sections on derivative reconstruction by sampling discrete points in a lattice (or, in 3D-graphics parlance, a 3D texture). Good luck.
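Concretely, the usual discrete approximation is a central difference with grid spacing h (typically one voxel):

```latex
\frac{\partial f}{\partial x}(x,y,z) \;\approx\; \frac{f(x+h,\,y,\,z) - f(x-h,\,y,\,z)}{2h}
```

and similarly for the y and z components. Since the resulting gradient vector is normalised afterwards anyway, the constant 1/(2h) factor can be dropped in practice.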

Thanks for getting back to me, Y-tension.

So you’re suggesting it’s easier to do the normal-estimation in the raycast shader, then?

I’ve found some code in this raycast shader to calculate normals in the way you describe (I think)

// Compute the normal around the current voxel by central differences
// on the intensity stored in the 3D texture's alpha channel
vec3 getNormal(vec3 at)
{
    vec3 n = vec3(texture3D(VolumeData, at - vec3(cellSize, 0.0, 0.0)).w - texture3D(VolumeData, at + vec3(cellSize, 0.0, 0.0)).w,
                  texture3D(VolumeData, at - vec3(0.0, cellSize, 0.0)).w - texture3D(VolumeData, at + vec3(0.0, cellSize, 0.0)).w,
                  texture3D(VolumeData, at - vec3(0.0, 0.0, cellSize)).w - texture3D(VolumeData, at + vec3(0.0, 0.0, cellSize)).w);

    return normalize(n);
}

http://www.siafoo.net/snippet/140

It assumes the intensity data (from a CT scan, for example) is stored in the 3D texture’s alpha channel, I think.
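For what it’s worth, once getNormal() returns a usable normal, doing the lighting directly in the raycast shader is only a couple of lines. A sketch, assuming a hypothetical LightDir uniform expressed in the same space as the volume:

```glsl
// Sketch: simple Lambertian (diffuse) shading at the isosurface hit
// point 'pos', using getNormal() from the snippet above. LightDir and
// BaseColour are illustrative uniform names, not from the original shader.
uniform vec3 LightDir;     // direction *towards* the light
uniform vec3 BaseColour;

vec3 shade(vec3 pos)
{
    vec3 n = getNormal(pos);
    float diffuse = max(dot(n, normalize(LightDir)), 0.0);
    return BaseColour * (0.2 + 0.8 * diffuse);  // small ambient term
}
```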

I’m not sure why I thought it would be a good idea to defer lighting to a second pass. I think I assumed that would give me more potential options later on, like implementing some kind of DoF effect, for example. I’d still love to know how to generate a normal as a post-process, as described above. I’ve seen some pseudocode for it somewhere, and I know it was very simple, and based on, I think, 4 texture lookups per-pixel.

Thanks again,

a|x

The shader looks fine for a linear interpolation algorithm.

Actually you can do just about anything, if you have your normals calculated. For DoF you just need a second pass and the depth information available somehow (a texture, or even the depth buffer itself on DirectX 10-class hardware).

Normal as a post-process is simple if you have depth information available. It is the vector perpendicular (cross product!) to the two screen-space tangents (1, 0, ∂z/∂x) and (0, 1, ∂z/∂y), where z is depth and x, y are your screen-space coordinates. I think GLSL has derivative instructions (dFdx/dFdy) to calculate screen-space derivatives of a value, although I’m not sure about depth itself.
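Something like this as a 2D post-process shader – a sketch with illustrative names (DepthTex, TexelSize), reconstructing the normal from four neighbouring depth-texture samples, which matches the “4 texture lookups per-pixel” pseudocode mentioned above:

```glsl
// Post-process sketch: reconstruct a screen-space normal from a depth
// texture using four neighbouring samples (illustrative uniform names).
uniform sampler2D DepthTex;
uniform vec2 TexelSize;    // 1.0 / texture resolution

void main()
{
    vec2 uv = gl_TexCoord[0].st;

    float zl = texture2D(DepthTex, uv - vec2(TexelSize.x, 0.0)).r;
    float zr = texture2D(DepthTex, uv + vec2(TexelSize.x, 0.0)).r;
    float zd = texture2D(DepthTex, uv - vec2(0.0, TexelSize.y)).r;
    float zu = texture2D(DepthTex, uv + vec2(0.0, TexelSize.y)).r;

    // Cross product of tangents (1, 0, dz/dx) and (0, 1, dz/dy) is
    // (-dz/dx, -dz/dy, 1), up to scale - hence the ordering below.
    vec3 n = normalize(vec3(zl - zr, zd - zu, 2.0 * TexelSize.x));

    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0); // pack into 0..1 for display
}
```

The same scheme works on the position texture discussed earlier: sample the four neighbours, form the two difference vectors, and cross them.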