Hi guys - long time since I posted on here.
I’m working on a simple raycasting volume dataset renderer, and am trying to work out the best way to calculate normals for the isosurface. This
clip made me think that I could perhaps use a variation of the same method to apply lighting to the raycast surface as a post-process. My raycaster so far uses the technique outlined by Peter Trier in this post on his blog (but rewritten in GLSL, obviously)
- which I understand is a fairly standard way of doing it. My idea was, rather than simply accumulating colour and opacity as Peter’s setup does, to render the position of the first intersection of each ray with a given isovalue into a texture (encoded in the R, G and B channels), then calculate the normal and apply lighting in a post-processing 2D shader.
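In case it helps to clarify what I mean, here’s a rough sketch of the first-hit pass I have in mind (a sketch only - names like volumeTex, isoValue and stepSize are just mine, and frontFaceTex/backFaceTex are the colour-coded bounding-cube coordinates from Peter Trier’s setup):

```glsl
uniform sampler3D volumeTex;     // the volume dataset
uniform sampler2D frontFaceTex;  // ray entry points in 0-1 volume coords
uniform sampler2D backFaceTex;   // ray exit points in 0-1 volume coords
uniform float isoValue;
uniform float stepSize;          // e.g. 1.0 / 256.0

varying vec2 texCoord;           // full-screen quad coords

void main()
{
    vec3 rayStart = texture2D(frontFaceTex, texCoord).rgb;
    vec3 rayEnd   = texture2D(backFaceTex,  texCoord).rgb;
    float rayLen  = length(rayEnd - rayStart);
    if (rayLen <= 0.0) discard;  // background pixel, ray misses the volume

    vec3 stepVec = normalize(rayEnd - rayStart) * stepSize;
    vec3 pos     = rayStart;
    vec4 hit     = vec4(0.0);    // alpha 0 = never crossed the isovalue

    for (int i = 0; i < 512; i++) {
        if (texture3D(volumeTex, pos).r >= isoValue) {
            hit = vec4(pos, 1.0);   // write the hit position itself
            break;
        }
        pos += stepVec;
        if (float(i) * stepSize > rayLen) break;
    }
    gl_FragColor = hit;
}
```

Since the hit positions are in 0-1 volume coordinates they’d just about fit in an 8-bit RGB target, but I’m assuming I’d want a float FBO to avoid banding once normals are derived from them.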
Does this sound like a good idea in principle, or would I be better off doing the lighting in the raycasting shader itself?
Would I have to do any kind of transformation on the ray position before writing the values to the texture? I’m not planning to combine the volume render with any other 3D objects, so it’s not important that light positions etc. match an existing scene.
I’d ideally like to be able to rotate the rendering while the light remains in a static location, but I could potentially recalculate the light’s position based on the rotation values outside the shader, if that would be more efficient than applying the transformation per-pixel in the shader.
I know this is really simple maths, but can someone tell me how to extract a usable normal from a ‘position texture’ like the one discussed above?
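Here’s my rough guess at it, crossing the differences between neighbouring hit positions (again just a sketch - posTex, texelSize and lightPos are placeholder names, and lightPos assumes the CPU-side transform into volume space I mentioned above). Is something like this on the right track?

```glsl
uniform sampler2D posTex;    // RGB = first-hit position, A = hit flag
uniform vec2 texelSize;      // 1.0 / texture dimensions
uniform vec3 lightPos;       // light transformed into volume space on the CPU

varying vec2 texCoord;

void main()
{
    vec4 centre = texture2D(posTex, texCoord);
    if (centre.a < 0.5) discard;   // this ray never hit the isosurface

    vec3 p      = centre.rgb;
    vec3 pRight = texture2D(posTex, texCoord + vec2(texelSize.x, 0.0)).rgb;
    vec3 pUp    = texture2D(posTex, texCoord + vec2(0.0, texelSize.y)).rgb;

    // two rough tangents along the surface; their cross product is the
    // normal (might need negating, depending on handedness)
    vec3 normal = normalize(cross(pRight - p, pUp - p));

    // simple diffuse term as a sanity check
    float diffuse = max(dot(normal, normalize(lightPos - p)), 0.0);
    gl_FragColor  = vec4(vec3(diffuse), 1.0);
}
```

I’m guessing this goes wrong at silhouette edges where a neighbouring pixel missed the surface entirely - presumably checking the neighbours’ hit flags (the alpha channel) or switching to central differences would patch that up?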