I have this problem I can’t seem to solve.
I’m coding a volume renderer based on the article by Krüger and Westermann, “Acceleration Techniques for GPU-Based Volume Rendering”.
My implementation of the algorithm uses OpenGL and GLSL rather than Direct3D and HLSL.
A 3D texture stores the density field together with its normalised gradient; the gradient’s coordinates are expressed in local normalised voxel space.
The proxy geometry is a cube, and each vertex of the cube is coloured according to its coordinates in voxel space: RGB encodes the XYZ coordinates in local voxel space. The first vertex is black, meaning it lies at the origin of voxel space; its diagonally opposite vertex in the cube is pure white, meaning it lies at the point (1,1,1) in normalised voxel space.
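In case it helps, here is a minimal sketch of that vertex shader in legacy GLSL; the only assumption is that the cube’s object-space coordinates already span [0,1]^3, and the varying name is my own:

```glsl
// Vertex shader for the colour-coded proxy cube (legacy GLSL).
// Cube vertices are assumed to lie in [0,1]^3 object space,
// so position doubles directly as voxel-space colour.
varying vec3 voxelCoord;  // XYZ position in normalised voxel space

void main()
{
    voxelCoord    = gl_Vertex.xyz;             // RGB = XYZ in voxel space
    gl_FrontColor = vec4(voxelCoord, 1.0);     // black at origin, white at (1,1,1)
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```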
The coordinates of the rays through the volume are generated by subtracting the image of the front faces of the cube from the image of the back faces of the cube.
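The subtraction step looks roughly like this in the fragment shader (the sampler and uniform names are assumptions, not from the paper):

```glsl
// Fragment shader computing per-pixel ray entry point and direction.
// frontFaces / backFaces are the colour-coded cube renders described above;
// both store voxel-space positions in RGB.
uniform sampler2D frontFaces;
uniform sampler2D backFaces;
uniform vec2 viewportSize;

void main()
{
    vec2 uv    = gl_FragCoord.xy / viewportSize;
    vec3 entry = texture2D(frontFaces, uv).rgb;  // ray start in voxel space
    vec3 exit  = texture2D(backFaces,  uv).rgb;  // ray end in voxel space
    vec3 dir   = exit - entry;                   // un-normalised ray through the volume
    gl_FragColor = vec4(dir, length(dir));       // direction plus length for stepping
}
```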
The geometry sent through the pipeline at each iteration of the ray-casting loop is just the front faces of the proxy cube, and each pass triggers a few texture fetches and alpha blending along the ray.
Now, if I want to perform volumetric lighting computations, the gradient of the 3D density field yields a normal vector in local voxel space, but I also need the light and eye vectors in that same space. I can’t work out how to do this in the vertex shader, because the positions of the proxy-cube vertices in world or eye space are not directly related to the coordinates of those same vertices in local voxel space.
For the moment I’m passing the light vector in local voxel space directly to the shader as a uniform, but that vector is view-independent.
I need to convert the eye vector, normally obtained as -normalize(gl_ModelViewMatrix * gl_Vertex), to local voxel space, which involves a few additional rotations and translations.
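For reference, this is roughly what I have now in the vertex shader; it gives me the eye vector in eye space, which is the wrong space for the lighting (varying and uniform names are my own):

```glsl
// Current state (legacy GLSL): eye vector computed in *eye* space.
// The normal from the gradient lives in voxel space, so these don't match yet.
varying vec3 eyeVec;
uniform vec3 lightDirVoxel;  // light direction passed in voxel space, view-independent

void main()
{
    vec4 posEye = gl_ModelViewMatrix * gl_Vertex;  // vertex position in eye space
    eyeVec      = -normalize(posEye.xyz);          // direction from vertex towards the eye
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```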
Any ideas? Remember, this is volumetric lighting, not traditional surface lighting.