Problems with Phong lighting

I have some problems with local illumination in volume rendering.
For the ray setup I draw a color-coded cube twice: first with back-face culling and then with front-face culling.

The colors represent the entry and exit points. In a shader I build the rays and traverse through the volume (a 3D texture).
For lighting I calculate the gradients (normals) with the central-difference method. The gradients are in the volume's object space. The cube's dimensions go from (0,0,0) to (1,1,1), so the gradients, which are in the volume's object space, are
within a normalized range of [0,1]. For Phong lighting I also need the light vector and the viewing vector. I want the light source to be fixed to the camera, so the light vector and view vector should be the same.
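For reference, a central-difference gradient in a fragment shader might look like the sketch below (the uniform names uVolume and uVoxelSize are assumptions, not from the original post):

```glsl
uniform sampler3D uVolume;   // assumed name: the density volume
uniform vec3 uVoxelSize;     // assumed name: 1.0 / volume resolution per axis

// Central-difference gradient at pos (texture coordinates in [0,1]^3).
vec3 gradient(vec3 pos)
{
    float dx = texture3D(uVolume, pos + vec3(uVoxelSize.x, 0.0, 0.0)).r
             - texture3D(uVolume, pos - vec3(uVoxelSize.x, 0.0, 0.0)).r;
    float dy = texture3D(uVolume, pos + vec3(0.0, uVoxelSize.y, 0.0)).r
             - texture3D(uVolume, pos - vec3(0.0, uVoxelSize.y, 0.0)).r;
    float dz = texture3D(uVolume, pos + vec3(0.0, 0.0, uVoxelSize.z)).r
             - texture3D(uVolume, pos - vec3(0.0, 0.0, uVoxelSize.z)).r;
    return vec3(dx, dy, dz); // normalize() this before using it as a normal
}
```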

I thought I could use the ray's direction as my view/light vector, but the lighting doesn't work properly: there is almost
no shading.

Here are some pictures. In the second picture you can see a small specular highlight, and in the last you can see more specular lighting, as it should be. In the first two pictures I used the ray direction computed in the shader as the
view and light direction.

vec3 phongShading(in vec3 dir, in vec3 pos, in vec3 normal, vec3 ka, vec3 kd, vec3 ks)
{
    vec3 N = normalize(normal);
    vec3 L = dir;
    vec3 V = normalize(dir);
    // ...

In the last picture I passed the camera position into the shader as both the camera position and the light position, and then in the lighting shader I calculated the view and light direction with

vec3 phongShading(in vec3 pos, in vec3 normal, vec3 ka, vec3 kd, vec3 ks)
{
    vec3 N = normalize(normal);
    vec3 L = lightPos - pos;
    vec3 V = normalize(cameraPos - pos);
    // ...

Previously I used local lighting in texture-sliced volume rendering and it worked fine. Now I'm implementing local lighting in raycasting. It's actually not that hard, but there is something I mixed up or did wrong.

Can someone give me a hint about what I may have done wrong or missed?


My suggestion is simple: do everything in eye-space. In eye-space the camera is at the origin and you are looking down the negative Z-axis. You say your light source is co-located with the camera, so it is then also at the origin in eye-space.
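As a sketch of what that looks like (the function name and shininess parameter are my own, not from the thread), Phong with the light at the eye collapses nicely, because the light vector and view vector coincide:

```glsl
// Sketch: Phong shading in eye-space with the light at the camera.
// pos and normal are assumed to already be in eye-space.
vec3 phongEyeSpace(vec3 pos, vec3 normal, vec3 ka, vec3 kd, vec3 ks, float shininess)
{
    vec3 N = normalize(normal);
    vec3 V = normalize(-pos); // from the surface point to the camera at the origin
    vec3 L = V;               // light co-located with the camera
    vec3 R = reflect(-L, N);  // reflection of the light about the normal
    float diff = max(dot(N, L), 0.0);
    float spec = pow(max(dot(R, V), 0.0), shininess);
    return ka + kd * diff + ks * spec;
}
```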

Transforming normals from world-space to eye-space involves applying the normal matrix. If you are using an older version of OpenGL this matrix is available to you as the built-in gl_NormalMatrix in both vertex and fragment shaders.

Another note: when you compute directions in the vertex shader that get interpolated and passed to the fragment shader, it is a good idea to normalize the directions again, since interpolation does not preserve vector length.
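In a fragment shader that might look like this (varying names are assumptions for illustration):

```glsl
// Interpolated directions shrink toward the interior of a triangle,
// so renormalize them before lighting.
varying vec3 vNormal;    // assumed varying names
varying vec3 vLightDir;

void main()
{
    vec3 N = normalize(vNormal);
    vec3 L = normalize(vLightDir);
    // ... lighting computed with N and L ...
}
```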

Finally, make sure all your directions point the "right" way. Sometimes flipping a direction is required in order to obtain the correct vector.

Good luck, you are probably very close to solving it!

Hello, thanks for the answer. But this is volume rendering:
there are no polygons, and the picture is created by raycasting.
I get the normals in the fragment shader by applying central differences to the values stored in a 3D texture.
The gradients (normals) are in object space, and that's what
I'm confused about when doing the lighting: the normals are in object space, and I wonder whether the directions are also in object space?

As you can see here

The cube's vertices are color-coded. Subtracting the back faces from the front faces gives the ray directions, because the
RGB values encode the positions.
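That subtraction step might look roughly like this in the raycasting fragment shader (the sampler and variable names here are assumptions):

```glsl
uniform sampler2D uFrontFaces; // assumed name: color-coded front faces
uniform sampler2D uBackFaces;  // assumed name: color-coded back faces

// texCoord is the screen-space texture coordinate of this fragment.
vec3 entry  = texture2D(uFrontFaces, texCoord).rgb; // ray start in [0,1]^3
vec3 exit   = texture2D(uBackFaces,  texCoord).rgb; // ray end   in [0,1]^3
vec3 rayDir = normalize(exit - entry);              // object-space ray direction
```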

Moving the camera also moves the cube, so could it be that the
directions are in eye space? The lighting itself works, but
it seems to combine vectors from different spaces.


Hello thinks, thanks! :smiley:

I forgot to normalize the gradient and to invert the view direction. Now it works.
Another question: now that the light source is co-located with the
camera, how can I make the light always point at the object?
When I look at the object I can see the specular highlight, but
when I move the object left or right the spot moves in the opposite direction. How can I fix this?
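For anyone reading along, the fix described above would amount to something like this sketch of the original function (only the marked lines differ from the first snippet):

```glsl
vec3 phongShading(in vec3 dir, in vec3 pos, in vec3 normal, vec3 ka, vec3 kd, vec3 ks)
{
    vec3 N = normalize(normal); // gradient must be normalized
    vec3 V = normalize(-dir);   // view vector points against the ray direction
    vec3 L = V;                 // light co-located with the camera
    // ... diffuse and specular terms as before ...
}
```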


The light direction to a certain pixel is the same as the eye direction; you will need to compute these per pixel. How to do this depends a lot on how you are doing other things, but if you compute one you get the other, since they are the same in your case.
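Concretely, at each sample along the ray this could be as little as the following (uCameraPos is an assumed uniform holding the camera position in the same space as the sample position):

```glsl
uniform vec3 uCameraPos; // assumed: camera position, same space as pos

// pos is the current sample position along the ray.
vec3 V = normalize(uCameraPos - pos); // per-sample eye direction
vec3 L = V;                           // identical light direction (headlight)
```

With per-sample directions like these, the specular spot stays on the object instead of sliding when the object moves, because the light vector is recomputed for every shaded point rather than taken as one global direction.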