Calculating per-vertex normals for a mesh


I am trying to experiment with GLSL lighting, in particular with implementing the Phong lighting algorithm. I am not sure whether my shader implementation is wrong, but first I want to verify that the normals sent to the shader are correct.

My code takes a mesh of vertices and indices and calculates the normals in the following way, which I pieced together from various websites:

1. Iterate over the triangles and calculate each triangle's surface normal.
2. For each vertex, sum the surface normals of all the triangles it belongs to, counting how many that is.
3. Divide the sum by that count to get the vertex's average normal.
4. Normalize it to a unit vector.
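The steps above can be sketched as follows. This is a minimal, illustrative version, not the poster's actual code: the `Vec3` type and the `computeVertexNormals` name are my own, and triangles are assumed to be indexed in groups of three.

```cpp
#include <cmath>
#include <initializer_list>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& verts,
                                       const std::vector<unsigned>& indices)
{
    std::vector<Vec3> normals(verts.size(), Vec3{0, 0, 0});

    // Steps 1-2: accumulate each triangle's face normal onto its three vertices.
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        unsigned a = indices[i], b = indices[i + 1], c = indices[i + 2];
        Vec3 n = cross(sub(verts[b], verts[a]), sub(verts[c], verts[a]));
        for (unsigned v : {a, b, c}) {
            normals[v].x += n.x; normals[v].y += n.y; normals[v].z += n.z;
        }
    }

    // Steps 3-4 collapse into one: normalizing the sum gives the same unit
    // vector as normalizing the average, so no division by the count is needed.
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```

For a flat mesh (e.g. a quad split into two coplanar triangles) every vertex normal comes out equal to the face normal, which is an easy sanity check.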

The above does not seem to be working. It is quite possible that there is an error in the code, and if so I will post it too, but I figured I should first ask whether the logic is correct.

The result is some kind of strange lighting which makes the objects look transparent. I tested it with a simple rectangle, and below you can observe the results when looking at it from the front. The light source is located above the eye position (at the viewer's head).

Here is another example, the monkey model from Blender, the open-source 3D modeler.

In both cases the problem is that the objects look transparent from certain viewing angles. Does anyone have any idea what the problem might be here?

You can merge steps 3 and 4 into a single normalization of the sum, since normalizing the average gives the same result as normalizing the sum. Other than that, the basic idea should work.

To debug this and see whether your triangle normals are right, you could copy each triangle's normal to its three corners; however, you would then also need to duplicate each vertex for every triangle that uses it.

Do you have depth testing enabled? Is your back face culling setup correctly?

Now that you mention it, tskuoran, it seems that when switching from fixed functionality to shaders I omitted depth testing and back-face culling. I thought I had them on.

How do you accomplish this when working with shaders and OpenGL 3.0?

The code below is fixed functionality and produces a white screen now if I try to input it into my code.


    glClearDepth(1.0f);

There is nothing there which produces white. The error is elsewhere in your code.

I am sorry, I should have mentioned that white is the color I clear the background to, as can be seen in the above screenshots. So basically if I add the code above the objects do not get drawn.

I will have to check my code as to why and I suspect (and hope) that this is where I will find the problem in the original question too.
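For reference, depth testing and back-face culling are not part of the deprecated fixed-function shading pipeline; they are plain state calls that remain valid with shaders in OpenGL 3.0. A minimal sketch of the usual setup (defaults shown explicitly for clarity):

```cpp
// Depth testing and face culling are server state, not fixed-function
// shading, so these calls still work in shader-based OpenGL 3.0 code.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);    // default comparison

glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);     // default: cull back faces
glFrontFace(GL_CCW);     // default: counter-clockwise winding is front-facing

// Each frame, clear the depth buffer along with the color buffer:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
```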

I found some mistakes in my implementation of the Phong lighting shader, so I switched to a much simpler shader which showcases the problem better.

As for depth testing, I had mismatched values in glDepthRange() and in the gluPerspective zNear and zFar, which is why it seemed that depth testing was culling everything.

Now, having corrected that, the problem persists: some primitives of the object are drawn over parts they should be hidden behind. Does anyone have any idea why that is? Here are some additional screenshots that might help clarify:

Monkey from good angle:

Monkey from a bad angle:

Teapot from good angle:

Teapot from a bad angle:

Does anyone have any pointers for me? After correcting the shader it is now evident that the normals are not at fault; something else must be the cause. What could that be?

Looks like you do not have a working depth test, so basically you end up with newly drawn polygons overwriting previously drawn ones.

Do you have a depth buffer? Show us your context creation code (or if you’re using something like GLFW or FreeGLUT, show us what you’re using to initialize them).

Thank you for your replies. Yes, it has to be the depth test, but as far as I can see I am setting it up correctly:

    //back face culling

    //depth testing

As for the context creation code Reinhart requested, it is made by a system I am working on, and since I am currently working on Windows, the code at the lowest level makes WinAPI calls.

So the main part of the code you would be interested in is the PIXELFORMATDESCRIPTOR. It's basically taken from NeHe's lessons:

	static PIXELFORMATDESCRIPTOR pfd =
	{
		sizeof(PIXELFORMATDESCRIPTOR),				// Size Of This Pixel Format Descriptor
		1,											// Version Number
		PFD_DRAW_TO_WINDOW |						// Format Must Support Window
		PFD_SUPPORT_OPENGL |						// Format Must Support OpenGL
		PFD_DOUBLEBUFFER,							// Must Support Double Buffering
		PFD_TYPE_RGBA,								// Request An RGBA Format
		32,											// Select Our Color Depth
		0, 0, 0, 0, 0, 0,							// Color Bits Ignored
		0,											// No Alpha Buffer
		0,											// Shift Bit Ignored
		0,											// No Accumulation Buffer
		0, 0, 0, 0,									// Accumulation Bits Ignored
		16,											// 16Bit Z-Buffer (Depth Buffer)
		0,											// No Stencil Buffer
		0,											// No Auxiliary Buffer
		PFD_MAIN_PLANE,								// Main Drawing Layer
		0,											// Reserved
		0, 0, 0										// Layer Masks Ignored
	};

I also have multisampling support, so at some point I recreate the window with a similar pixel format descriptor for multisampling, which still has the same depth buffer size of 16 bits.

I think the problem is that I do not understand how depth testing actually works. I believe I set it up correctly, but my values are wrong because I do not understand them. What exactly does glDepthRange do?

My gluPerspective call is like this, with e being the model currently being drawn:


and that is why my glDepthRange call uses the same zNear and zFar:


But to be honest, that is pretty arbitrary; I am guessing at how it might work. Is the mistake here? How exactly does depth testing work? Any links to an article that explains it sufficiently, including how to set up the right parameters?

I was also wondering: is there any way to avoid doing this with calls from the main program and instead somehow move it into a shader?

Thanks for taking the time to reply.

Maybe the problem is not with depth testing but with the order of vertices in the triangles.

Have you tried turning back-face culling off or changing the direction from CCW to CW (or vice-versa)?
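Back-face culling decides front versus back from the winding of the triangle as projected on screen: with the default glFrontFace(GL_CCW), a triangle whose window-space corners run counter-clockwise (positive signed area, with y pointing up as in OpenGL window coordinates) is front-facing, and the rest are discarded when culling is on. A small illustrative model of that test (the function name is mine):

```cpp
// Signed area of a 2D triangle in window space (y up, as in OpenGL).
// Positive  -> counter-clockwise winding -> front-facing under GL_CCW.
// Negative  -> clockwise winding         -> culled when GL_CULL_FACE is on.
float signedArea(float ax, float ay, float bx, float by, float cx, float cy)
{
    return 0.5f * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}
```

This is why flipping the winding with glFrontFace (or exporting a model whose triangles wind the other way) makes the "wrong" set of faces disappear.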

DepthRange should be 0.0 to 1.0, not 1 to 100.

That is possible. All the models I use come from Blender; I am reading the vertices and indices from the .3ds format.

I tried turning off back-face culling, and this is what I get:
torus no backface culling

monkey no back-face culling

This is quite strange. Does it point to anything?

As for switching to CW drawing order this is what happens:
monkey with CW order

torus with CW order

HA! You found it! That was the culprit. And if I understand correctly, (0.0, 1.0) is also the default value, so if I had never called that function none of this would have happened.

I was under the misconception that you needed to pass the same zNear and zFar there as in gluPerspective. Thanks a lot again! Do you have any articles where I can read up on how the depth buffer works? If I had known, I would have avoided this problem.

The OpenGL specification itself is "the" reference. Alternatively, a quick Google search turns up these:

Your glDepthRange() values got clamped, so the call effectively became glDepthRange(1, 1), which maps all depth values to 1.0. Typically this is not what you want, although it can be used to selectively reset parts of the depth buffer to the far plane, effectively doing a partial clear.
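The clamping can be shown with a small model of the mapping. glDepthRange(n, f) clamps both parameters to [0, 1] and then maps normalized-device-coordinate depth z in [-1, 1] to window depth zw = n + (f - n) * (z + 1) / 2; the function below (an illustrative sketch, not actual GL code) mimics that:

```cpp
#include <algorithm>

// Model of glDepthRange(n, f): both arguments are clamped to [0, 1],
// then NDC depth z in [-1, 1] maps to window depth n + (f - n) * (z + 1) / 2.
float windowDepth(float zNdc, float n, float f)
{
    n = std::min(std::max(n, 0.0f), 1.0f);   // glDepthRange clamps its args
    f = std::min(std::max(f, 0.0f), 1.0f);
    return n + (f - n) * (zNdc + 1.0f) * 0.5f;
}
```

With glDepthRange(1, 100) clamped to (1, 1), every fragment gets window depth 1.0 regardless of its distance, so the depth test can never distinguish near from far, which matches the symptoms in the screenshots.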

Thanks for the links and the explanation. They will help me avoid similar mistakes in the future, I suppose.

Do you have any article/s where I can read and understand better about how depth buffer works?

Yes, I do. I even have a note in there about not confusing the range zNear/zFar with the perspective matrix’s zNear/zFar.

If you are using FBOs, check whether you have attached both the color and depth attachments... Failing to do so will cause some strange Z-fighting...