Surface Normal Function - Not really sure...

Hey, I’m new to these forums

I have a problem with my code and I decided to ask the pros about this (my uni lecturer simply doesn’t understand what I want lol).

Anyway, I got a fragment of code to work from as it was the most understandable and “usable” piece of code I could find on how to create surface normal vectors. Every site I’ve come across has ridiculous algebra which I simply don’t understand. It would be great if someone could adapt the following piece of code so it at least looks like it would work, or explain how to create normals and normalise them.

Firstly, a little explanation. The object I am trying to create is made through a vertex array. I am unsure how to (or even if I can) put normal vectors into the array. I got this far and now I am stumped as to where to go or how to adapt this code. The normal part is a section of code I got off the Internet, and I don’t know if it could be adapted to suit my project.

Any help would be great :smiley:


// Code adapted from ""
// Credit goes to Ben Woodhouse

void getFaceNormal(GLfloat *norm,float pointa[3],float pointb[3],float pointc[3])
{
	float vect[2][3];
	int a,b;
	float point[3][3];

	for (a=0;a<3;++a)				//copies points into point[][]
	{
		point[0][a]=pointa[a];
		point[1][a]=pointb[a];
		point[2][a]=pointc[a];
	}

	for (a=0;a<2;++a)				//calculates vectors from point[0] to point[1]
	{								//and point[0] to point[2]
		for (b=0;b<3;++b)
			vect[a][b]=point[a+1][b]-point[0][b];
	}

	crossProduct(norm,vect[0],vect[1]);				//calculates vector at 90° to the 2 vectors
	normalize(norm);								//makes the vector length 1
}

void crossProduct(float *c,float a[3], float b[3])	//finds the cross product of two vectors
{
	c[0]=a[1]*b[2] - b[1]*a[2];
	c[1]=a[2]*b[0] - b[2]*a[0];
	c[2]=a[0]*b[1] - b[0]*a[1];
}

void normalize(float * vect)		//scales a vector to a length of 1
{
	float length;
	int a;

	length=sqrt(vect[0]*vect[0] + vect[1]*vect[1] + vect[2]*vect[2]);	//A^2 + B^2 + C^2 = length^2

	for (a=0;a<3;++a)				//divides vector by its length to normalise
		vect[a]/=length;
}

// ******** SCENE ITEMS ******************************

void girderBoxNormalTest()
{
	typedef GLfloat point3[3];
	point3 vertices[8] =	{{-2.4,-0.05,-0.05},{-2.4,-0.05,0.05}, /* ...remaining six vertices... */};
	point3 normals[18];
	point3 color[8] =	{{0.0,0.0,0.0},{1.0,0.0,0.0}, /* ...remaining six colours... */};

	GLubyte cubeIndices[24]={0,1,2,3, 1,5,6,2, 6,7,4,5, 7,3,0,4, 3,2,6,7, 5,4,0,1};

	glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, cubeIndices);
}


Planar surface normals are very easy to define. Given three non-collinear points, they define a plane. Choose one of the points as the reference point. Take the cross product of the two vectors originating from the reference point to each of the other two points. That is the surface normal. Dividing any vector’s coordinates by the length of the vector normalizes it.

(Smoothly varying surface normals over a faceted surface require an averaging step at each vertex following what I described above. The code you provided does not implement smoothly varying surface normals.)

If you are using reasonably capable modern graphics hardware, you should be able to use a geometry shader to calculate the planar surface normals of all your triangles. This is vastly superior to calculating them on your CPU, storing them with your vertices, and then having to rotate them.

In a geometry shader, calculating normalized planar surface normals is as simple as:

vec3  normal;
vec3  vector1;
vec3  vector2;

vector1 = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
vector2 = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
normal  = normalize (cross (vector1, vector2));

Thanks for replying.

So if, for example, I took the first and second vertex co-ordinates here:


Vector v1 = 5,4,2
Vector v2 = 8,6,4
Vector v3 = (y1*z2 - y2*z1 , z1*x2 - z2*x1 , x1*y2 - x2*y1)
= [((4*4) - (6*2)), ((2*5) - (4*5)), ((5*6) - (5*4))]
= (4, -10, 10)

Is that right?

I’m pretty bad when it comes to maths; there’s a reason why I dropped it before A-Level haha.

Vertices and vectors aren’t the same thing.

Say you’ve got three vertices (one at each corner of a triangle), like:
vertex1 = (3,7,1)
vertex2 = (5,4,2)
vertex3 = (5,6,4)

Let’s arbitrarily choose vertex1 as our reference vertex. So, the vectors emanating from it to each of the other two vertices are:
vector1 = vertex2 - vertex1
vector2 = vertex3 - vertex1

So, using the numbers for the vertices in our example:
vector1 = (2,-3,1)
vector2 = (2,-1,3)

normal = cross (vector1, vector2)
which is:
normal = (y1z2 - y2z1 , z1x2 - z2x1 , x1y2 - x2y1)
normal = ((-3)*3 - (-1)*1, 1*2 - 3*2, 2*(-1) - 2*(-3))
normal = (-8, -4, 4)

That is the normal, but it isn’t normalized. To do that, calculate the length (sqrt (x*x + y*y + z*z) = sqrt (64+16+16) = sqrt (96)) and divide the x, y, and z of the normal vector by that number. You end up with:
normalized normal = (-8/sqrt(96), -4/sqrt(96), 4/sqrt(96))

Ohh, I see. I was wondering if mine was right or not; it would appear not lol.

I could implement this easily enough now that you’ve explained it to me, but how would I “extract” those vertex co-ordinates from my vertex arrays?

And then how would I post the new normal vectors into the normal array?

But thank you so much for helping me, you’ve explained this a lot better to me than anywhere else on the web :slight_smile:

First off, I don’t know what version of OpenGL you are using, but I can see it’s an older one. I am using OpenGL 4.0, which is fairly recent. The sample code you presented uses the GL_QUADS primitive, which has been removed in OpenGL 4.0 (or at least from the “core profile,” which just means the profile doesn’t attempt to be backwards compatible). Instead, you will typically use some type of triangle primitive.
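If you do need to move off GL_QUADS, converting a quad index list to a triangle index list is mechanical: each quad (a,b,c,d) becomes the two triangles (a,b,c) and (a,c,d). A sketch, where `quadsToTriangles` is a made-up helper name:

```c
#include <assert.h>

/* Split each quad (a,b,c,d) in the index list into two triangles,
 * (a,b,c) and (a,c,d). Returns the number of triangle indices written;
 * tris[] must have room for quadCount * 6 entries. */
static int quadsToTriangles(const unsigned char *quads, int quadCount,
                            unsigned char *tris)
{
    int t = 0;
    for (int q = 0; q < quadCount; ++q) {
        const unsigned char a = quads[q*4+0], b = quads[q*4+1],
                            c = quads[q*4+2], d = quads[q*4+3];
        tris[t++] = a; tris[t++] = b; tris[t++] = c;   /* first triangle  */
        tris[t++] = a; tris[t++] = c; tris[t++] = d;   /* second triangle */
    }
    return t;
}
```

You would then draw with GL_TRIANGLES and 36 indices instead of GL_QUADS and 24.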

I’m assuming you want to use a geometry shader to calculate the normal vectors? If so, the pipeline works like this.

First, the vertex shader runs. It is executed by the graphics pipeline on individual vertices taken for you from your vertex buffer, without any knowledge of other vertices; the main thing a vertex shader does is transform vertices from object coordinates to eye coordinates with a WorldView matrix.

Next, the pipeline collects all three transformed vertices of each triangle and sends them as a group to the geometry shader, which calculates the surface normal from them and then also transforms the vertices (but not the normal) with the Projection matrix to clip coordinates.

Finally, the pipeline rasterizes the triangles and sends the individual pixel fragments one at a time to the fragment shader, which can access the surface normal from the geometry shader to calculate the lighting.

There never is an array holding the normal vectors; they aren’t kept anywhere, but are simply passed from the geometry shader to the fragment shader as they are computed. (Strictly speaking, the pipeline interpolates the values of all the variables, including the normal vector, across each triangle when it rasterizes it. In our example the normal vector is the same at each triangle vertex, so the interpolation doesn’t change anything.)

All three shaders have to work together to make a single program. Here’s an example using OpenGL 4.0:

///////////////// vertex shader: /////////////////
#version 400 core
uniform mat4  modelview;  //from application program.
in vec3       vertex;     //from vertex buffer from app pgm.

void main (void) {
  gl_Position = modelview * vec4 (vertex, 1.0);
}

///////////////// geometry shader: /////////////////
#version 400 core
layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;
uniform mat4  projection;  //from application program.
out vec3      normal;      //to fragment shader.

void main (void) {
  vec3  vector1 = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;
  vec3  vector2 = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;
  vec3  faceNormal = normalize (cross (vector1, vector2));

  normal = faceNormal;          //output values are undefined after EmitVertex,
  gl_Position = projection * gl_in[0].gl_Position;  //so write normal each time
  EmitVertex ();

  normal = faceNormal;
  gl_Position = projection * gl_in[1].gl_Position;
  EmitVertex ();

  normal = faceNormal;
  gl_Position = projection * gl_in[2].gl_Position;
  EmitVertex ();
  EndPrimitive ();
}

///////////////// fragment shader: /////////////////
#version 400 core
in vec3   normal;  //from geometry shader.
out vec4  color;   //to framebuffer.

void main (void) {
  color = vec4 (normal, 1.0);
}

The fragment shader above doesn’t use the normal vector the way it typically would (to calculate how light illuminates the pixel fragment), but it does show how it is accessed from the geometry shader.
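For what it’s worth, the “typical” diffuse (Lambert) use of the normal is just the clamped dot product of the unit normal with the unit direction toward the light. In GLSL that is `max(dot(N, L), 0.0)`; the same math as a C sketch, where `diffuseFactor` is an illustrative name:

```c
#include <assert.h>

/* Lambert diffuse factor = max(0, N . L) for unit vectors N (the surface
 * normal) and L (the direction from the surface point toward the light). */
static float diffuseFactor(const float n[3], const float l[3])
{
    float d = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    return d > 0.0f ? d : 0.0f;   /* surfaces facing away get no light */
}
```

A fragment facing the light head-on gets factor 1; one facing directly away gets 0, and the diffuse colour is scaled by that factor.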

There’s really quite a bit to OpenGL. One of the complicating issues is that there are a number of different versions, and some of the versions are quite different from other versions. Most of the examples and tutorials on the internet (and in books, for that matter) are from older versions. Different people have different opinions about how people should learn OpenGL (I mean which version to start with), but in any case, you should go in with your eyes open knowing that there are other versions to confuse you. I don’t know what your university lecturer is teaching you and I don’t mean to interfere, but I’m only interested in more-or-less current OpenGL so that’s what I’ve shown you here. It might not fit in with the rest of what you’re getting elsewhere. Anyway, for comprehensive instruction on OpenGL, you really do need to rely on something more comprehensive than a forum like this can be, and it would be simpler for you if what you receive is consistent for one version of OpenGL.

I guess it is kind of silly that our lecturer is teaching us an old version of OpenGL, especially when the subject is focussed on computer graphics.

I don’t really know how I could use the code sample you gave me, but I saved it in a file for future reference.

Thank you so much :smiley: I’ll say again, you’ve been more help to me explaining how to calculate normals than anywhere else I have seen.