Here is the problem definition:
Suppose you have a Quad face with a different normal for each vertex. Now we cut it because of a boolean operation. How shall we compute the new vertices' normals? We are already using the face normal, but the result is very unpleasant.
Is there a way to compute the contribution of each original normal at the new vertex position and get a good guess?
The data available is, of course, the vertices' positions in 3D, the four original normal vectors, and the face normal.
Thanks in advance,
You’ll later need code that does the same for texcoords and vtx-varyings, so a more general solution might be nicer from the beginning. Barycentric coordinates? Some helping Maths can be found in the nVidia presentation of correct gl_quad interpolation via geometry-shaders.
Also, I vaguely remember some Gradients maths from university, that handles this.
No, no need for texCoords yet, they are applied automatically later.
Isn’t it the same problem as finding the pixel color of the following famous OpenGL triangle?
If I knew how to compute the pixel color here I could also guess the normal everywhere in the surface above.
Interpolating a value across a triangle is really simple:
- transform the verts so that z=0 on each (move the “camera” to look head-on onto the triangle). Now you have a 2D triangle that isn’t degenerate (unless the input was degenerate).
- split the triangle into 2 triangles; the division is a horizontal line that starts from the middle-Y vertex.
- for each of the 2 triangles (one pointing up, the other pointing down): interpolate values linearly along the edges for each “Y scanline”, then interpolate the two scanline results linearly by X value. => output result.
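The scanline recipe above is equivalent to computing barycentric coordinates directly, which also covers texcoords and other vertex varyings in one go. A minimal Python sketch (function names are mine, not from the thread; the weights come from sub-triangle areas):

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def barycentric(p, a, b, c):
    # Signed sub-triangle areas (projected onto the triangle normal)
    # give the barycentric weights; they sum to 1 for points in the plane.
    n = cross(sub(b, a), sub(c, a))       # triangle normal, |n| = 2 * area
    area2 = dot(n, n)
    wa = dot(cross(sub(c, b), sub(p, b)), n) / area2
    wb = dot(cross(sub(a, c), sub(p, c)), n) / area2
    return wa, wb, 1.0 - wa - wb

def interp_triangle(p, a, b, c, va, vb, vc):
    # Weighted sum of the three per-vertex values (normals, colors, uvs...).
    wa, wb, wc = barycentric(p, a, b, c)
    return tuple(wa*x + wb*y + wc*z for x, y, z in zip(va, vb, vc))
```

For normals, remember to renormalize the result of `interp_triangle` afterwards.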
But as nVidia’s paper shows, we can’t just split quads and N-gons into triangles and expect nice interpolations.
Unfortunately we need to work in 3D, not in screen space.
Isn’t it like drawing the color picker surface, as in the picture below:
In this case only three colors are involved but maybe it can be done also with four…
The method I gave for triangle-interp is in fact 3D. The use of “scanline” terms was just to help cross-reference with rasterizers.
The normal approach would be to use Phong normal interpolation. It essentially uses linear interpolation of the vertex normals, with the result then normalized. You can read up on it here: http://local.wasp.uwa.edu.au/~pbourke/texture_colour/interpolation/
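In code, Phong-style interpolation along one segment is just lerp-then-renormalize. A quick sketch, assuming unit-length input normals (the function name is mine):

```python
import math

def lerp_normal(n1, n2, t):
    # Linearly interpolate two unit normals, then renormalize (Phong-style).
    v = tuple((1.0 - t) * a + t * b for a, b in zip(n1, n2))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```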
Wow Lord, at first sight it looks exactly like what we were looking for.
Leave me the time to read it carefully.
Yes, it is exactly what we were looking for.
One point is, by the way, not clear to me in the suggested doc ( http://local.wasp.uwa.edu.au/~pbourke/texture_colour/interpolation/ ):
“One way is to choose a major axis by specifying a normal. The plane with this normal that passes though the point in question cuts two of the polygon edges, this is used as the extended line”
If I understand correctly, we need to choose a plane that splits two opposite Quad edges and gives two new normals at these points.
How shall we choose this plane?
I must admit I’ve only used the barycentric version of this method. The way I interpret that passage is that you pick one of the normals consistently for all points P within the quad. Which normal you pick is arbitrary, but it should be the same for all points within the quad. Then you construct a plane with that normal, going through the point in question P. This plane then intersects two edges, and these edges should be used for the initial interpolation.
Please note the final remark, i.e. you should normalize after interpolating along the initial edges, then again after you interpolate between those two “edge-point normals”. I incorrectly stated only to do the latter step.
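The two-stage scheme just described can be sketched as follows. This is a hedged sketch assuming the cutting plane meets the two opposite edges at the same parameter u (as in an axis-aligned quad parameterization); all names are mine:

```python
import math

def _normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def _lerp(a, b, t):
    return tuple((1.0 - t) * x + t * y for x, y in zip(a, b))

def quad_normal(n00, n10, n01, n11, u, v):
    # Stage 1: interpolate along the two cut edges, normalizing each result.
    edge_bottom = _normalize(_lerp(n00, n10, u))
    edge_top = _normalize(_lerp(n01, n11, u))
    # Stage 2: interpolate between the two edge-point normals, normalize again.
    return _normalize(_lerp(edge_bottom, edge_top, v))
```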
is that you pick one of the normals consistently for all points P within the quad
I really can’t understand this. Suppose you have a face lying on the XY plane and you choose that same plane for splitting: you will not find any intersection point on the edges. It is not so easy to select the right plane, what do you think?
Am I missing something?
Well, if all normals point in the Z direction (I’m going to assume the same Z direction, since the mesh should be “nice”), then there’s nothing to interpolate. If one of the normals does not point directly in the Z direction, pick it, and it will (should) split the edges.
I’m not certain what would be a good criterion for picking the “splitting normal”. Perhaps pick the one with the greatest angle to the face normal? From what I can gather, this should work in all cases except when all normals equal the face normal, in which case the interpolated normal is the face normal as well.
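That heuristic is easy to express in code. A sketch, assuming unit-length normals (greatest angle to the face normal = smallest dot product; the name is mine):

```python
def pick_splitting_normal(vertex_normals, face_normal):
    # The vertex normal with the greatest angle to the face normal
    # is the one with the smallest dot product against it.
    return min(vertex_normals,
               key=lambda n: sum(a * b for a, b in zip(n, face_normal)))
```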
You are right, it should work.
What about getting a normal on the edge? Can the following equation work?
normalOnEdge = (d1 * N1 + d2 * N2) / 2
Where d1 is 0.2 and d2 is 0.8 (the relative distances from the vertices).
Regular linear interpolation should be used, so if you drop the division by two there, it should work.
The (n1+n2)/2 is not working either. It does not take into account the distances from the original vertices. It is far better to check the relative distance along the edge and split the line connecting the normals’ endpoints at the same percentage.
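The distance-weighted version just described looks like this in code (a sketch; d1 and d2 are the point’s distances to the two edge vertices, and the name is mine). Note a point close to vertex 1 must be dominated by N1, so N1 gets the weight d2/(d1+d2):

```python
import math

def edge_normal(n1, n2, d1, d2):
    # Fractional position along the edge, measured from vertex 1.
    t = d1 / (d1 + d2)
    # Split the segment between the two normal endpoints at that percentage,
    # then renormalize the result.
    v = tuple((1.0 - t) * a + t * b for a, b in zip(n1, n2))
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)
```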
Thanks so much again for your help.
Now hopefully we can implement the whole algorithm.
EDIT: You are right, we are saying the same thing.
You’re welcome. Hope it works out alright.