I’ve been working with OpenGL for about six months now and love it! However, I’ve hit a stumper. I’m working on a terrain engine. I average all the normals for per-vertex lighting and get satisfactory results (I plan on weighting the normals next version for even better results). But when it comes to coloring, to indicate different elevations, I get an ugly side effect whenever one or two vertices in a triangle differ in color (I’m using triangle strips). For example, if one vertex is blue and the other two are cyan, smooth shading seems to add an incline that isn’t really there, and you can make out a triangle (actually, two-thirds of it). The only solution I can think of (and I’m not sure it’s even possible) is to access the fragment for each pixel, extract the Y screen component, and color the pixel based on that value instead of coloring the vertex.
Am I lost in the depth buffer or am I making this harder than it actually is?
Thanks in advance.