Per-vertex lighting vs. per-pixel lighting

I did a quick search for this question, but it doesn’t seem to have been discussed yet, so here we go:

  1. Does a shader that implements lighting only in the fragment shader necessarily realize per-pixel lighting? If not, what is needed to implement such a technique? I think I have implemented per-pixel lighting, but I can’t be sure, since none of the information I have uses these terms.

  2. Which of the two techniques is faster (in terms of rendering performance)?
    Can the two techniques be mixed, and can such a mixture be faster than either of them alone?

  3. Which one looks best, and which one best respects the physics of lighting (I use the Phong/Blinn shading model)? From what I know, I’d say vertex lighting should respect the lighting model best, because fragments may lack some information like good normal interpolation. Is that true?

Well, from your post it seems that you don’t know how the color is computed in per-vertex and in per-pixel lighting.
The main difference is that in per-vertex lighting the color is computed for each vertex and then interpolated between the vertices. In per-pixel lighting the normals are interpolated between the vertices and the color is computed for each fragment.

So for your questions:

  1. No. To implement per-pixel lighting you have to pass the normal of each vertex as a varying variable to the fragment shader and then compute the lighting there. (Usually you also pass the light direction as a varying, but the most important thing is to use the interpolated normal vector.)

  2. Definitely per-vertex (apart from some special cases). Computing the color for each fragment is usually much slower than computing it only for each vertex and interpolating.

  3. From what I said at the beginning it should be clear that per-pixel lighting is by far the more realistic of the two. :slight_smile:

Try this page: http://www.clockworkcoders.com/oglsl/tutorials.html

They have some per-pixel lighting tutorials there.

So, if I understand you, per-vertex ‘coloring’ interpolates the color between the vertices (and doesn’t care about normals), so if the surface is curved, the interpolation will still look flat. Per-fragment ‘coloring’, on the other hand, uses the normal to better match the rendered surface (and so each fragment) to the real appearance of the surface. That’s what I thought it was, but you may tell me that’s not quite right.

So, let’s see for the points.

  1. You say the main thing is to interpolate the normal vectors. Okay. I already use normals, but I must admit that I don’t interpolate them myself. In fact, that was one of the things I found strange: I was (surely wrongly) thinking that GL interpolates them on its own. However, I do use varying variables to pass the normals from the vertex to the fragment shader. I must also admit I don’t really know how to interpolate an incoming normal from within a fragment shader.

  2. Okay. But there are some points I don’t understand. How can the auto-calculated lighting values (like the half-vector) be used outside the vertex shader? Do you have to calculate them yourself? If so, does GL still calculate the half-vector (and so on) for the fragment? (Since, if I’m not wrong, the fragment shader works in eye-space coordinates whereas the vertex shader doesn’t.)

  3. This, if I’m not wrong, comes down to the fact that the default interpolation is linear, which is usually wrong, but faster.

Actually, from what we said, I’d say I do per-pixel lighting (depending on whether the normal interpolation must be done explicitly inside the fragment shader or not).

Finally, I looked at the link you supplied. I already knew it, but maybe I didn’t read it carefully enough…

Thank you for your help.

  1. Of course the interpolation is done automatically by the GPU. But from your last sentence I assumed that you were not sending the normal as a varying (you wrote: “fragments may lack some information like good normal interpolation”).

  2. You can calculate your lighting in any coordinate system you want. The only important thing is to write a correct value into gl_FragColor :slight_smile: . You just have to transform all your data into the same coordinate system.
    The half-vector is calculated for each vertex, and it is already in eye space.
    You have to pass it to the fragment shader as a varying variable (see the sketch after this list).

  3. Yes, but the results are quite correct (the normals are usually normalized again in the fragment shader).
    If you want more direct control over your normals, you can use normal maps.
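
A minimal sketch of what point 2 means in practice, assuming the legacy GLSL built-in state (gl_LightSource[0], gl_FrontMaterial) and a single light; the variable names are illustrative:

// vertex shader: forward the eye-space normal and half-vector as varyings
varying vec3 normal;
varying vec3 halfVec;
void main()
{
    gl_Position = ftransform();
    normal  = gl_NormalMatrix * gl_Normal;      // transformed into eye space
    halfVec = gl_LightSource[0].halfVector.xyz; // already provided in eye space
}

// fragment shader: evaluate the Blinn specular term per fragment
varying vec3 normal;
varying vec3 halfVec;
void main()
{
    vec3 n = normalize(normal);  // re-normalize: interpolation shortens the vector
    vec3 h = normalize(halfVec);
    float spec = pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);
    gl_FragColor = gl_FrontMaterial.specular * spec;
}

Everything here stays in one (eye) space, which is the whole point of the answer above.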

Btw, probably better tutorials about GLSL lighting are here:
http://www.lighthouse3d.com/opengl/glsl/index.php?lights

Both vertex lighting and pixel lighting are described there, even with some background theory.

Okay.

  1. If the interpolation is done automatically, then I can say I use per-pixel lighting. I was already sending the normal as a varying; I actually don’t know any other way to do it.
    The main point, for me, is that the normal isn’t the same for every fragment, since they don’t have the same position. So a vertex normal is only ‘right’ for the single matching fragment (the one corresponding to that vertex). A fragment in the middle of a triangle, say, cannot have the same normal as the 3 vertices that define the triangle: that is what puzzles me.
    I wrote that ‘fragments may lack some information like good normal interpolation’ just because I didn’t see any way for a fragment to know what the surface should look like (whether it’s flat or curved, and how), and linear interpolation is often wrong.

  2. Sorry, but I know that: it’s mainly a mathematical issue :wink: you can’t mix calculations that come from different coordinate systems; that’s the rule of thumb.
    So, if I decide to use per-vertex lighting, I must calculate all the needed information (like the half-vector) and send it to the shader, whereas if I use per-pixel lighting, all those calculations are done automatically by GL. I understand that point well now.

  3. You raised an interesting point: normal maps. I have already heard about them, but I don’t know anything about them. Maybe the link you gave talks about them; I’ll check, hopefully tomorrow.
    But actually, my main question on this point is whether a normal map gives better normal interpolation than what GL calculates (with only linear interpolation)?

I’m sorry: after all these years I’m still not really good at expressing myself in English, and I often forget to mention the main things.

Thank you again, you helped me.

regards

  1. When you use both a vertex and a fragment shader, the lighting is never calculated automatically. The difference between per-vertex and per-pixel lighting is only where it is calculated. So, for example, when you calculate the diffuse part of the lighting you will write something like this:

diffuse = max(dot(normal, lightDir), 0.0) * diffuseColor;

If you are using per-vertex lighting you write this in the vertex shader; for per-fragment lighting you write it in the fragment shader…

But all of this is quite well explained at the link I gave you, so I’m sure you will soon know what to do.

  1. I’m afraid the normal maps are not explained at the link I gave you, but there are a lot of other sites on the net about them :slight_smile: . As for your question… normal maps don’t provide any interpolation; they are just textures mapped onto a surface. If the normal map is in tangent space, it stores an offset from the normal interpolated by the GPU. If the normal map is in object space, it stores the normal for the given fragment directly, so it can be used for the lighting calculation as-is.
    But believe me, the linear interpolation done by the GPU is more than enough for per-pixel lighting. You just have to normalize the interpolated normal vector. (A small sketch of sampling a normal map follows below.)
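
A minimal sketch of sampling an object-space normal map in a fragment shader; the names normalMap, texcoord and lightDir are illustrative, and the light direction is assumed to be supplied in object space:

// fragment shader: fetch the object-space normal from a texture
varying vec2 texcoord;       // texture coordinates passed from the vertex shader
uniform sampler2D normalMap; // normal map bound by the application
uniform vec3 lightDir;       // object-space light direction (assumption)
void main()
{
    // the texture stores components in [0,1]; remap to [-1,1] and normalize
    vec3 n = normalize(texture2D(normalMap, texcoord).rgb * 2.0 - 1.0);
    float ndotl = max(dot(n, lightDir), 0.0);
    gl_FragColor = vec4(ndotl, ndotl, ndotl, 1.0);
}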

I have the same problem with English as you :slight_smile: , but to be honest I have this problem even when speaking/writing in my mother tongue :slight_smile:

A simple “ndotl” lighting equation will show little or no difference in the final output whether the normal is interpolated per-vertex and the ndotl factor is calculated in the fragment shader, or the ndotl factor is calculated per-vertex (in the vertex shader) and the interpolated value is used in the fragment shader.

Here is the normal being interpolated per-vertex:

// vertex shader
varying vec3 normal;
void main()
{
    gl_Position = ftransform();
    // assuming the model transform is identity, so object space == world space
    normal = gl_Normal;
}

// fragment shader
varying vec3 normal;
uniform vec3 lightDir;
void main()
{
    // re-normalize: linear interpolation shortens the vector
    float ndotl = clamp(dot(normalize(normal), lightDir), 0.0, 1.0);
    gl_FragColor = vec4(ndotl, ndotl, ndotl, 1.0);
}

Here is the ndotl being interpolated per-vertex:

// vertex shader
varying float ndotl;
uniform vec3 lightDir;
void main()
{
    gl_Position = ftransform();
    // assuming the model transform is identity, so object space == world space
    ndotl = clamp(dot(gl_Normal, lightDir), 0.0, 1.0);
}

// fragment shader
varying float ndotl;

void main()
{
    gl_FragColor = vec4(ndotl, ndotl, ndotl, 1.0); 
}

I wrote this code off the top of my head and didn’t compile-check it, but you should get the idea.

To Trahern:

Well, doesn’t GL provide auto-calculated values inside the gl_LightSource[x] state, expressed in eye-space coordinates? I’m pretty sure it does.

I’ll look into normal maps too. I know that linear interpolation gives ‘good’ results, but I’m pretty sure we can get better renderings without much more GPU usage (I just hope normal maps will help me with that).

(About our languages: I must admit that when speaking/writing English, I always think directly in English, not in French. So that might explain some of the inconsistencies :slight_smile: )

To Zulfiqar:

What do you mean? Should I prefer per-vertex lighting calculations over per-fragment ones, since they are faster and the differences are almost unnoticeable?
I see the point, but the main problem with that solution, as I’ve surely mentioned in my previous posts, is that I’d have to do most of the calculations in eye-space coordinates myself, whereas in fragment shaders I’m already in eye-space coordinates. But maybe you were talking about something else (I’m still a beginner with shaders).

Originally posted by jide

What do you mean? Should I prefer per-vertex lighting calculations over per-fragment ones, since they are faster and the differences are almost unnoticeable?

If you are doing a simple ndotl with per-vertex normals and nothing fancy like normal mapping or per-pixel light attenuation, then you can do your calculations per-vertex and pass the interpolated results to the fragment shader. Given that you are using a fixed light direction (like a directional light), it doesn’t matter whether you use the interpolated normal or the interpolated ndotl; the results will be much the same.

Originally posted by jide

I see the point, but the main problem with that solution, as I’ve surely mentioned in my previous posts, is that I’d have to do most of the calculations in eye-space coordinates myself, whereas in fragment shaders I’m already in eye-space coordinates. But maybe you were talking about something else (I’m still a beginner with shaders).

You are in no particular space in fragment shaders! Look, there is one type of data you can get in vertex shaders, i.e. uniforms. In fragment shaders you can have uniforms as well as varyings. Uniforms are application-specified variables and can be in any coordinate space. Varyings will be in whatever space they were in in the vertex shader. The interpolator simply interpolates the values linearly between the vertices (the linear interpolation is perspective-correct) and provides per-fragment values in the varying variables.

Originally posted by jide:

Well, doesn’t GL provide auto-calculated values inside the gl_LightSource[x] state, expressed in eye-space coordinates? I’m pretty sure it does.

Yes, it provides some precomputed values (like the half-vector), but you still have to compute the final lighting yourself…

As for your question for Zulfiqar… it may be true that the diffuse part of the lighting can be computed in the vertex shader without a significant quality loss (well, I haven’t tested it, so you will have to try it yourself), but the most problematic part of per-vertex lighting is usually the specular lighting (the highlights).

And if you want to implement effects like bump mapping, then you will have to calculate even the diffuse part of the lighting in the fragment shader.
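
A minimal sketch of the split Trahern describes, computing the diffuse term per-vertex and only the specular term per-fragment; it assumes the legacy built-in state, a directional light in gl_LightSource[0], and illustrative variable names:

// vertex shader: diffuse per-vertex, specular inputs forwarded as varyings
varying vec4 diffuse;
varying vec3 normal;
varying vec3 halfVec;
void main()
{
    gl_Position = ftransform();
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);     // eye space
    vec3 l = normalize(gl_LightSource[0].position.xyz);  // directional light assumed
    diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse
              * max(dot(n, l), 0.0);
    normal  = n;
    halfVec = gl_LightSource[0].halfVector.xyz;
}

// fragment shader: specular per-fragment from the interpolated vectors
varying vec4 diffuse;
varying vec3 normal;
varying vec3 halfVec;
void main()
{
    vec3 n = normalize(normal);
    vec3 h = normalize(halfVec);
    float spec = pow(max(dot(n, h), 0.0), gl_FrontMaterial.shininess);
    gl_FragColor = diffuse
                 + gl_FrontMaterial.specular * gl_LightSource[0].specular * spec;
}

The diffuse term interpolates well (as discussed above), while the highlight keeps its per-fragment quality.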

Zulfiqar, you wrote: “You are in no particular space in fragment shaders!”
You’re right, but what I was trying to say is that all the information GL pre-calculates for the shaders (like the half-vector) is expressed in eye space.

Trahern, could you explain why specular lighting could be a problem with vertex lighting? At this point, I don’t see why.

To both of you: I understand the whole process much better now, so I’ll surely (sooner or later) also implement a vertex-lighting version to compare the efficiency and the rendering results of the two.

Thank you for having taken the time for my problems :slight_smile:

The problem with specular lighting in per-vertex lighting is that it usually looks really ugly on low-poly models. I’ll show you what I mean (just a little example with a point light and a low-poly sphere).

Per-Vertex Lighting: [screenshot]

Per-Fragment Lighting: [screenshot]

Wireframe: [screenshot]

Usually it’s much worse than in these screenshots. A highlight exactly at a vertex is the best possible case. When the highlight falls inside a polygon, it will disappear completely with vertex lighting.

Things get even worse when the objects and lights move…

Could it be due to bad interpolation? I guess so.

Well, yes… the specular color is computed at each vertex and then interpolated between them. But you simply can’t obtain a better result with per-vertex lighting unless you make the model higher-poly. And as Overmind stated, the screenshot above is the best possible case. When the highlight falls between the vertices, it will look like this:

Per-Vertex: [screenshot]

Per-Fragment: [screenshot]

Believe it or not, the screenshots were taken from the exact same location and light direction.
In the per-vertex-lit image you can’t see any highlight, because the highlight should fall between the vertices, but the lighting is calculated only at each vertex and then interpolated; in this case no vertex has any highlight on it, so there is nothing to interpolate. A rough worked example of this effect follows below.
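
To put an illustrative number on it (made-up values, not measured from the screenshots): with a shininess exponent of 64, N·H = 0.8 at every vertex of a face, and N·H ≈ 1.0 at the face center, the two methods give:

per-vertex:   pow(0.8, 64) ≈ 0.0000006 at each vertex, so the interpolated value is ~0 across the face
per-fragment: pow(1.0, 64) = 1.0 at the face center, so the highlight appears at full strength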

I believe you, Trahern. But that starts up my torment again :slight_smile: Should I do per-vertex lighting or not? Anyway, this is a question only I can answer, and at this moment I’d choose to keep per-fragment lighting. I also need to look into that normal-map story; I’ll try to find some info about it, still hoping it will help with the interpolation.

Originally posted by jide

I believe you, Trahern. But that starts up my torment again :slight_smile: Should I do per-vertex lighting or not? Anyway, this is a question only I can answer, and at this moment I’d choose to keep per-fragment lighting. I also need to look into that normal-map story; I’ll try to find some info about it, still hoping it will help with the interpolation.

Yes, jide, you are the best person to make that decision. Trahern is right: per-vertex lighting usually gives bad results in comparison to per-fragment lighting. I was just trying to clear up a few things regarding the use of the interpolators :slight_smile: . It is useful to know the details about them, because you will definitely find out one day that your lighting shader is giving really poor performance :slight_smile: ; then you will try to optimize it, and that is when you will need to shift some calculations to the vertex shader, and to know how to use the interpolators to minimize the per-fragment calculations with little or no visual difference.
Secondly, Trahern’s pictures are relevant to positional lights like point and spot lights. In the case of directional lights, there will be no difference in the diffuse component using per-vertex normals, whether the component is calculated per-vertex or per-fragment.
