OpenGL Lighting Model Basic Question

If I:

  • set up a scene with a fixed directional light shining on a sphere, and
  • I rotate the eye point around that sphere (so the light is moving around relative to the eye), and
  • I have GL_LIGHT_MODEL_LOCAL_VIEWER set to GL_FALSE

would you expect the specular highlight to move around on the sphere? I certainly would, because I would expect specular lighting to use Blinn halfway vectors dotted against vertex normals to compute specular highlight locations. But what I'm getting is a highlight that is glued to a particular place on the sphere. As though it is not using a Blinn halfway vector, but rather is just dotting against the light direction.

Does it sound like:

  1. I’m overestimating the capabilities of OpenGL lighting, or
  2. I set something up wrong?

The light is transformed through the current modelview matrix to eye space and stored there.

To move the light relative to the eye in any way, you need to respecify its position with a new matrix on the modelview stack or with a new position.
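
In other words, each frame you want something like this (a rough sketch; eyeX/eyeY/eyeZ and drawScene are just placeholders):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);   // viewing transform first

    GLfloat lightDir[] = { 0.0f, 0.0f, 1.0f, 0.0f };   // w = 0 means a directional light
    glLightfv(GL_LIGHT0, GL_POSITION, lightDir);       // transformed by the current modelview into eye space

    drawScene();                                       // placeholder for your drawing code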

Judging from the diffuse lighting, I would say the light is pointing the right direction. That is, as I rotate the eye around the sphere, the bright spot stays put on the sphere. But the specular highlight is right dead center in the bright spot as well, no matter how I rotate things, which isn't what I would expect.

Perhaps I can rephrase this question to be more direct:

When computing specular highlights, does OpenGL dot the vertex normals against the light direction or against a vector halfway between the eye & light directions?

It should use Blinn's halfway vector (halfway between the view vector and the light direction) dotted with the normal.
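
In other words, roughly: H = normalize(L + V), and the specular term is max(dot(N, H), 0) raised to the shininess exponent, where L is the unit vector from the vertex toward the light, V the unit vector toward the eye, and N the vertex normal. (That's a sketch of the idea, not the spec's exact equation.)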

With localviewer true, the view vector is computed per vertex.

Hmm, localviewer false will use the eye-space +Z axis as the view vector for all vertices.

Try setting localviewer to true.
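
i.e. something like:

    glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);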

My specular highlights disappear with localviewer set to true. This is a bit of a mystery to me. From reading the docs, it looks like with localviewer=false, the to-eye vector is just +Z, whereas with localviewer=true, the to-eye vector is normalize(eyePos - vertexPos), which would be darn close to +Z in eye space, and should be exactly +Z for a vertex in the middle of the projection plane.

So I wouldn’t expect the localviewer setting to make much of any difference with a small sphere test case like mine. And I’m quite mystified why localviewer=true would make the highlight disappear altogether.

Hmm… are you loading your viewing matrix on the projection matrix stack? Or doing projection on the modelview?

Distinguishing these correctly is critical to certain fixed-function features, like specular, which requires the view vector to be computed correctly.

You should be doing projection on the projection matrix stack and viewing on the modelview stack (or in software).

Look at the glMatrixMode call.
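
Roughly, the setup should look like this (a sketch; the gluPerspective/gluLookAt parameters are placeholders):

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, aspect, 0.1, 100.0);   // projection only on the projection stack

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);   // camera / viewing transform
    // ...lights and model transforms go here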

Originally posted by dorbie:
Hmm… are you loading your viewing matrix on the projection matrix stack?
Wow. That’s scary. You’ve been doing this a long time, haven’t you. Where do I send the beer?

You nailed it. I was setting my camera matrix in the context of the GL_PROJECTION matrix. Seemed reasonable – I think of camera positioning as part of projection. But moving it down two lines to be in the GL_MODELVIEW made specular highlights work right.

:) <paranoia>… maybe I can see your code</paranoia>

Cool, so you’re now doing viewing transformation in hardware too. That means you’re using a lot of the fixed function pipeline for hardware acceleration.

Now all you have to do is multiply model matrices with the modelview stack and you’ll be fully hardware accelerated.
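
Per transform node that's roughly (nodeMatrix and drawNode are placeholders for whatever your scene structures call them):

    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glMultMatrixf(nodeMatrix);   // this node's model matrix (column-major GLfloat[16])
    drawNode();                  // issue the node's geometry
    glPopMatrix();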

Is that really done by the GPU? We'll be deploying on a high-end game card (something from ATI or nVidia in the $400 range). Has matrix math really been offloaded from the CPU to those cards? (Last I knew, which was years back, gamer graphics cards didn't accelerate much of the matrix math.)

The question is academic, though. There's no way I'd tear up our software pipeline for the nominal increase that moving those transforms off the CPU would get me. Our target application will be showing models with modest polygon counts (say, 20,000), but tons of texture. A lot like the models you see on our site: http://www.kaon.com/services3D.html

For what it’s worth, making the changes you suggested had no impact at all on performance. But it did solve my perspective correction issues. (Yippee!)

Next step is implementing “pick” functionality, which looks like such a mess in OpenGL, I might just do it in software.
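
(For reference, the GL selection-mode version looks roughly like this; mouseX/mouseY, aspect, and drawSceneWithNames are placeholders:)

    GLuint selectBuf[512];
    GLint viewport[4], hits;

    glSelectBuffer(512, selectBuf);
    glRenderMode(GL_SELECT);
    glInitNames();
    glPushName(0);

    glGetIntegerv(GL_VIEWPORT, viewport);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    gluPickMatrix(mouseX, viewport[3] - mouseY, 5.0, 5.0, viewport);   // small region around the cursor
    gluPerspective(60.0, aspect, 0.1, 100.0);                          // same projection as for rendering

    glMatrixMode(GL_MODELVIEW);
    drawSceneWithNames();          // calls glLoadName(id) before each pickable object

    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    hits = glRenderMode(GL_RENDER);   // number of hit records now in selectBuf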

Then I’ll start optimizing, although I suspect the application is going to be mostly fill-rate bound, because our polygon counts are so low.

It's almost always done on the GPU: not necessarily the matrix mults, but certainly the vertex transformation. Moreover, it's the same cost as the viewing transformation you're already doing, since it's a concatenated matrix.

The performance gained will depend on the content and the platform. Intel on-chip graphics, for example, still transforms in software, but almost all graphics cards made in the past 5 years transform and light on the GPU.

Your claim is entirely content related, but it's also spurious until you move the model matrix transform to the GPU for comparison. You seem to have some middleware; the observation that your demos are fill limited and so you shouldn't care about geometry performance is a cop-out.

As for tearing up a perfectly good software pipeline… sigh, how much work is pushmatrix, multmatrix, popmatrix for each transformation node? Nobody is suggesting you tear anything up, just eliminate redundant software transformation when you’re already doing the matrix multiplication in hardware… and that’s exactly what you’re already doing with the viewing transformation.

You seem to look at these things and shy away from the move to hardware; why should a matrix mult for transform nodes significantly complicate your software or involve tearing anything up?

If you can handle the geometry well, content developers will take advantage of it and you will see complex geometry; saying you are geometrically simple is a self-fulfilling prophecy in this case.

Further, moving work to the GPU frees up the CPU for processing, threading or other system activities.

Originally posted by dorbie:
Your claim is entirely content related, but it's also spurious until you move the model matrix transform to the GPU for comparison. You seem to have some middleware; the observation that your demos are fill limited and so you shouldn't care about geometry performance is a cop-out.
It's a pretty well-established principle that you shouldn't optimize until everything is working and you have evidence of where your bottlenecks are. I'm not completely ruling out moving bits of the transform hierarchy to the GL pipeline, I'm just extremely skeptical that when I get some stats, we'll find that we're spending more than a few milliseconds doing geometry transforms.

Like a game engine, we focus on getting the best performance possible with a known kind of content – we’re not trying to make a great general-purpose engine, but rather an insanely-great special-purpose engine. So yes, we know exactly what kind of content will go into this pipeline, and one of the constraints is that the SAME content will be deployed on the web, which has very well understood limitations on polygon count.

Originally posted by dorbie:
As for tearing up a perfectly good software pipeline… sigh, how much work is pushmatrix, multmatrix, popmatrix for each transformation node? Nobody is suggesting you tear anything up, just eliminate redundant software transformation when you’re already doing the matrix multiplication in hardware… and that’s exactly what you’re already doing with the viewing transformation.
The issue has to do with software architecture and layering. I am working to avoid sticking GL stuff all over an existing, highly optimized pipeline. There are a few key places where it is natural to shunt an existing data stream out of our pipeline and over to GL. Transforming geometry isn't one of them. (Note that 90% of the "animation" done in the product visualization space we serve involves simply moving the eye point around, so the geometry transformation you're so concerned about hardly ever runs anyway.)

Also, from what I’ve read, the MODELVIEW matrix stack is generally of fixed depth. In the general purpose case, that must mean that you have to implement this stuff in both places anyway, in case you run out of room in the stack.

I've never seen an app legitimately overflow the modelview matrix stack; usually it's a missing popmatrix call. It's guaranteed to be at least 32 deep. Worst case, you could manually multiply the last matrix: still no need for software transform, just a software matrix mult.
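
That worst case could look something like this (a sketch; mat4Multiply is a placeholder for your own 4x4 multiply):

    GLint depth, maxDepth;
    GLfloat current[16], result[16];

    glGetIntegerv(GL_MODELVIEW_STACK_DEPTH, &depth);
    glGetIntegerv(GL_MAX_MODELVIEW_STACK_DEPTH, &maxDepth);

    if (depth < maxDepth) {
        glPushMatrix();
        glMultMatrixf(nodeMatrix);
    } else {
        // out of stack room: multiply in software, but the vertices still transform on the GPU
        glGetFloatv(GL_MODELVIEW_MATRIX, current);
        mat4Multiply(result, current, nodeMatrix);   // placeholder 4x4 multiply
        glLoadMatrixf(result);                       // restore with glLoadMatrixf(current) afterwards
    }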

You’re already doing the viewing matrix mult, doing the model on there is free. This is not a case of premature optimization, it’s about using a resource you’re already using in the correct and standard way for ‘free’ with very little effort. Anyhoo, I’m done trying to persuade you to do the right thing.

Read the Red Book. I ran into this problem before too. The light position is transformed by the modelview matrix, so you should do something like:

draw()
…
glLight()
…

Read the thread; that was not his problem. In any case, your post is incorrect.

Your light position would lag one frame behind at best.

You need to place the viewing matrix on the modelview stack, then position world-space lights, then draw the scene.

For lights under model matrix transformation, you need to apply those transformations to the modelview matrix before positioning the object-space lights. Then draw the scene.
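
In code that ordering is roughly (a sketch; the names are placeholders):

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eyeX, eyeY, eyeZ,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);   // viewing matrix first

    glLightfv(GL_LIGHT0, GL_POSITION, worldLightPos);   // world-space light: only the viewing transform applies

    glPushMatrix();
    glMultMatrixf(objectMatrix);                        // model transform for this object
    glLightfv(GL_LIGHT1, GL_POSITION, objectLightPos);  // light attached to the object, in object space
    drawObject();
    glPopMatrix();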