For a start, you need to call glEnable(GL_LIGHTING). That results in vertex colours being calculated from the light and material settings. Otherwise, the vertex colour is whatever is set by glColor (or glColorPointer).
If you enable lighting, you need to supply vertex normals with glNormal (or glNormalPointer). These are required to calculate both the diffuse and specular components, which are normally the dominant components in the lighting model (the ambient and emissive components aren’t affected by normals).
If you enable lighting, glColor (or glColorPointer) is ignored unless you also enable GL_COLOR_MATERIAL, in which case the material properties are updated on a per-vertex basis according to glColorMaterial.
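To tie those pieces together, here is a minimal setup sketch in C. The specific light/material values and function names like `setup_lighting` are illustrative assumptions, and a current GL context (e.g. from GLUT or GLFW) is assumed:

```c
#include <GL/gl.h>

/* Sketch only: assumes a current GL context; values are placeholders. */
void setup_lighting(void)
{
    glEnable(GL_LIGHTING);   /* vertex colours now come from lights + materials */
    glEnable(GL_LIGHT0);

    GLfloat diffuse[] = {1.0f, 1.0f, 1.0f, 1.0f};
    glLightfv(GL_LIGHT0, GL_DIFFUSE, diffuse);

    /* Optional: let glColor drive a material property per vertex. */
    glEnable(GL_COLOR_MATERIAL);
    glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT_AND_DIFFUSE);
}

void draw_triangle(void)
{
    glBegin(GL_TRIANGLES);
    glNormal3f(0.0f, 0.0f, 1.0f);  /* normal must be current before the vertices */
    glColor3f(1.0f, 0.0f, 0.0f);   /* feeds the material via GL_COLOR_MATERIAL */
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}
```

Without the GL_COLOR_MATERIAL block, the glColor3f call would simply be ignored while lighting is enabled, and the object would be lit using whatever glMaterial settings are current.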
Yes, I enabled light0 and light1. I have an object at 0,0,0 and my camera is at 0,0,1. I’m finding I can’t move light0 and the light from light1 is ignored.
OpenGL also seems very sensitive to the order in which I do things. There seem to be arcane, unwritten rules about that.
In many cases, yes. OpenGL is a “state machine”, meaning that most OpenGL calls effectively set global (GL context-specific) state. Drawing commands use the currently set state to actually draw stuff. So the order matters: you set the state, then you draw, then you set new state, then you draw with that, etc.
Furthermore, you’re using the outdated OpenGL fixed-function pipeline, which exhibits this problem far more than modern OpenGL. Fixed-function OpenGL is extremely opaque about what it’s actually doing until you sit down and learn the myriad rules and options available to you.
Shader-based OpenGL is a bit friendlier in this regard, as it forces you to program everything yourself. Shaders give you the tools to create the “rules and options”, so nothing is hidden.
Overall, while we can offer tips, without seeing the actual code we can’t offer anything more than that.
And even some state-setting commands depend on the current state. For example, the light position passed to glLightfv(GL_LIGHT0, GL_POSITION, …) is transformed by the model-view matrix that is current at the time of the call; the resulting eye-space position is what gets stored and used for rendering.
All of this is documented, but you have to actually read it.
Initially, I’d suggest starting with the red book examples and modifying them.
OK that clears it up, thanks. Is there a state diagram anywhere that I can look at?
An example of something that puzzled me: one code sample used a single normal per triangle. Another used three normals, each specified after its vertex. A third also used three, but gave each normal before its vertex. It seems only the last sample was correct.
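The rule behind this: glNormal sets the *current* normal, which each subsequent glVertex captures. So a normal must be issued before the vertex it applies to; a normal issued after a vertex only affects the vertices that follow it. That makes both the one-normal-per-face pattern and the normal-before-each-vertex pattern valid, while normal-after-vertex is off by one. A sketch (coordinates and normal values are placeholders, and a current GL context is assumed):

```c
#include <GL/gl.h>

void draw_flat_triangle(void)
{
    glBegin(GL_TRIANGLES);
    /* One normal for the whole face (flat shading): set it once,
       before the first vertex -- all three vertices capture it. */
    glNormal3f(0.0f, 0.0f, 1.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    glVertex3f( 1.0f, -1.0f, 0.0f);
    glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}

void draw_smooth_triangle(void)
{
    glBegin(GL_TRIANGLES);
    /* Per-vertex normals: each glNormal comes BEFORE the glVertex
       it belongs to. */
    glNormal3f(-0.5f, 0.0f, 0.87f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glNormal3f( 0.5f, 0.0f, 0.87f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glNormal3f( 0.0f, 0.5f, 0.87f); glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();
}
```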