This is my first attempt at a fun OpenGL project. So far I have done pretty much the same thing on an Android device and have now ported it to Windows .NET.
Here is a link to my Visual Studio project:
https://www.dropbox.com/s/wndp42nb9o37465/KarGL.zip?dl=0 (about 5MB)
The problem is: coloring and texturing are working, and in some strange way the lighting (a single point light) works, too.
But the light, and the way it affects the objects’ surfaces, changes when I change the camera viewpoint. It is hard to explain; maybe some of you can take a look at the zipped project file.
Basically I am doing this:
Every object (only cubes so far) has the same vertex coordinates and then gets translated to its final position in the world. I have two cubes (initialized in World.cs) that are both centered around the XY axes. The camera looks at (0,0,0), so both cubes (one big, one small) are visible and in the center of the screen (for example, one cube is 2f wide, high and deep, going from x=-1 to x=1, and so on).
Now the light object is somewhere at (0,3,0), so it SHOULD be slightly above the world origin and directly above the small cube. But somehow the light stays at the lower left corner of the bigger cube, far away from (0,3,0). I’ve been trying to fix this for days now, without any success.
Do you know what I am doing wrong?
I think I solved your problem, but it’s a combination of quite a few small problems. I changed the display a little to remove the textures and get a better view of the light angle, so it now essentially looks like this:
I hope this is what you want.
I changed quite a lot, but I hope I can summarise the problems with your code here:
[ul][li]The light position in the shader should be in world space, NOT in eye space.[/li][li]The position you pass to the fragment shader should be in world or model space, NOT in eye space (vPosition = vec3(uM * …) instead of uMV * …). The reason is that you want to compute vectors and the angles between them, and those would change constantly as you move the camera, whereas in reality the light cone below a lantern doesn’t change when you move your head. There is one case where you do need the camera position (for reflection), but in a different and much easier way than you have here (see the Phong shader).[/li][li]The normal you pass to the fragment shader should likewise be in world (or model) space, NOT in eye space.[/li][li]Annotation: you should not transform vectors with a 4x4 matrix and simply discard the w component (which you do by casting them to vec3). The w component is needed to normalise their lengths (gl_Position does this by default).[/li][/ul]
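To illustrate the first three points, here is a minimal sketch of diffuse lighting done entirely in world space — plain Python standing in for the GLSL, with hypothetical names mirroring the uM/uMV uniforms. Note that no view matrix appears anywhere, so the result cannot change when the camera moves:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def mat4_apply(m, v, w):
    """Apply a row-major 4x4 matrix to (x, y, z, w); for points (w=1)
    divide by the resulting w instead of discarding it."""
    out = [sum(m[r][c] * x for c, x in enumerate((*v, w))) for r in range(4)]
    return tuple(o / out[3] for o in out[:3]) if w == 1.0 else tuple(out[:3])

def diffuse(model, light_world, vertex_obj, normal_obj):
    p = mat4_apply(model, vertex_obj, 1.0)             # uM * vertex, NOT uMV * vertex
    n = normalize(mat4_apply(model, normal_obj, 0.0))  # world-space normal
    l = normalize(sub(light_world, p))                 # vertex-to-light direction
    return max(dot(n, l), 0.0)                         # Lambert term

IDENTITY = [[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 1.0, 0], [0, 0, 0, 1.0]]
# Top face of the small cube, light at (0, 3, 0): lit head-on.
print(diffuse(IDENTITY, (0.0, 3.0, 0.0), (0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # 1.0
```

The same caveat as in the list applies: transforming the normal with the model matrix like this only works if the model matrix has no non-uniform scale.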
Maybe there is a way to accomplish what you were trying by transforming everything into eye space and I just don’t know about it, but so far I have always used world (or at most model) space and it has always worked out fine. See if you can reach your goal with the comments above; if you can’t, I’ll upload the code again.
Edit: It’s funny having to open a VS15 project with VS10. It constantly complains that it can’t open the project because it’s from a newer version, but after you delete or comment out a few lines in the .sln file, it opens with no problem…
Oh my god that did the trick! Brokenmind you are a genius!
That was so very stupid of me. Why would the eye space have anything to do with a fixed light position in my world…!!!
I also got rid of discarding the w component - like you told me.
So if I understand correctly: if I want specular highlights, I would also need to pass the modelview matrix as a uniform to the shader, because for these highlights the viewing angle is important?
And just one last question if you do not mind:
When reading through forums and tutorials for OpenGL I always read commands like glPushMatrix and glPopMatrix and so on… Do I ever need these commands? Aren’t they just deprecated?
All this fuss about world space, view space and object space, and model matrix, view matrix, projection matrix, modelview matrix and modelviewprojection matrix… for beginners it’s all very confusing, and you basically NEED someone to push you in the right direction. No reason to be ashamed of it.
Question 1: Yes and no. For diffuse lighting, the important factors are the light vector and the surface normal, because these define how bright the surface is. If you want specular lighting, you need a third parameter, namely the camera position, because the more the reflection of the light direction off the surface points toward the camera, the brighter it gets. For that you also need the normal matrix to get your normals into the right space. If you need someone to go into detail about it, maybe we should chat somewhere else.
Edit: The normal matrix is the inverse transpose of the model (or modelview) matrix.
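The reflection idea above is the classic Phong reflection model. Here is a small sketch of it (plain Python, hypothetical names; the real thing would live in the fragment shader) showing exactly where the camera position enters:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def reflect(l, n):
    """Reflect the (unit) to-light vector l about the (unit) normal n."""
    d = 2.0 * dot(l, n)
    return tuple(d * ni - li for li, ni in zip(l, n))

def phong(light, eye, p, n, shininess=32):
    """Diffuse and specular terms at surface point p with unit normal n."""
    l = normalize(sub(light, p))
    v = normalize(sub(eye, p))   # this is where the camera position comes in
    diff = max(dot(n, l), 0.0)
    spec = max(dot(reflect(l, n), v), 0.0) ** shininess
    return diff, spec
```

With both the light and the eye straight above a horizontal surface, the reflected ray hits the eye dead-on and the specular term is 1; move the eye off to the side and it drops off sharply — which is exactly why the eye position matters for specular but not for diffuse.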
Question 2: For OpenGL in particular, learning the basics is quite difficult, simply because the “basics” are 30 years old and deprecated in so many ways. Pushing and popping matrices on the stack is rarely done nowadays because you usually have classes managing those matrices (like your GLwindow), and since you transfer them to the shader yourself, pushing matrices onto GL’s stack has become unnecessary. BUT: they will never be removed, because all the 30-year-old code still has to run… somewhere. Also, in places where there are no shaders (VMs, for example, which may only offer OpenGL 1.0), you actually do need them.
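For the curious, here is a minimal sketch (plain Python, hypothetical class) of what such a “class managing those matrices” can look like — a client-side stand-in for glPushMatrix/glPopMatrix where the top of the stack is what you would upload as a shader uniform:

```python
class MatrixStack:
    """A client-side matrix stack mimicking glPushMatrix/glPopMatrix."""

    def __init__(self):
        self._stack = [self.identity()]

    @staticmethod
    def identity():
        return [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]

    @property
    def top(self):
        # The current matrix -- this is what you'd pass to the shader.
        return self._stack[-1]

    def push(self):
        # Duplicate the current matrix, like glPushMatrix.
        self._stack.append([row[:] for row in self.top])

    def pop(self):
        # Restore the previous matrix, like glPopMatrix.
        self._stack.pop()

    def mul(self, m):
        # Post-multiply the top by m, like glMultMatrix.
        t = self.top
        self._stack[-1] = [[sum(t[r][k] * m[k][c] for k in range(4))
                            for c in range(4)] for r in range(4)]
```

Typical use: push before drawing a child object, multiply in its local transform, draw, pop — the same pattern as the fixed-function stack, but under your own control.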
[QUOTE]So if I understand correctly: if I want specular highlights, I would also need to pass the modelview matrix as a uniform to the shader, because for these highlights the viewing angle is important?[/QUOTE]
The specular reflection coefficient depends upon three vectors: light, eye, normal. The diffuse reflection coefficient only needs two: light and normal (the eye position doesn’t matter).
It doesn’t matter which coordinate space you use, so long as it’s affine to world space and you use the same space for all of the vectors. So object space, world space and eye space will all work. Clip space (after application of the projection matrix) won’t work if the projection matrix contains a perspective projection. Eye space has the advantage that the eye position is always at (0,0,0,1); the fixed-function lighting (glLight(), glMaterial(), etc) uses eye space.
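That invariance is easy to check numerically. A small sketch (plain Python, hypothetical helper names): apply the same rigid transform to the light, the vertex, and the normal — standing in for a change of coordinate space — and the diffuse term comes out identical:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def rot_y(t, v):
    """Rotate a 3-vector about the Y axis -- a rigid change of space."""
    c, s = math.cos(t), math.sin(t)
    x, y, z = v
    return (c * x + s * z, y, -s * x + c * z)

def lambert(light, vertex, normal):
    return max(dot(normalize(sub(light, vertex)), normalize(normal)), 0.0)

light, vertex, normal = (0.0, 3.0, 0.0), (0.5, 1.0, 0.25), (0.0, 1.0, 0.0)
a = lambert(light, vertex, normal)
t = 1.234  # arbitrary rotation standing in for object/world/eye space
b = lambert(rot_y(t, light), rot_y(t, vertex), rot_y(t, normal))
print(abs(a - b))  # ~0: the choice of space doesn't matter
```

The key is consistency: every vector goes through the same transform. Mixing spaces (a world-space light with an eye-space position, as in the original shader) is exactly what breaks.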
[QUOTE]When reading through forums and tutorials for OpenGL I always read commands like glPushMatrix and glPopMatrix and so on… Do I ever need these commands? Aren’t they just deprecated?[/QUOTE]
They are deprecated. You don’t need to use them. They can be useful for modularity in conjunction with display lists, but that style of programming is effectively obsolete at this point.
For modern code, generate all matrices yourself and either pass them to shaders or (if using the fixed-function pipeline) use glLoadMatrix(). For any non-trivial program, you’ll probably want to have them on the client anyhow (e.g. for picking, collisions, etc.), and reading the fixed-function matrices back from OpenGL via glGet() is orders of magnitude slower than just generating them client-side.
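As an example of “generate all matrices yourself”, here is a sketch of building the view matrix client-side in the style of the classic gluLookAt (plain Python, hypothetical helper names; a real project would use a math library):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    n = dot(v, v) ** 0.5
    return tuple(x / n for x in v)

def look_at(eye, target, up):
    """Build a row-major view matrix, like the classic gluLookAt."""
    f = normalize(sub(target, eye))   # forward
    s = normalize(cross(f, up))       # right
    u = cross(s, f)                   # corrected up
    return [
        [ s[0],  s[1],  s[2], -dot(s, eye)],
        [ u[0],  u[1],  u[2], -dot(u, eye)],
        [-f[0], -f[1], -f[2],  dot(f, eye)],
        [  0.0,   0.0,   0.0,          1.0],
    ]

# Camera at (0, 0, 5) looking at the origin: the origin ends up
# 5 units in front of the camera, i.e. at eye-space z = -5.
m = look_at((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
```

Since you build the matrix yourself, it is trivially available on the client for picking or collision tests, with no glGet() round-trip.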