I wrote an OpenGL application that draws an image (as a texture) on a square, along with a GLSL shader that performs some custom lighting operations on that image (mainly diffuse/specular computation), so you can see nice specular highlights as you move the image relative to the defined lights. It works well on my nVidia 6600 (Win32), but I thought I would also try it on my laptop's GPU, a Radeon 9200 (OS X).
So I compiled the code on the laptop and gave it a try, and to my surprise it not only compiled without errors but also ran without crashing. However, the result was not what I expected. None of the complex lighting computations showed up in the laptop demo; it only performed basic texture mapping, with the image mapped onto the square. There were no specular highlights at all, as if the fragment shader were being ignored entirely. I continued testing and wrote the simplest possible fragment shader, i.e. "gl_FragColor.x = 0.5;" and similarly for y and z. Still nothing: the square kept displaying the image instead of turning a uniform gray. It therefore appears that the fragment shader is completely ignored and the fixed functionality is used instead.
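For reference, here is the trivial test shader written out in full (assuming GLSL 1.10-style built-ins, which is what my code targets); it should paint every fragment a uniform gray regardless of the bound texture:

```glsl
// Trivial fragment shader: outputs a constant gray,
// ignoring the texture and all lighting entirely.
void main()
{
    gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);
}
```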
To be honest, I did not really expect my program to work on my laptop's video card, but since it compiled successfully (both the CPU and GPU compilation steps produced no errors or warnings), I was expecting a different result, or at least a warning of some sort to let me know it would probably fail. By the way, the "GL_ARB_shading_language_100" and "GL_ARB_shader_objects" extensions are not supported by my laptop's GPU. Is this normal behaviour, or do I have some other problem?