performance problems

Hi all, GLSL gurus!

I have a little problem with my GLSL shaders. They seem to run fine on a wide range of gfx boards, but I’m experiencing strange framerate drops: on some boards (not the latest ones; an X1600, for example), simply turning the shaders on drops the framerate to 1-2 fps, even though they always compile correctly (no errors, no warnings) and GL reports that they will run in hardware. There are no visible artifacts; the framerate just drops to unacceptable values.
Is this normal? Is it a common pitfall? Or is it a problem with the GL drivers (and their real-time GLSL compilers)? If I write a program that uses GLSL, should I expect its performance to vary more or less randomly across the various hardware out there, or am I the one doing something wrong in the code (in other words: how reliable is GLSL as a shader technology)?

thanx in advance for your time!

Are you sure there are no software fallbacks? E.g. 16-bit floating-point filtering on a GF6200 runs in software; on a GF6800, however, it’s in hardware. Then again, 32-bit floating-point filtering on a GF6800 is in software, while it’s in hardware on GF7xxx, and so on…
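
For example, a texture set up roughly like this (a hypothetical sketch, assuming ARB_texture_float and a GLEW-style loader) would be filtered in software on a GF6200 but in hardware on a GF6800:

    #include <GL/glew.h>

    /* Hypothetical fp16 texture setup; GL_RGBA16F_ARB comes from ARB_texture_float. */
    GLuint make_fp16_texture(void)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 256, 256, 0,
                     GL_RGBA, GL_FLOAT, NULL);
        /* LINEAR filtering on a float format is what triggers the fallback
           on hardware that can't filter fp16 textures. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }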

N.

First of all, thank you for answering me!

Well, the “fun” part of the story is that, on one of the boards I’m having problems with, if I print the compilation log it reports the string “GL program compiled/linked ok. It will run in hardware”. So I’d assume it’s not a software fallback…
However, I’m not using any “strange” formats: no floating point at all (standard 24-bit RGB textures, the standard framebuffer for output, no FBO, and so on), and the only shadow texture I use is grabbed directly from the framebuffer (again, not from an FBO), so I don’t think it could be a software fallback. I can post a couple of my shaders here, just to show that they are really basic and don’t use any “advanced” features…
Is there a way to know whether a GLSL shader is running in software or not?
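
Right now the only thing I can think of is scanning the info log for a hint, roughly like this (a rough sketch of what I’m doing; the “software”/“hardware” wording in the log is driver-specific, so the string check is just a heuristic):

    #include <stdio.h>
    #include <string.h>
    #include <GL/glew.h>   /* or whatever provides the GL 2.0 entry points */

    /* prog is an already-linked GLSL program object */
    void check_program(GLuint prog)
    {
        GLint linked = 0;
        char  log[2048] = { 0 };

        glGetProgramiv(prog, GL_LINK_STATUS, &linked);
        glGetProgramInfoLog(prog, sizeof(log), NULL, log);
        printf("link status: %d\nlog: %s\n", linked, log);

        /* Some drivers mention software/hardware in the log; nothing in the
           spec guarantees it, so this is only a hint. */
        if (strstr(log, "software") || strstr(log, "SOFTWARE"))
            printf("-> looks like a software fallback\n");
    }

But as far as I know there’s no official query for “will run in hardware”, so I’d be happy to hear about a better way.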

Oh, well, btw, just another question: would it be a good idea to rewrite my shaders using the low-level interface instead of GLSL (ARB_vertex_program etc.)? Would such shaders behave more consistently, from a performance point of view (no more real-time high-level compiling required, only assembly)? Or is that the “past”, probably no longer supported in the future, and I should stick with the high-level GLSL interface?

When you say ‘one of the boards’, is that particular board some kind of low-end model?

Can you pinpoint the exact problem, e.g. see if it still exhibits the performance drop if you disable shadow mapping, etc.?

You can always rewrite your shaders in assembly, but I wouldn’t recommend it. It will allow you to perform some optimizations manually if you think they’re hard for a shader compiler to spot. The drawback is that you have to rewrite them for next-gen hardware, which may support a larger ALU instruction set.

N.

Yes, it is. I don’t remember the exact specs right now, but it’s a low-end ATi Radeon model…

Can you pinpoint the exact problem, e.g. see if it still exhibits the performance drop if you disable shadow mapping, etc.?

It seems that the simple fact of turning the shaders on drops the framerate, even if I write a trivial test program with a vertex shader that just does ftransform() and a fragment shader that outputs a constant color.
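
Something like this, just to give an idea of the trivial test case (shader sources written as C strings, roughly what I’m testing with):

    /* Vertex shader: only ftransform(); fragment shader: constant colour. */
    static const char *vs_src =
        "void main()\n"
        "{\n"
        "    gl_Position = ftransform();\n"
        "}\n";

    static const char *fs_src =
        "void main()\n"
        "{\n"
        "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"
        "}\n";

Even with just that, the framerate collapses on that board as soon as the shaders are turned on.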

You can always rewrite your shaders in assembly, but I wouldn’t recommend it. It will allow you to perform some optimizations manually if you think they’re hard for a shader compiler to spot. The drawback is that you have to rewrite them for next-gen hardware, which may support a larger ALU instruction set.

Because next-gen hardware will have an incompatible instruction set? If it’s simply that a larger instruction set becomes available, I don’t see a problem here: the old instructions will still be there, and old shaders will still work…
The real question is whether this low-level approach will still be available in the future, or whether ARB_vertex_program / ARB_fragment_program will be removed in future versions of OpenGL, now that there is a better approach (GLSL)…

True, they will work. I was actually referring to the performance aspect: the new hardware won’t be able to make use of the additional instructions. Hypothetically, suppose you have written a shader for some old architecture with MUL and ADD instructions, and the new generation has an additional MAD instruction. Recompiling the high-level program would make use of the MAD instruction, while the assembly would still use the MUL and ADD separately…
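
A contrived illustration (ARB-style assembly written as C string constants; pretend MAD only exists on the newer generation):

    /* Hand-written version: two instructions, and it stays two instructions
       on the newer hardware as well. */
    static const char *fp_hand_written =
        "!!ARBfp1.0\n"
        "TEMP t;\n"
        "MUL t, fragment.color, program.local[0];\n"
        "ADD result.color, t, program.local[1];\n"
        "END\n";

    /* What a compiler targeting the newer hardware could emit instead:
       one MAD doing the same multiply-add. */
    static const char *fp_recompiled =
        "!!ARBfp1.0\n"
        "MAD result.color, fragment.color, program.local[0], program.local[1];\n"
        "END\n";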

Actually, the compiler translates your high-level code into assembly, so it’ll always be supported. Cg lets you choose the profile you want it compiled for, e.g. arbfp1 compiles for architectures with ARB_fragment_program support, while fp40 compiles for NV_fragment_program2, etc.
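
With the Cg runtime that choice looks roughly like this (a sketch from memory, assuming the entry function is called main; check the Cg docs for the exact calls):

    #include <Cg/cg.h>
    #include <Cg/cgGL.h>

    /* Compile the same source for a specific profile, e.g. "arbfp1" or "fp40". */
    CGprogram compile_for(CGcontext ctx, const char *file, const char *profile_name)
    {
        CGprofile profile = cgGetProfile(profile_name);
        CGprogram prog = cgCreateProgramFromFile(ctx, CG_SOURCE, file,
                                                 profile, "main", NULL);
        cgGLLoadProgram(prog);
        return prog;
    }

The offline cgc compiler takes the same profile names via its -profile option, which is handy if you just want to look at the generated assembly.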

N.

What formats do you use for vertex attributes?
Do you enable stippling, wide lines (glLineWidth with a value greater than 1) or line smoothing (GL_LINE_SMOOTH)?

Also check alpha test, depth bias, wide points and glPolygonMode().

This will not catch software fallbacks that may happen due to interactions between your shader and various OpenGL states, such as the ones mentioned above.
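
If in doubt, you can temporarily force the “cheap” settings for all of those states and see whether the framerate comes back, something along these lines (a quick test sketch, assuming you don’t actually need any of those features):

    #include <GL/gl.h>

    /* Reset the usual fallback suspects to their inexpensive defaults, for testing. */
    void force_fast_state(void)
    {
        glDisable(GL_LINE_STIPPLE);
        glDisable(GL_POLYGON_STIPPLE);
        glDisable(GL_LINE_SMOOTH);
        glDisable(GL_ALPHA_TEST);
        glDisable(GL_POLYGON_OFFSET_FILL);          /* depth bias off */
        glLineWidth(1.0f);                          /* no wide lines */
        glPointSize(1.0f);                          /* no wide points */
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);  /* no line/point polygon modes */
    }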

First of all, thank you all for your kind answers!

Second, the problems are gone! The latest drivers from ATi completely solved the performance issues! Without changing a single line of my code, everything now works flawlessly on the ATi boards I was having problems with!

Definitely a driver problem! At this point, I’ll stick with my shaders and GLSL!
