Scaling Window/Viewport Causes Severe Slowdown

I have a weird problem that has been occurring for quite a while, and I haven’t found any answers. When the window/viewport is scaled up in size, the application suddenly slows down almost to a complete standstill. CPU usage climbs to 90% or more and stays there until you scale back down to the original size or smaller.

We are using the 9755 Linux driver (as well as a few others), a GeForce FX5200 card, and Kernel 2.6.

Has anyone seen this issue before?

What is the “9755 Linux driver”? Is it a Mesa driver version?
I don’t really have the answer to your question, but it looks like OpenGL falls back to software rendering when you resize the window… Do you use framebuffer objects? Maybe you are trying to render to NPOT textures and the current driver does not support it.

Sorry, I meant the 1.0-9755 driver provided by nVidia. And no, we are not using any framebuffer objects. The software we are running uses the nVidia texture rectangle extension for NPOT textures. Note that this problem doesn’t exist on our systems that have newer PCIe nVidia cards with newer drivers installed.

If this is a driver issue then I’m surprised that no one else has complained about it. I can’t find any info anywhere on this happening.

The GeForce FX5200 is getting old, and texture rectangle is a rather obscure path.
Is it any better with a regular GL_TEXTURE_2D texture with NPOT dimensions? I don’t know if your video card supports it, but it might be worth a try.

The GeForce FX5200 doesn’t support NPOT textures without the extension, so that’s a no-go.
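For anyone following along: one way to make the texture-target choice robust is to check the extension string at startup. The extension names below are the real ones; the `has_extension` helper itself is just my own sketch (a plain `strstr` is not enough, since one extension name can be a prefix of another):

```c
#include <string.h>

/* Return non-zero if `name` appears as a complete token in the
 * space-separated extension list returned by
 * glGetString(GL_EXTENSIONS). */
static int has_extension(const char *ext_list, const char *name)
{
    size_t len = strlen(name);
    const char *p = ext_list;

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == ext_list) || (p[-1] == ' ');
        int ends_ok   = (p[len] == ' ') || (p[len] == '\0');
        if (starts_ok && ends_ok)
            return 1;           /* found as a whole token */
        p += len;               /* partial match; keep scanning */
    }
    return 0;
}
```

At init you would then pick GL_TEXTURE_RECTANGLE_NV when `has_extension(exts, "GL_NV_texture_rectangle")` succeeds, and only use NPOT GL_TEXTURE_2D if "GL_ARB_texture_non_power_of_two" is present (it won’t be on an FX5200).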

Maybe the slowdown only shows up because the window is only redrawn while it is being resized. If the window were redrawn constantly, I am pretty sure the program would be slow all the time.

No, the window is redrawn at a rate of 30 fps both before and after scaling. It works fine before scaling and becomes a slow CPU hog after. Once scaled back to the original size, it continues as normal again.

Your card just doesn’t have enough graphics memory.

Your textures, buffers, etc. fit into video memory when the window, and thus the buffers, are small.

With a bigger window, and thus bigger buffers, something has to be placed in system memory instead, slowing down your application.
Highly doubtful. The original problem occurs on a card with 128MB of graphics RAM, and it does not go away when I use a 256MB card.

How big does the window have to be for this to happen? Most cards have limits on the window size they can render to. If the window is too large, the driver has to go to some heroic efforts to support it. The method we’re working on for the open-source drivers is to render to multiple off-screen buffers, then copy the individual sub-windows to the real window. This requires multiple render passes (one per sub-window) and extra data copies. If the Nvidia driver is doing something like this, it would explain the slowdown.

However, I think the GeForce 5 cards all have native support for at least 2048x2048 windows.
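For what it’s worth, the sub-window scheme described above comes down to simple tiling arithmetic. A minimal sketch, assuming a 2048-per-side render limit (the `tile_t` struct and constant are my own, not anything from the NVIDIA driver):

```c
#define MAX_RENDER_DIM 2048  /* assumed per-render-target size limit */

typedef struct { int x, y, w, h; } tile_t;

/* Split a w*h window into tiles no larger than MAX_RENDER_DIM on a
 * side, writing up to max_out of them into `out` and returning the
 * count.  Each tile would be rendered off-screen and then copied to
 * its sub-rectangle of the real window. */
static int compute_tiles(int w, int h, tile_t *out, int max_out)
{
    int n = 0;
    for (int y = 0; y < h; y += MAX_RENDER_DIM)
        for (int x = 0; x < w; x += MAX_RENDER_DIM) {
            if (n >= max_out)
                return n;
            out[n].x = x;
            out[n].y = y;
            out[n].w = (w - x < MAX_RENDER_DIM) ? w - x : MAX_RENDER_DIM;
            out[n].h = (h - y < MAX_RENDER_DIM) ? h - y : MAX_RENDER_DIM;
            n++;
        }
    return n;
}
```

Note that an 800x600 window fits in a single tile under this limit, so if the driver were tiling, the slowdown should not appear at such small sizes.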

Not large at all. It happens on an 800 x 600 window that we drag just slightly larger.