Multiple Window Weirdness

I have been working on an application that also uses a few GL windows, pbuffers, and lots of textures, and what I have found is that the driver falls back to what appears to be software rendering when VRAM fills up.

At least this is my best guess. It happens on a per-window basis for me, and I notice it immediately, because rendering gets dramatically slower.

The reason I believe it falls back to software is that I always set __GL_FSAA_MODE to get nicely antialiased rendering, but whenever these windows become really slow, they also lose the antialiasing.

Has anyone else noticed the same? It always seems to happen with dynamically created windows.
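
For what it's worth, a rough way I check for the fallback at runtime - just a sketch, with draw_scene() standing in for the real per-window rendering code: time a repaint around glFinish() and compare it against your normal frame time. A window that has dropped to software rendering repaints an order of magnitude slower.

```c
#include <sys/time.h>
#include <GL/gl.h>

/* Time one repaint in milliseconds. Assumes a GL context is
 * already current; draw_scene() stands in for the real
 * per-window rendering code. */
static double timed_frame(void (*draw_scene)(void))
{
    struct timeval t0, t1;

    gettimeofday(&t0, NULL);
    draw_scene();
    glFinish();   /* wait until rendering has actually completed */
    gettimeofday(&t1, NULL);

    return (t1.tv_sec  - t0.tv_sec)  * 1000.0 +
           (t1.tv_usec - t0.tv_usec) / 1000.0;
}

/* A frame that normally takes a few milliseconds but suddenly
 * takes ~100 ms in one particular window suggests that window
 * has lost hardware acceleration. */
```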

  • Torgeir :slight_smile:

Originally posted by tsewell:
FYI:

I was able to get single-buffering to work (I wasn’t creating the window correctly). It helps a lot, but it seems to point to the same ‘I’m filling up VRAM’ theory. In single-buffer mode I can open just over twice the number of windows I could with double-buffering before render time jumps drastically. Combined with smaller GL windows (just the size of the borders), this is sufficient to draw many windows on the screen without a problem (over 100 by my estimate). Thanks for all your help, zen.

weston

Interesting, but I was wondering why VRAM fills up. I’m not sure whether GL puts the front and back buffers in VRAM, but that seems to be the case. Therefore small border-windows should do the trick. Just be sure to share the textures among contexts (see the sketch below). Also, regarding drop-shadows, can’t you just render them on the root window? I mean, you wouldn’t want them on the other windows anyway (although that would be more realistic, it might piss a few people off).
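
For reference, here is roughly what that looks like at the GLX level - a minimal sketch, and the single-buffered attribute list is just one possible choice: a single-buffered visual is requested by omitting GLX_DOUBLEBUFFER, and textures are shared by passing the first context as the share-list argument of glXCreateContext.

```c
#include <GL/glx.h>

void create_sharing_contexts(Display *dpy)
{
    /* Single-buffered RGBA visual: just leave GLX_DOUBLEBUFFER
     * out of the attribute list. */
    int attribs[] = { GLX_RGBA,
                      GLX_RED_SIZE, 8, GLX_GREEN_SIZE, 8, GLX_BLUE_SIZE, 8,
                      None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    /* The first context owns the texture objects. */
    GLXContext first = glXCreateContext(dpy, vi, NULL, True);

    /* Passing `first` as the share-list argument makes texture
     * objects (and display lists) created in either context
     * visible to both, so each texture lives in VRAM only once. */
    GLXContext second = glXCreateContext(dpy, vi, first, True);

    (void)second;
    XFree(vi);
}
```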

Torgeir:

I think you are experiencing the same thing I am. When I have too many windows, it indeed appears that software is rendering my last window (and only my last window, not every window that needs repainting). It is obvious from watching a processor usage monitor that my CPU is working very hard when this window needs repainting. I’m using a GeForce2 with the 1.0-3123 version of the NVIDIA drivers; how about you?

zen:

Yes, I’m actually just using one context, so sharing is implicit (it seems to work very well, and I don’t have to pass around a context for each app window). I’m sure some users will be very pissed off, but they can disable it easily. I’m actually going to implement it simply by allowing alpha-blending for the borders, so shadows will just be a side effect: very easy to implement (assuming I can get the XCopyArea issue resolved) and very easy to turn off - people can just use a different theme or edit the default one. I didn’t intend to use giant shadows, just 10 pixels or so to give a bit of a floating effect (a la OS X).
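
As an aside, the one-context approach maps onto GLX quite directly - a sketch, with `wins`/`nwins` being hypothetical application state: the same context is made current against each drawable in turn, so every window sees the same texture objects without any explicit sharing.

```c
#include <GL/glx.h>

/* One context, many windows: make the same context current on
 * each drawable before repainting it. Texture objects belong to
 * the context, so they are shared implicitly. All windows must
 * have been created with the same visual as the context. */
void repaint_all(Display *dpy, GLXContext ctx, Window *wins, int nwins)
{
    int i;
    for (i = 0; i < nwins; i++) {
        glXMakeCurrent(dpy, wins[i], ctx);
        /* ... draw this window's decorations here ... */
        glXSwapBuffers(dpy, wins[i]);  /* glFlush() instead for single-buffered */
    }
}
```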

Hi,

Torgeir and tsewell, I’m experiencing the same problems as you. I looked into this a while ago, and it seems as if the slowdown (the switch to software rendering) happens when the NVIDIA driver fails to allocate a rendering surface in video memory.

In my setup, I had a dynamic number of processes (each with a number of OpenGL windows) and at some point, OpenGL was no longer able to hardware accelerate new processes.

I know that I was oversubscribing video memory, but I was hoping that the driver would page out textures from previously started processes instead of falling back to software rendering.

The same problem can also be triggered by a much simpler setup:

  1. Open a single 640x480 OpenGL window
  2. Load textures that take up as much space as you have video memory, i.e. if you have 64MB of video memory, load 64MB of textures

At this point everything is fine, although you may be transferring as much as 64MB of texture data (LRU worst case) over the AGP bus each time you render a single frame.

  3. Resize the window to, say, 1024x768.

Now the rendering surface has to be reallocated - but the allocation fails as there is no video memory available. As a result the driver falls back to software rendering.
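
If anyone wants to reproduce step 2 above, here is a sketch of the texture-loading part (the sizes assume a 64MB board): one 1024x1024 RGBA texture is 1024 * 1024 * 4 bytes = 4MB, so sixteen of them account for the full 64MB.

```c
#include <stdlib.h>
#include <GL/gl.h>

/* Load roughly 64MB of textures:
 * 16 textures x 1024x1024 texels x 4 bytes (RGBA) = 64MB.
 * Assumes a current GL context - e.g. the 640x480 window
 * from step 1. */
void fill_vram_with_textures(void)
{
    enum { NUM_TEX = 16, DIM = 1024 };
    GLuint tex[NUM_TEX];
    GLubyte *pixels = malloc(DIM * DIM * 4);  /* dummy texel data */
    int i;

    glGenTextures(NUM_TEX, tex);
    for (i = 0; i < NUM_TEX; i++) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, DIM, DIM, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
    free(pixels);
    /* Note: the driver may defer the actual VRAM upload until a
     * texture is first used, so you may also need to draw one
     * textured primitive per texture to really fill video memory. */
}
```

After that, resizing the window (step 3) forces the back buffer to be reallocated, and you can watch CPU usage jump as the driver drops to software rendering.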

You can avoid this scenario if you have a Quadro board by enabling “Unified Back Buffers”. You might also get around the problem by using an older driver - as far as I recall, the allocation of rendering surfaces was slightly different in the 1.0-15.41 driver…
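
For reference, Unified Back Buffers is turned on in the Device section of the X config - this is from memory, so double-check the driver README, and it only has an effect on Quadro hardware:

```
Section "Device"
    Identifier "NVIDIA Card"        # hypothetical identifier
    Driver     "nvidia"
    Option     "UBB" "1"            # Unified Back Buffer (Quadro only)
EndSection
```

As far as I understand, UBB makes all windows share one screen-sized back/depth buffer instead of allocating one per window, which is why it sidesteps this particular allocation failure.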

I would be interested in getting a reasonable explanation from NVIDIA for choosing to fall back to software rendering.

Good luck,

Niels

Great, thanks a lot for all the info.

Hi,

I just tried an example that triggers the “fallback to software rendering” bug with the new 1.0-4191 drivers… The problem is still there with the new drivers.

– Niels
