I am working on a Win32 application that uses OpenGL instead of GDI as its 2D rendering engine. It can have multiple windows open at a time, and hence multiple rendering contexts. Each rendering context needs at least three buffers (front, back, depth), so video memory (VMem) is quickly exhausted on a 32/64 MB card at 1280x1024 resolution. Once VMem is used up, OpenGL performance becomes very poor and unpredictable. Alternatively, if all my rendering contexts are software-based (PFD_DRAW_TO_BITMAP), I am unlikely to run out of system memory; performance is slower than with hardware acceleration, but at least it is consistent and predictable.
So I came up with the idea of mixing software-based GL contexts with hardware-accelerated ones: the focused window gets a hardware-based context, and every non-focused window gets a software-based one. Doing so, I hoped to conserve VMem while still getting good OpenGL performance where it matters.
However, in my experiments I noticed that I still run out of VMem, even though only one hardware-based GL context exists at any given time.
I start by creating a software-based GL context for each window. When an MDI child window becomes activated (focused), I switch contexts by deleting the old software-based DC/RC pair and creating a new hardware-based DC/RC pair.
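The switch I perform looks roughly like the sketch below. All struct and function names are my own illustration, and the software path is simplified: a real PFD_DRAW_TO_BITMAP context would render into a memory DC with a DIB section selected, not the window DC as shown here.

```cpp
// Sketch of the per-window context switch (Win32/WGL). Names are illustrative.
#include <windows.h>
#include <GL/gl.h>

struct GLWindow {
    HWND  hwnd;
    HDC   hdc;
    HGLRC hglrc;
};

// Create a rendering context for the window; 'accelerated' picks the path.
static bool CreateContext(GLWindow& w, bool accelerated)
{
    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_SUPPORT_OPENGL |
                     (accelerated ? (PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER)
                                  : PFD_DRAW_TO_BITMAP);
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 16;

    w.hdc = GetDC(w.hwnd);
    int pf = ChoosePixelFormat(w.hdc, &pfd);
    // Note: per MSDN, a window's pixel format can only be set once,
    // so repeated SetPixelFormat calls on the same window will fail.
    if (!pf || !SetPixelFormat(w.hdc, pf, &pfd))
        return false;
    w.hglrc = wglCreateContext(w.hdc);
    return w.hglrc != nullptr;
}

// On WM_MDIACTIVATE: tear down the old DC/RC pair, rebuild on the other path.
static void SwitchContext(GLWindow& w, bool nowFocused)
{
    wglMakeCurrent(nullptr, nullptr);
    wglDeleteContext(w.hglrc);
    ReleaseDC(w.hwnd, w.hdc);
    if (CreateContext(w, /*accelerated=*/nowFocused))
        wglMakeCurrent(w.hdc, w.hglrc);
}
```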
I am puzzled that this approach doesn't help: VMem runs out at the same point as when I create all-hardware rendering contexts.
Any ideas? I don't think it's a memory leak or a driver-dependent issue.