Multiple Window Weirdness

Okay. So I’m writing a window manager that draws window borders using opengl. I’ve pretty much got it up and running, but I’m having an odd problem. I’m using textures to decorate the windows, and all the windows use the same context, created when the window manager starts up. Everything looks great until I open too many windows, at which point things slow down quite a bit (i.e. resizing/moving windows starts to tax the CPU).

The problem seems to be tied to how large the windows are. For instance, with xterm windows about 1/5th of the screen in size, I see problems on the 11th one. But if I open only 10 xterm windows and resize the last one to be larger, the problem also shows itself.

Does anyone have any suggestions? I’m not using any of the context sharing/copying calls, but that doesn’t seem necessary since I’m just using the same context for all windows. I’m open to suggestions about a different approach to my design as well. I’m really trying to speed up decoration rendering and allow for alpha blending of border edges, but I’m not sure how to implement the latter just yet.

TIA
weston

Also, I forgot to mention that if I don’t actually use the textures but just use glColor4ub to draw the border, then I don’t see the same issue.

I suppose you’re not fill-limited, so the only thing I can think of is that you somehow (say, by uploading a different texture for every different xterm) manage to fill the VRAM of your card and therefore get slower texturing. But I assume this is the first thing you thought of too…

May I see your code? May I? Please, please, please?

I’d love to have a GL rendered window manager!

Sure. I’m probably going to put it up on SourceForge pretty soon. Right now it’s in really bad shape, meaning that it doesn’t work except to the extent that it lets me pop up xterms to test border rendering. I could sure use some help too. This is the first big opengl project I’ve done, as well as my first xlib application. I’ll be sure to post here when I put it up on SourceForge. Glad you’re interested.

weston

Well, my first theory was incorrect. I am now using glDrawPixels and I get the same performance decrease when the number/size of the windows gets too large. Not to mention the fact that rendering is a bit slower using glDrawPixels compared to using textures.

weston

I should be loading each texture only once. This is what I was afraid of, but how can I monitor the VRAM on my card during execution? I thought I was doing everything correctly (i.e. using the same context for all windows), but perhaps not.

At this point I probably hold the record for most replies to my own discussion post. But maybe someone else out there is as stupid as I am, and will find this informative.

I’ve discovered what’s wrong. I’m dumb. I wrote a test app that just displays N windows of size WxH and does nothing but paint a rectangle of solid color into each window’s entirety. Same problem exists. And when I turn the dial up to windows of size 800x800, I can only paint 3 before paints become orders of magnitude slower.

And the obvious answer is that the video memory is being filled up, and so the context for each window has to be cached out to system memory while another is being painted. I was stupidly trying to paint borders on application windows (my app is a window manager) by building a parent window as big as the app window but hidden mostly by the app. What I wasn’t realizing was that the video hardware was reserving memory for all that unrendered area. Anywho, I feel stupid for bringing this whole debacle to the forum, but I figured I could at least document the answer in case anyone is as daft as I am.

So you were creating multiple contexts. I was under the impression you used only one. I believe you can get the handle of the root window. You can then get a single context, load an orthographic projection, and everything should work OK (besides being simpler and cleaner).

BTW, regarding hw-accelerated WMs: Enlightenment 17 is going to use OpenGL too, but there’s still no stable release.

Originally posted by zen:
BTW, regarding hw-accelerated WMs: Enlightenment 17 is going to use OpenGL too, but there’s still no stable release.

And knowing Enlightenment, there won’t be one any time soon. (Although I hope they accomplish their task soon; it sounds just too pretty.)

Yeah, I am actually loading a single context using the root window. But I still get thrashing when the number of windows gets too high.

Actually, I’ve looked at E17 optimistically, but the web page seemed to imply that it may never be released in my lifetime. Plus, I’ve never been a huge fan of Enlightenment; it seems really bloated and I have a pretty slow machine.

Thanks for the help.

Enlightenment isn’t bloated… it’s just graphically overdeveloped. Seriously though, you can’t call it bloated. It’s as simple (well, more or less) as it gets. Just a graphical shell. There are some cycle-eating features but you can turn ’em off. I take it you haven’t tried KDE3 then?
On to the point though. I would be very surprised if you could actually manage to fill up VRAM with window decorations. I take it that all windows have the same decorations, so the number of windows shouldn’t matter. This looks more like a fill problem. How exactly do you do the drawing, and when? Redrawing the whole screen all the time, or even whenever something gets moved, might be slow, especially at big resolutions.

Perhaps it was the graphical stuff that was taxing my poor old (even slower) CPU way back when. I have not yet tried KDE3, but I hear it’s pretty slick. There are a couple of things I want to implement, though, that I’m not aware of any of them doing. Plus, I’m learning a lot about xlib and opengl.

No, I actually wrote a test app that grabbed a context from the root, then started opening windows. And around the third or fourth nearly-full-screen window, the drawing time jumped from on the order of 10^3 usecs to whole seconds. Meaning the first few windows popped up seemingly instantly, but the last took 2-3 seconds to paint. I guess the best way to describe how I’m painting is to show you:

void PaintWindow(Window win, int width, int height)
{
    glXMakeCurrent(display, win, cx);

    /* Set the viewport size and use ortho mode */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, 0.0f, 3.0f);
    glMatrixMode(GL_MODELVIEW);

    /* Cull that back face */
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);

    /* Jump to the origin and translate just a bit */
    glLoadIdentity();
    glTranslated(0, 0, -2);

    /***********************************/
    /* Start the drawing already       */
    /***********************************/
    glBegin(GL_QUADS);

    glColor4f(0.75f, 0.0f, 0.75f, 0.0f);

    /* Pretend we’re just painting top/bottom borders */
    glVertex3i(0, 0, 2);
    glVertex3i(width, 0, 2);
    glVertex3i(width, height, 2);
    glVertex3i(0, height, 2);

    glEnd();

    glXSwapBuffers(display, win);
    return;
}

This function is called by my ‘Expose’ handler, and I am timing it by calling gettimeofday() just before and just after the call to PaintWindow(). The cx used was created from the root. And yes, I get the same problem if I just draw the rectangle to a width x 40 region of the window.

I knew that I wasn’t filling up video memory with just the decorations, but doesn’t using an opengl window that’s a certain size reserve a certain amount of memory, regardless of what part is rendered to?

I don’t see how it could be a fill problem; my card supports 150 Mpixels/sec (I think), and if I did the math right, all the drawing I’m doing should take far less than a second. Plus, shouldn’t I see a linear increase in the render time of all windows, instead of the last one in a series suddenly taking orders of magnitude longer? And I see a CPU hit, which shouldn’t happen if I’m just waiting around for fill, right?

Based on my assumptions, I’ve decided that I need to be drawing only (at least) four small windows, one for each border. I am pretty sure at this point that the textures (in my real application) are staying resident and are used by all windows.

thanks for your help.

This doesn’t make sense… So the last few calls of the same function take longer, huh? OK, try to find what part actually takes longer. Specifically, time the glXMakeCurrent and glXSwapBuffers calls. My guess is that one of these calls takes longer each time. My guess is also that it will be glXSwapBuffers, probably because in windowed mode swapbuffers can’t just exchange two pointers. It has to copy the backbuffer to the frontbuffer, since the window covers only a portion of the frontbuffer, whereas in fullscreen the window is the frontbuffer. I can’t explain why the problem appears after a while though. Can it be that you’re filling up VRAM somehow? I don’t have any experience with multiple contexts.
In general though, you don’t need double buffering since you don’t do any animations etc., I believe. Just do single buffering and draw to the frontbuffer immediately. This will eliminate the swapbuffers overhead, whatever that is.
Also make sure you request a suitable visual. For example, you don’t need a zbuffer or a stencil buffer; no need to waste memory. Although I can’t remember if you can request to not get any…
Btw, what’s the need for the translation? Just adjust the quad’s coordinates.

Disclaimer
It is now almost 5 AM here so some or all of what I’ve just typed might not make much or any sense.

Way ahead of you, well probably not. But anyway, it’s the last glVertex() call that is the source of the slowdown. Here’s a dump of my log:

Painted window: 37748739 in 0 secs 1 usecs
Painted window: 37748740 in 0 secs 2 usecs
Painted window: 37748741 in 0 secs 3 usecs
Painted window: 37748742 in 0 secs 3 usecs
Painted window: 37748743 in 0 secs 3 usecs
Painted window: 37748744 in 2 secs 4294832980 usecs

The usecs field doesn’t quite make sense here; that would be 4294.83 seconds. I may be using the wrong conversion specifier in printf. But I can assure you things go smoothly until the last window, which is blank for ~2 seconds.

I didn’t think it made any sense either; that’s why I posted the topic. But regardless of the context situation, doesn’t the card have to keep the buffers for all drawables in VRAM, even if it doesn’t render anything to part of them? If I’m opening 6 windows at 800x800, that’s ~3.8 Mpixels. I have a 32MB card, so this would allow for ~8 bits of color info for all three color components. Am I making all this up, or does any of it make sense?

I agree that double buffering isn’t necessary, but I couldn’t get single buffering to work. That’s one of the wrinkles I need to iron out (is it possible that my card only supports double buffering?). I don’t think this is the problem though, since the slowdown is in the glVertex().

Yeah, I don’t think you can ask for no zbuffer or stencil buffer. At least there don’t appear to be any attributes for them in the glXChooseVisual() ref.

No reason for the translation at all, actually. I just accidentally left it in when I was tryin’ something. Thanks for pointing it out.

weston

You know, I could get around all of this by just drawing all borders to a fullscreen toplevel window, but I can’t think of a good way to have the app windows show up. I’m actually struggling with this because I want to do drop shadows behind windows. The only way I can come up with is to use XCopyArea and then paint the buffer into the opengl window. And of course, this just wouldn’t work (from a speed perspective) for all app windows on the screen, especially video applications. If you have any ideas on this subject, then the odd multi-window problem is a moot point.

First of all, regarding the single buffering, which might be the cause of your problems: try using single buffering but issue a glFlush() command instead of the glXSwapBuffers() one. That might help.
Also, I have the feeling that you’re doing this the wrong way. For example, when I create an opengl window using Enlightenment, I can see everything I draw in the window. That means that no portion of the window is covered by decorations, so I assume decorations are painted on the root window. So try getting a context for the root only and drawing all decorations around the windows, but on the root window, as textured quads for example. You can do drop shadows as well. Just use single buffering (get it to work first) and update the root window when something moves.

[edit]: I forgot: try looking at the sources of other (preferably small) window managers to see how they do things. The only difference should be GL vs. X rendering anyway.

[This message has been edited by zen (edited 12-14-2002).]

Okay, I’ll try glFlush() and see what happens.

Yeah, I have looked at other window managers like FVWM (sorry, I forgot to explain this part). Basically, when an application asks to be shown, we create a new window slightly larger than the app window (big enough for whatever the border sizes are all the way around). Then we reparent the app window into the new, slightly larger window so the app window is locked in place (with respect to some corner of the larger window), which makes moves very easy: we just move the larger window. But this also means that for a 500x600 application window I have an opengl window slightly larger than that behind it, and the only parts that you can see, and that are rendered to, are the exposed portions around the edges.

But if I draw on the root, I have the problem that the borders of the topmost of two overlapping windows don’t get painted over the bottom window (because they’re painted on the root, which is below all app windows). This would actually work great and make drop shadows uber-easy if no windows ever overlapped, but alas.

Of course, what I’d really like to do is paint a fullscreen window with a textured poly for each ‘parent’ window and paint the app windows into the middle of them. But there’s no ‘real-time’ way to do this.

Anyway, I’m off to try and get single-buffering to work. Thanks for all your input.

Yes, I thought of the root window approach’s problem earlier today. But if there’s no performance/memory penalty for many contexts, you could try setting up four windows surrounding the app window, one for each edge. Have a look at how Enlightenment does it.

FYI:

I was able to get single buffering to work (I wasn’t creating the window correctly). It helps a lot, but seems to point to the same ‘I’m filling up VRAM’ theory. In single-buffer mode I can open just over twice the number of windows I could with double buffering before render time jumps drastically. This, combined with using smaller gl windows (just the size of the borders), is sufficient to draw many windows on the screen without a problem (over 100 by my estimates). Thanks for all your help, zen.

weston