drawing my ui

I’m creating a user interface for one of my programs, and I want to draw it as efficiently as possible. The UI consists of a few layers:

  • screens: can be thought of as backgrounds in which you can draw anything you want. They are as big as the window.
  • panels: floating panels in which anything can be drawn. They are smaller than the window and can overlap.
  • buttons: can be placed in panels (not in screens, though).

Any of these layers (except screens) can be fully transparent, semi-transparent, or opaque.

I want to draw the GUI as efficiently as possible (I don’t want to redraw everything each frame). So if only a panel has to be redrawn, I don’t want to redraw the whole screen too. If only one button has to be redrawn, I don’t want to redraw the whole panel and screen either. This is not easy, though, when the buttons and panels are semi-transparent. Does anyone have an idea how I can solve this problem? It has to work on all graphics cards that support OpenGL (if possible).

Thanks in advance.

I think you can’t do that, you’ll have to redraw it each time :frowning:

Well, you can use glScissor to restrict drawing to a rectangular area of your viewport.

However, you will still be sending all the draw calls for the other GUI elements… To prevent this you can define something like an invalid rectangle that is updated each frame. Then you test whether some part of a specific screen (or whatever) lies inside this rectangle, and if not, you don’t draw it (and none of its child nodes either).
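The invalid-rectangle test boils down to a rectangle-overlap check. A minimal sketch in C (the `Rect` type and field names are my own, not from any particular library):

```c
#include <stdbool.h>

/* Hypothetical rect type: top-left corner plus size. */
typedef struct { int x, y, w, h; } Rect;

/* True if the two rectangles overlap, i.e. the element
   lies (partly) inside the invalid rect and needs redrawing. */
static bool rect_intersects(Rect a, Rect b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}
```

If `rect_intersects(element_bounds, invalid_rect)` is false, you skip the element and its entire subtree in one test.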

But what if I have to redraw a button?

Then I have to

  • redraw a small part of the screen (right under the button)
  • redraw a small part of the panel (more panels if they overlap).
  • redraw the button.

This workflow is almost impossible, especially if you have complicated drawings in the screen and/or panel.

Why do you think so?

You just have to set glScissor and the invalid rect to the bounding rect of the button.

If some part of the screen needs to be redrawn, you simply redraw it whole (glScissor will ensure that only the part inside the rect is actually touched). But you will not redraw all of the screen’s child elements (panels); you will only redraw the panels that have some part inside the invalid rect. For the panels’ child elements you do the same.
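One subtlety with glScissor is that it expects window coordinates with the origin at the bottom-left, while GUI code usually tracks rectangles from the top-left. A sketch of the conversion (the type and function names are mine, not from any library):

```c
/* Converts a top-left-origin UI rectangle into the bottom-left-origin
   coordinates that glScissor expects. */
typedef struct { int x, y, w, h; } ScissorRect;

static ScissorRect to_scissor(int x, int y, int w, int h, int window_h) {
    ScissorRect s = { x, window_h - (y + h), w, h };  /* flip the Y axis */
    return s;
}

/* Typical use around the redraw:
 *   ScissorRect s = to_scissor(btn.x, btn.y, btn.w, btn.h, window_h);
 *   glEnable(GL_SCISSOR_TEST);
 *   glScissor(s.x, s.y, s.w, s.h);
 *   ...redraw the affected screen/panels/button...
 *   glDisable(GL_SCISSOR_TEST);
 */
```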

I don’t think you can find an easier solution when you can have overlapping + transparent elements.

OK, maybe you are right, but if you redraw the whole screen using glScissor, you still send quite a lot of OpenGL calls (if the drawing is complicated). Or do you think this won’t matter?

No. I don’t know how your data is structured, but generally you call drawing functions only for the parts that are inside the invalid rect.

For example, in my engine almost every part of the GUI has its own element and is drawn with only a very few OpenGL calls. So when I’m redrawing, say, a window that has some part of it inside the invalid rectangle, I usually redraw only the window’s background and then check all the window’s child elements to see whether they are in that rect, and so on…
So if I need to redraw some button on the window, I would ideally call only two GL functions (one for the window’s background and a second for drawing the image of the button). The other child elements of the window would not be drawn, so no GL calls would be made for them.
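That recursive walk could be sketched like this; the `Element` tree is illustrative, and the `draw_calls` counter stands in for the real GL work (the background quad, a button’s textured quad, …):

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct { int x, y, w, h; } Rect;

/* Hypothetical GUI tree node: bounds plus an array of children. */
typedef struct Element {
    Rect bounds;
    struct Element *children;
    size_t child_count;
} Element;

static int draw_calls = 0;  /* stand-in for actual GL draw calls */

static bool overlaps(Rect a, Rect b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

static void draw_if_dirty(const Element *e, Rect invalid) {
    if (!overlaps(e->bounds, invalid))
        return;                   /* skip this element AND its children */
    draw_calls++;                 /* draw_self(e): e.g. its background  */
    for (size_t i = 0; i < e->child_count; i++)
        draw_if_dirty(&e->children[i], invalid);
}
```

With a window containing two buttons and an invalid rect covering only one of them, this issues exactly two draws: the window background and the one dirty button.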

I understand that, but I don’t want to always redraw the whole window’s background (I’m not making a game).

Take a look at Blender’s source. It’s a 3D modeller that does all its own UI stuff with OpenGL…

Yeah, Blender renders its UI directly to the front buffer, so there is no need to redraw everything for a button press.
It has problems with some broken GL implementations (some OS X versions), though.

I knew about Blender already (I’ve been using it for more than 5 years, btw), but I can’t use the front buffer for that purpose. The elements in my UI should be able to contain quite complex OpenGL scenes (the background is also considered a UI element).

Depending on the complexity, redrawing everything may not be that slow.

Otherwise, I think the best modern way is to render each complex scene (you call them screens) into a separate FBO, so these complex scenes end up in textures, and then only composite the screen textures/panels/buttons in real time.

That seems to be the only possible way if you want to layer transparent stuff.
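The caching logic behind this FBO approach could be sketched like this; the `Layer` type and counters are stand-ins I made up, with the real GL work noted in comments:

```c
#include <stdbool.h>

/* In real code, a "scene render" would bind the layer's FBO
   (glBindFramebuffer / glFramebufferTexture2D) and redraw the whole
   scene into its texture; a "composite" would just draw one textured,
   possibly alpha-blended quad per layer. */
typedef struct {
    bool dirty;        /* does the cached texture need re-rendering?  */
    int scene_renders; /* expensive: full scene drawn into the FBO    */
    int composites;    /* cheap: one textured quad per frame          */
} Layer;

static void frame(Layer *layers, int n) {
    for (int i = 0; i < n; i++) {
        if (layers[i].dirty) {           /* re-render only dirty layers */
            layers[i].scene_renders++;   /* ...into the layer's texture */
            layers[i].dirty = false;
        }
        layers[i].composites++;          /* blend the cached texture    */
    }
}
```

The expensive scene render happens only when a layer is marked dirty; every frame still composites all layers, but that is just a handful of blended quads, which keeps overlapping transparent layers correct.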

Compositor… that’s what I’d do, if I needed such a thing and I had a year to burn.

With .NET around, I doubt I’d go to this much trouble over UI. You can always render custom controls then let the OS do the grunt work.

Well, you asked about transparent stuff… it’s much easier to use the hardware in this case.

yeah I did a UI like this for a game editor I was working on, but I ended up tossing it. Too much time in maintenance and it never really felt “polished” enough. You can pull off some mighty slick effects though…