Overlay plane alternative(s)

Ideally I’d like to be able to use overlay planes in my application, but after searching these forums and the OpenGL newsgroup, I have learned that there is little support for this on most PC graphics cards. We have a couple of machines in our office that support overlay planes, but most do not – which confirms what I read.

So is there a good way to duplicate the functionality of overlay planes in OpenGL? I’m certainly aware of using XOR to draw non-destructively, but this doesn’t always produce the best results. I’m also familiar with KTX_buffer_region, which could be used to save the scene and then restore it as needed. However, I’m not sure whether this would be as responsive as I’d like.

What I need to be able to do is draw lines over a potentially complex scene without modifying the scene itself. The number of lines could be as few as one or as many as a thousand or more.

Is there any way to accomplish this other than the methods mentioned above?

Thanks in advance.

Render the scene, copy it into a texture, then draw a window-sized quad with that texture from then on… This lets you draw all your lines, or whatever else, at only the cost of drawing a window-sized quad instead of re-rendering the whole scene.
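Something like this rough sketch, in plain GL 1.x (winW/winH, texW/texH and the function names are placeholders; old cards will want texW/texH to be powers of two at least as big as the window):

#include <GL/gl.h>

/* Assumptions: winW/winH are the window size, texW/texH the smallest
   powers of two >= winW/winH. */
static GLuint sceneTex;

/* One-time setup: allocate the texture that will hold the scene. */
void createSceneTexture(int texW, int texH)
{
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texW, texH, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
}

/* After the scene has been rendered once: grab it from the back buffer. */
void copySceneToTexture(int winW, int winH)
{
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, winW, winH);
}

/* Every frame: restore the scene with one textured quad, then draw the lines. */
void drawSceneQuad(int winW, int winH, int texW, int texH)
{
    float s = (float)winW / texW, t = (float)winH / texH;

    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glOrtho(0, winW, 0, winH, -1, 1);
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2i(0,    0);
        glTexCoord2f(s,    0.0f); glVertex2i(winW, 0);
        glTexCoord2f(s,    t);    glVertex2i(winW, winH);
        glTexCoord2f(0.0f, t);    glVertex2i(0,    winH);
    glEnd();
    glDisable(GL_TEXTURE_2D);
    /* ...now draw the temporary lines on top. */
}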

The nice thing about textures is transparency. You can have multiple layers where parts are transparent, and render one layer on top of another.

With buffer region, I don’t think you can do this.

As for overlay and underlay planes… I think transparency is possible, but I have zero experience with them – current hardware being the reason.

PS: it’s GL_ARB_buffer_region now; KTX may be removed.

V-man

Thanks for the input thus far. I should have mentioned that I’m familiar with the texture quad method as well. The problem with this method is that if my application is taking up most of the screen and I’m rendering the entire scene in a textured quad, it can sometimes feel a little sluggish on some lower-end graphics cards. I also recall that each card has an upper limit on texture size – I don’t know whether in real life this is a factor or not (I’m not an expert with textures yet). But if I had to create multiple textures to handle the entire scene, then certainly the speed would suffer.
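For what it’s worth, I believe that per-card limit can be queried directly – something like this sketch (function name made up):

#include <GL/gl.h>

/* Returns the largest texture dimension the driver claims to support.
   If the window exceeds this in either direction, the copy-to-texture
   approach would have to tile the scene across several textures. */
GLint queryMaxTextureSize(void)
{
    GLint maxTexSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
    return maxTexSize;
}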

I’ve used the overlay planes on an SGI and they were perfect for this kind of application – I just want them for the PC too

Is there no other way to do what I want? Again, the temporary data will always be simple lines – no surfaces at all.

Strange that everyone always recommends the textured quad thing.
I’ve had quite decent results with glDrawPixels, as long as it’s exactly
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
All other type/format combinations are slow as hell.

This takes about 12-13 ms for a full 800x600 blit in windowed mode, with either 16-bit or 32-bit color depth; it doesn’t matter.

That’s on an ATI Radeon 1 (!!); I haven’t tested it on NVIDIA (all caps!) hardware yet. Is that so bad?
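In case it helps, the save/restore path I mean is basically this (just a sketch; winW/winH and the function names are mine):

#include <stdlib.h>
#include <GL/gl.h>

static GLubyte *savedScene;   /* winW * winH * 4 bytes */

/* Save the rendered scene once (reads from the current read buffer). */
void saveScene(int winW, int winH)
{
    savedScene = (GLubyte *)malloc((size_t)winW * winH * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, winW, winH, GL_RGBA, GL_UNSIGNED_BYTE, savedScene);
}

/* Restore it whenever the lines need to be redrawn over a clean scene. */
void restoreScene(int winW, int winH)
{
    glMatrixMode(GL_PROJECTION); glLoadIdentity();
    glOrtho(0, winW, 0, winH, -1, 1);
    glMatrixMode(GL_MODELVIEW);  glLoadIdentity();

    glDisable(GL_DEPTH_TEST);             /* stale depth values must not clip the blit */
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glRasterPos2i(0, 0);                  /* bottom-left corner of the window */
    glDrawPixels(winW, winH, GL_RGBA, GL_UNSIGNED_BYTE, savedScene);
}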

On a GeForce2 MX, DrawPixels does about 30 million pixels a second, but on a TNT2 it’s only about 0.5 million pixels a second!!

Oddly, my Vanta (a slow TNT2) gives better ReadPixels performance than my GeForce2 MX 200.

Originally posted by zed:
[b]On a GeForce2 MX, DrawPixels does about 30 million pixels a second, but on a TNT2 it’s only about 0.5 million pixels a second!!

Oddly, my Vanta (a slow TNT2) gives better ReadPixels performance than my GeForce2 MX 200.[/b]
Maybe the GeForce2 MX is more deeply pipelined, so the turnaround penalties are harsher? Just a guess, of course…

Not that ReadPixels performance would matter in this case anyway. It’s roughly half the speed of glDrawPixels for me, but that’s probably an apples-to-bananas comparison because it comes out of a working app that does other things as well. My DrawPixels figure is solid though.


Some more fun numbers:
glDrawPixels(800,600,GL_DEPTH_COMPONENT,whatever,…) takes about 640ms (!!) on my system and thus qualifies as junk.
Oddly enough,
glReadPixels(0,0,800,600,GL_DEPTH_COMPONENT,doesnt_matter_either,…) takes an astounding 200ms.

In conclusion, and to get back on topic:
if you want to restore a color buffer, glDrawPixels performs quite acceptably. glReadPixels might be a tad slower, but that doesn’t really matter since you usually won’t need it every frame.
If you want depth values, KTX/ARB_buffer_region looks like the cleanest way to do it and probably performs well.
(Anyone care to spill some performance numbers for depth textures?)
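For anyone who hasn’t touched it, the buffer_region path looks roughly like this on Windows – a sketch only, assuming the WGL_ARB_buffer_region flavour with entry points fetched via wglGetProcAddress (hdc, winW/winH and drawOverlayLines are placeholders):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* assumption: supplies the WGL_ARB_buffer_region typedefs/tokens */

extern void drawOverlayLines(void);   /* hypothetical helper that draws just the temporary lines */

static PFNWGLCREATEBUFFERREGIONARBPROC  pwglCreateBufferRegionARB;
static PFNWGLSAVEBUFFERREGIONARBPROC    pwglSaveBufferRegionARB;
static PFNWGLRESTOREBUFFERREGIONARBPROC pwglRestoreBufferRegionARB;
static HANDLE region;

/* Call once, after the GL context is current: fetch the entry points
   and create a region covering color + depth. */
void initBufferRegion(HDC hdc)
{
    pwglCreateBufferRegionARB  = (PFNWGLCREATEBUFFERREGIONARBPROC)
        wglGetProcAddress("wglCreateBufferRegionARB");
    pwglSaveBufferRegionARB    = (PFNWGLSAVEBUFFERREGIONARBPROC)
        wglGetProcAddress("wglSaveBufferRegionARB");
    pwglRestoreBufferRegionARB = (PFNWGLRESTOREBUFFERREGIONARBPROC)
        wglGetProcAddress("wglRestoreBufferRegionARB");

    region = pwglCreateBufferRegionARB(hdc, 0,
        WGL_BACK_COLOR_BUFFER_BIT_ARB | WGL_DEPTH_BUFFER_BIT_ARB);
}

/* Call after the scene has been rendered once. */
void saveSceneRegion(int winW, int winH)
{
    pwglSaveBufferRegionARB(region, 0, 0, winW, winH);
}

/* Call whenever the lines change: restore the saved scene, redraw only the lines. */
void repairAndDrawLines(HDC hdc, int winW, int winH)
{
    pwglRestoreBufferRegionARB(region, 0, 0, winW, winH, 0, 0);
    drawOverlayLines();
    SwapBuffers(hdc);
}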

Regarding DrawPixels performance:

A couple of years ago I wrote a test app which measured DrawPixels performance. I found that on my then-current card (an ATI Rage 128) it flew along on a 16bpp desktop and absolutely crawled on a 32bpp desktop (talking Win98 here) - about 20x slower.

Now the interesting bit. I tested the performance on a 3dfx Voodoo3 and an S3 mobile chipset - same issue (16bpp great, 32bpp about 20x slower). I even got some chaps on the OpenGL game-dev mailing list to test it out, and IIRC the NVIDIA cards of the day did exactly the same thing.

It was almost as if they had all used the same code base to start developing their OpenGL drivers and had all inherited the same problem with 32bpp DrawPixels.

I emailed dev support at 3dfx, NVIDIA, ATI and S3.

ATI responded with a fixed DLL.

3dfx responded with a stock ‘thank you for your email’ reply.

S3 responded with ‘oh yes, I’ve found the problem, but we won’t be optimising that path anytime soon - we don’t have the resources’.

Can’t remember what NVIDIA said.

Rob

I must dig out that test app and run it on my Geforce3.

Looks like my original question brought up the timeless debate of textures vs. glDrawPixels. I know this varies from system to system, but on one of the slowest graphics cards we have in-house (a Vanta/Vanta LT), the fastest way to repair the window is with KTX_buffer_region, followed by the textured quad method, with glDrawPixels being the slowest.

Back to my original question – is there any way other than XORing or saving the scene and then restoring it? I guess there isn’t a way to emulate overlay planes?

Draw your lines with blending enabled, with something like:
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
This will cause your lines to invert the framebuffer wherever they are drawn - drawing them again will effectively erase their effect.
Not nice, but it’s something.
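Roughly like this (a sketch; drawInvertedLines and its arguments are made up, and it assumes the projection for your lines is already set up):

#include <GL/gl.h>

/* Draw (or erase) the temporary lines by inverting the framebuffer.
   With a white source color, dst becomes (1 - dst); calling this again
   with the same endpoints inverts back and restores the scene exactly. */
void drawInvertedLines(const float *verts, int lineCount)
{
    int i;

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
    glColor3f(1.0f, 1.0f, 1.0f);          /* must be white for a pure invert */

    glBegin(GL_LINES);
    for (i = 0; i < 2 * lineCount; ++i)
        glVertex2fv(&verts[2 * i]);       /* verts holds x,y pairs, two vertices per line */
    glEnd();

    glDisable(GL_BLEND);
}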

knackered – Thanks for the suggestions you have provided. It sounds like the blending method is similar to XORing. I wonder whether it produces better results. I may give this one a try and see what happens.

I was hoping to be able to draw my lines in one solid color, rather than with the XORed effect you get – especially over a complex scene.