antialias rubberband disappears

I’ve found that on my GeForce 8600 GTS and other cards that support fullscreen (forced) antialiasing, my “XOR’ed and drawn to the front buffer” rubber band disappears. If I turn off forced antialiasing the rubber band works fine. Is this because hardware-implemented antialiasing is done in even increments: 2x, 4x, …? Whatever the cause, does anyone have a solution for this problem?

Well, rule number one: never draw to the front buffer on a double-buffered context. No idea what you mean by “XOR’ed and drawn to the front buffer”, though…

Isn’t drawing to the front buffer the standard way to do rubber banding (assuming overlay planes are not used)?

Regarding XOR’ed, this is from the OpenGL FAQ:

The other approach is to render with logic op enabled and set to XOR mode. Assuming you’re rendering into an RGBA window, your code needs to look like:

glEnable(GL_COLOR_LOGIC_OP); glLogicOp(GL_XOR); 

Set the color to white and render your lines. Where your lines are drawn, the contents of the framebuffer will be inverted. When you render the lines a second time, the contents of the framebuffer will be restored.

The logic op command for RGBA windows is only available with OpenGL 1.1. Under 1.0, you can only enable logic op in color index windows, and GL_LOGIC_OP is passed as the parameter to glEnable().
I’ve never tried the overlay plane approach, are overlay planes widely supported these days?

The main reason you’d want to draw in XOR mode to the front buffer is the buffers aren’t flipping as often as your rubber band can move. It’s legitimate to do so if you want to idle the 3D rendering until something 3D changes, though in most apps (apart from CAD), something is always moving. So most just redraw the scene at whatever FPS with a new rubberband in the back buffer. If your FPS is low, I can understand why you’d want to draw to the front buffer.

As for why the rubberband doesn’t show up, I can’t tell without knowing the AA method in more detail. You can try increasing your line thickness by 4 or 8x to see if it’s a subsampling issue (line may be there, but faint), turning alpha on and off, or using a simpler write-only drawing mode (XOR needs to read the color for that “pixel,” which may not exist as a single value) to test.

I wouldn’t bother with overlays for something like this.

Ok, sorry, I misunderstood your question…

I believe that it is wrong to render to the front buffer, because it may trigger problems with some modern windowing systems (like Vista). Besides, it is not “nice”.

About your problem: a multisampled buffer has to be resolved before displaying (converted to a single-sampled buffer). The resolve of the back buffer usually happens on SwapBuffers. It is very possible that the driver just ignores rendering to the front buffer in this mode, or that the front buffer is never getting resolved.

I would just draw the rubber band in the back buffer above the scene, but as Cyranose pointed out, if your FPS is low, this won’t look good.

Thanks for the replies. It is a CAD type app and I do want the 3D rendering to idle while rubber banding.

I thought changing the line thickness would work too, but no dice. Alpha does not affect it either. Interestingly, this occurs on Vista and XP and on both ATI and nVidia cards that support hardware antialiasing. This leads me to believe that it’s definitely something I’m doing wrong (or not understanding).

Also if I use SwapBuffers( … ) I can see the line but I get random ghost lines on an ATI/Vista setup. Here’s the code:

void CGLInvertedLine::invert( HWND hWnd, CDC* pDC, double ptSize )
{
	CRect wndRect;
	GetClientRect( hWnd, wndRect );

	bool doSwapBufferMethod = true;
	if( !doSwapBufferMethod )
		glDrawBuffer( GL_FRONT );

	// (the GL_COLOR_LOGIC_OP / GL_XOR state is assumed to be set up elsewhere)
	glOrtho( double(wndRect.left), double(wndRect.right), double(wndRect.bottom), double(wndRect.top), -1., 1. );
	glDisable( GL_DEPTH_TEST );
	glColor3f( 1.f, 1.f, 1.f );
	glPointSize( float(ptSize) );
	glLineWidth( 1.f );

	glBegin( GL_LINES );
		glVertex2s( short( m_curStartPos.x ), short( m_curStartPos.y ) );
		glVertex2s( short( m_curEndPos.x ), short( m_curEndPos.y ) );
	glEnd();

	// finish - restore state
	glEnable( GL_DEPTH_TEST );
	if( doSwapBufferMethod )
		SwapBuffers( pDC->GetSafeHdc() );
	glDrawBuffer( GL_BACK );

	// toggle the visibility state
	m_bIsOn = !m_bIsOn;
}

Boris, I don’t see anything obviously wrong with the code, though I don’t know what’s in those custom functions you call.

As for Zengar’s suggestion of rendering into the back buffer only, it’s probably safest anyway to make that work. If framerate is an issue, consider not clearing the scene and using the XOR to draw/undraw the rubber-band in the back buffer as you probably were doing in the front. On systems that don’t copy back to front but really swap, you’ll have to keep the two buffers in sync when flipping back and forth.

As for the random ghost lines, maybe a picture will help. And do try it without the XOR to see if it’s an AA buffer read problem…

On systems that don’t copy back to front but really swap, you’ll have to keep the two buffers in sync when flipping back and forth.
Actually, the only guaranteed way to make this work is to render to a buffer object and then copy that to the back buffer. By the GL spec, the back buffer’s contents after a swap are undefined. So it isn’t just flip vs. copy.

P.S. On Vista, Nvidia doesn’t support rendering to the front buffer AFAIK. And I still think that rendering to the front buffer will never be visible in AA mode due to the buffer resolve :wink:

So you guys are saying that the method suggested by the OpenGL FAQ is no longer valid, correct?

Zengar, Nvidia may not “support” rendering to the front buffer in effing Vista but it works as long as hardware AA is not on. ;~)

Also, I have not had any reports of problems using the SwapBuffers method yet. The front and back buffer seem to be in sync. I’m thinking that the way I’ve done it with SwapBuffers is the way to go, what do you think?

BTW, I might have to add this to my signature:

“All I wanted to do was draw a stupid rubberband…”


I would say that drawing a rubberband in a modern way would involve copying the scene to a texture, then drawing the rubberband + a textured quad of the scene.

Just because it works on your particular computer, with your particular card and particular driver, does not mean anything :slight_smile: There are several things in the FAQ that don’t really apply to the modern world, as it was written a long time ago, when GL was still a workstation thing (which is where artifacts like selection/feedback mode come from).

You’re absolutely right Zengar, I would never assume that it would work on other cards.

Yes, the FAQ was written back in the day. So what’s the consensus on the proper “modern” way to do rubberbanding? I know you don’t usually do it in games but man it’s crucial in CAD and it absolutely must be fast!

I like your creativity ZbuffeR (assuming you’re serious) but jeez, quite a hack to accomplish such a common(?) task ;~)

What about this: you render your scene to a FBO (only when something changes), and in your window you render a fullscreen texture with the rubberband on top. This way you should have good performance, because one usually uses rubberbanding only when the scene doesn’t move, right?

I think that’s what ZbuffeR was saying right? And yes, as long as the scene does not move then you’re good to go.

I have an app where it is useful to zoom and pan while drawing in which case a redraw is unavoidable (but I’m ok with that).

What about overlay planes? That seems like a better solution. Are there any issues to be aware of with those?

AFAIK, overlays are only supported on professional hardware.


While I understand what you’re trying to say;

we likely all have professional hardware, as it costs (sometimes serious, even for “gaming” cards) money.

The opposite would be creations of love, amateur hardware.

Comparison: Mesa 3D is an amateur project, and likely that’s the reason it in some instances has way better standards conformance than professional implementations.

Implementations from the big two are professional, and due to that they have reduced speed on SMP systems (where an out-of-love implementation would run faster), made machines die (display BSODs, at best), overwritten kernel-mode memory, hung buses, crashed processes using legal GL calls, both over-writing and over-reading memory (in your process as well as in kernel mode), and sometimes their drivers crash their own cards (VPUrecover, anyone?), and who knows what else these things do (they are usually completely undocumented).

That said, for OP (Boris008) it seems to me the simplest solution would be to just grab the display area to a texture (I’m indeed assuming it’s static during rubberbanding) and use it for an ortho quad when rubberbanding. At least that’ll work on all current and future hardware.

To comment on the FAQ question raised by OP:
If this goes against the FAQ: is it the FAQ that is (and has always been) wrong, or venturing into “unspecified behaviour” territory? Has OpenGL taken a turn with antialiasing, making previously functioning operations now dysfunctional? Have vendors given themselves too much liberty, breaking documented, previously functional methods? Or is it something else? I don’t know, but if current HW can fail on those documented operations, the FAQ obviously should be updated to mention this.

It’s an interesting perspective that “professional” equals tons of problems. Unfortunately a Mesa comparison isn’t apples to apples as the only hardware supported drivers are the DRI ones. It has the advantage of not having to tweak its internals every hardware generation and has been developed and improved continually for over 10 years. This is a very different situation from “professional” driver development.

I have understood that the professional grade hardware from the big two have better GL support but I can’t personally verify that.


I am not really sure what you are trying to say :slight_smile:

I used the term “professional” to refer to so called “professional cards” (as used in workstations) like Quadros and FireGL (or whatever it is called). They usually have a somewhat different feature set than “normal” gamer-oriented cards. For example, overlays are not needed for gamers, so they are usually not supported.

That said, there is absolutely no opposite to “professional hardware”; it is just targeted at a different market than “gaming hardware”.

It looks like this post should have been in the windows forum, sorry.

Here’s what we have so far. Because of Vista, one:

  • can’t (legally) draw to the front buffer

  • can’t swap buffers because back buffer may not be in sync with front

  • can copy scene to texture then draw a textured quad representing the scene then draw a GL_LINE over the textured quad (WTF!)

  • can use overlay planes (but only on some hardware)

  • can render scene to a FBO and swap it instead of the back buffer

[sarcasm]Thank you Microsoft[/sarcasm]

IMO this sucks balls ;~) I’ve got this neat little class that works on EVERYTHING but Vista. It can handle rubber band drawing and allows the user to pan and zoom while rubber banding. Any of the above solutions seem more complex and obfuscated than my current solution. Not that I would dare complain!

Am I missing anything?