ATI slowdown on newer drivers with GL_SELECT

Hi folks,

With Blender ( http://www.blender.org ) it seems to have become generally accepted that ATI cards are very slow at certain things ( mostly selection ).

Someone did a bit of research and found a basic performance issue with GL_SELECT: older drivers handled this OpenGL feature well, but newer drivers are drastically slower.

As demonstrated in the article at http://www.cs.usyd.edu.au/~tapted/slow_glselect.html, these performance issues were introduced only in fairly recent versions of the drivers.

This issue affects users of 3D modelling packages; comments from Blender users on this very issue can be viewed here…

http://www.blendernation.com/2008/03/12/ati-slowdown-explained/

What is the best way to inform ATI developers of this issue? I posted a bug, but I’m getting the typical support brush-off ( ie install the older drivers ) rather than a real answer ( eg the dev department is aware of the issue and is working on a fix ).

Best regards…
Mal

Discussed here: http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=233024#Post233024

My point of view: don’t use old deprecated features like selection mode.

What is the best way to inform ATI developers of this issue?

You already have. You posted a bug. One that ATi doesn’t care about, and almost certainly knew about before you posted it.

Selection mode is functional. That is probably all that ATi will promise about it.

the dev department are aware of the issue, and are working on a solution

The ATi Driver Development department has better things to do than work on software-only features like selection mode.

Hi folks, thanks for the info.

> don’t use old deprecated features like selection mode.

When did GL_SELECT become deprecated in OpenGL, and where can I get more details on why this decision was taken?

Also, did they come up with an alternative OpenGL solution ( possibly hardware accelerated ) to replace this deprecated feature?

> The ATi Driver Development department has better things to do
> than work on software-only features like selection mode.

It’s a fairly important feature for a lot of 3D apps ( and one that used to work well ), so it’s strange that it should run at such a poor speed given the increased speed of new hardware ( both CPU and GPU ).

Of course, the reality of this all is that users of 3D modelling software using OpenGL and this feature ( Blender, Maya etc ) are experiencing huge performance issues ( selecting objects and faces is a fairly common task, sometimes several times a minute or more ).

They are asking questions, and are getting the reply that the OpenGL drivers are the problem ( or in this case, the ATI OpenGL drivers ).

Mal

Selection mode became inherently slow the very moment VBOs were introduced in 2003. That’s 5 years in the past! :wink:

There are far better methods for selection, like the itembuffer method.

Dear Mal, the point is that while selection mode is still part of the API, it was never designed with speed in mind. It is an old “convenience” feature that doesn’t really belong in the API. The basic idea of OpenGL is to provide access to hardware-accelerated computer graphics, and the way selection mode works contradicts the very design of every graphics card developed in the last decade. So don’t be surprised if hardware developers don’t care about the parts of the API that are not relevant to hardware acceleration. If a recently developed application still uses selection mode, I can’t help but accuse its creators of ignoring the latest trends in computer graphics. Self-made selection (with raytracing or unique colors) is easily done and will be faster.
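For the raytracing flavour of self-made selection, the core is just a ray/triangle test against your pickable geometry (cast a ray through the mouse position and keep the nearest hit). A minimal sketch of the standard Möller–Trumbore test; the type and function names here are my own, not from any particular library:

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3;

Vec3  v_sub(Vec3 a, Vec3 b)  { return (Vec3){ a.x-b.x, a.y-b.y, a.z-b.z }; }
float v_dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3  v_cross(Vec3 a, Vec3 b)
{
    return (Vec3){ a.y*b.z - a.z*b.y,
                   a.z*b.x - a.x*b.z,
                   a.x*b.y - a.y*b.x };
}

/* Möller–Trumbore ray/triangle intersection.
   Returns true and writes the hit distance to *t if the ray
   orig + t*dir hits the triangle (v0, v1, v2). */
bool ray_triangle(Vec3 orig, Vec3 dir,
                  Vec3 v0, Vec3 v1, Vec3 v2, float *t)
{
    const float EPS = 1e-7f;
    Vec3 e1 = v_sub(v1, v0), e2 = v_sub(v2, v0);
    Vec3 p  = v_cross(dir, e2);
    float det = v_dot(e1, p);
    if (fabsf(det) < EPS) return false;   /* ray parallel to triangle */
    float inv = 1.0f / det;
    Vec3 s = v_sub(orig, v0);
    float u = v_dot(s, p) * inv;          /* first barycentric coord */
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = v_cross(s, e1);
    float v = v_dot(dir, q) * inv;        /* second barycentric coord */
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = v_dot(e2, q) * inv;
    return *t > EPS;                      /* hit must be in front of the ray */
}
```

Pick the object whose triangle yields the smallest *t*; that is your selected item, no GL round trip needed at all.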

What is an itembuffer? Is that an OpenGL feature? GL_SELECT is certainly a useful feature for 3D user interface widgets. I am curious to hear about this itembuffer. I am not looking forward to self-made selection with raytracing when it could be done so elegantly using selection mode.

Jochen

Well, then you will have to live with the bad performance. It is as simple as that.

@Jochen:
Basically, you just render your scene into a very small 32-bit FBO (it can be 1x1 pixels). Each object (“item”) gets rendered in a color corresponding to its own ID (very brave people can even squeeze a 32-bit pointer in here, though I don’t recommend that).
In order to prevent GL from somehow altering the IDs, you have to turn off texturing, multisampling, lighting and so on. The easiest way is to use a shader that does nothing more than pass gl_Color from the vertex shader straight through to the fragment.

Afterwards you read back the color at position (0,0) and in this way get the ID of the object nearest to the mouse cursor (the normal depth test still works as usual). You can even read back the depth of that pixel, unproject it, and in that way get the world-space position of the hit.
Use gluPickMatrix() and glViewport() to set up the projection and to map the mouse cursor to window coordinate (0,0). This very narrow frustum can be used for frustum culling too, which speeds up the picking further.
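The ID↔color round trip at the heart of this itembuffer method can be sketched like so. The GL calls in the comments are the standard glColor4ub/glReadPixels entry points; the pack/unpack function names are just illustration:

```c
#include <stdint.h>

/* Pack a 32-bit object ID into four RGBA8 color bytes (R = low byte). */
void id_to_rgba(uint32_t id, uint8_t rgba[4])
{
    rgba[0] = (uint8_t)( id        & 0xFF);
    rgba[1] = (uint8_t)((id >> 8)  & 0xFF);
    rgba[2] = (uint8_t)((id >> 16) & 0xFF);
    rgba[3] = (uint8_t)((id >> 24) & 0xFF);
}

/* Recover the object ID from the pixel read back from the FBO. */
uint32_t rgba_to_id(const uint8_t rgba[4])
{
    return (uint32_t)rgba[0]
         | ((uint32_t)rgba[1] << 8)
         | ((uint32_t)rgba[2] << 16)
         | ((uint32_t)rgba[3] << 24);
}

/* Usage during a pick pass (sketch; requires a current GL context):
 *
 *   uint8_t c[4];
 *   for each object:
 *       id_to_rgba(object_id, c);
 *       glColor4ub(c[0], c[1], c[2], c[3]);   // draw flat-colored
 *       ...draw with lighting/texturing/multisampling off...
 *   glReadPixels(0, 0, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, c);
 *   uint32_t picked = rgba_to_id(c);
 */
```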

Ok, I get it! Thx for the explanation. I am building an editor with 3D interface widgets that have all kinds of buttons and handles you can pull. All I used GL_SELECT for was to find out which part of the widget is first clicked on. While the mouse is down that part remains active. The method you outline will work well for that purpose.

I will not have to do frustum culling during picking because I am only drawing the interface widget at that time. But I am still not quite sure about the effect of gluPickMatrix and glViewport. If I am using a 1x1 FBO then the viewport would have to be the same size. I think that restricting the picking region using gluPickMatrix would probably give me more precision. Anyways, I’ll have to think about that for a while and play around with it…

There’s no reason to use any sort of picking for UI; the z-order of your windows is implicit in the tree traversal order. All you need is an IsPointInRect function on your way to the top…

BTW, I’d use simple line-segment intersections for nontrivial geometry collisions. If you’re coding a game you’ll need that sort of thing anyway. Greasy fast if implemented with care.
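For reference, the usual orientation-based segment test is indeed cheap ( four cross products, no divisions ); the function names here are mine:

```c
#include <stdbool.h>

/* z-component of (b - a) x (c - a): > 0 means c is left of a->b,
   < 0 means right, 0 means collinear. */
float cross2(float ax, float ay, float bx, float by, float cx, float cy)
{
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

/* True if segment p1-p2 properly crosses segment p3-p4.
   Touching endpoints / collinear overlap are not counted, for brevity. */
bool segments_intersect(float x1, float y1, float x2, float y2,
                        float x3, float y3, float x4, float y4)
{
    float d1 = cross2(x3, y3, x4, y4, x1, y1);
    float d2 = cross2(x3, y3, x4, y4, x2, y2);
    float d3 = cross2(x1, y1, x2, y2, x3, y3);
    float d4 = cross2(x1, y1, x2, y2, x4, y4);
    /* Proper crossing: each segment's endpoints straddle the other. */
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```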

What’s the performance cost of redefining the VAO/VBOs to change the color attributes? Or is there a better way?