What about a call to determine if an extension is supported in hardware?

An implementation may process some extensions faster than others (in most cases because those extensions are supported in hardware). What about a call to check this, for example:

GLboolean glIsEffective(GLchar *extension_name)
or
GLboolean glIsEffective(GLenum feature)
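
Usage would look something like this (glIsEffective is hypothetical, of course, and GL_ARB_vertex_program is just an arbitrary example):

/* Hypothetical sketch - glIsEffective() does not exist in any GL today. */
if (glIsEffective((GLchar *)"GL_ARB_vertex_program"))
{
    /* the extension runs in hardware: take the fancy path */
}
else
{
    /* supported, but probably emulated in software: fall back */
}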

Originally posted by miko:
An implementation may process some extensions faster than others (in most cases because those extensions are supported in hardware). What about a call to check this

Great Scott! Why did nobody think of this before?

miko, this is probably the most common request on this forum. (Even more common than glAudio/glCollisionDetection/glMakeTea, etc.) Personally I agree with you - in an “is this going to be pathologically slow” sense rather than an “is this going to fall back to software” sense - but the hardware guys have always made it very plain that hell will freeze over before they support it.

Ultimately I suspect it comes down to marketing rather than technology reasons.

Yup! I forgot there’s a search link.

One of the most important problems (IMHO) on this topic is the ability to define the correct granularity and the “queryability” without multiplying tokens (a string-based interface is a good alternative, though).

With that said, I also agree it would be nice if something were done about it!

vincoof, I wasn’t envisaging a token-based interface along the lines of “glIsDogSlow(GL_SOME_SPANGLY_FEATURE);”. Very often it’s combinations of features that knock you out of the fast path, and a query interface supporting the whole combinatorial explosion would be impossibly messy.

Rather, I was thinking that you’d set up GL state as normal, then call “glWouldSendingVerticesRightNowBeDogSlow();”. This way you could start with a simple, known-good state, then progressively enable features until it barfs, at which point you’d back the last enable out and either go with what you’d got or try a different feature.
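
In rough C that might look like this (glWouldSendingVerticesRightNowBeDogSlow() is obviously made up, the feature list is just an illustration, and set_known_good_base_state() is an assumed app function):

static const GLenum wish_list[] = { GL_TEXTURE_3D, GL_FOG, GL_POLYGON_STIPPLE };
int i;

set_known_good_base_state();   /* assumed app function: simple, known-fast GL state */

for (i = 0; i < (int)(sizeof(wish_list) / sizeof(wish_list[0])); ++i)
{
    glEnable(wish_list[i]);
    if (glWouldSendingVerticesRightNowBeDogSlow())   /* hypothetical query */
    {
        glDisable(wish_list[i]);   /* back the last enable out */
        break;                     /* go with what we've got, or try a different feature */
    }
}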

The point is that a query function here would be very very fast - shouldn’t have to talk to the card, even - whereas actually running a test scene with a given state would involve a lot more app effort and would have to be run for a significant length of time to get reliable timing results. Most people aren’t going to sit there watching an endlessly repeated benchmark scene for hours after installing their shiny new piece of software.

OIC, it’s a good idea. It’s still difficult to set the threshold between “normal” and “dog slow”, but I think such a threshold could be governed by the most “uncontrollable” function in OpenGL: glHint.
With glHint(GL_DOG_SLOW_THRESHOLD, GL_FASTEST), the answer from glIsDogSlowNow() would quickly check whether the renderer is software or hardware.
And with glHint(GL_DOG_SLOW_THRESHOLD, GL_NICEST), the answer from glIsDogSlowNow() would additionally check whether the video card’s memory is full, whether the AGP bus is free, etc.
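
In code that would be something like this (GL_DOG_SLOW_THRESHOLD, glIsDogSlowNow() and the fallback helpers are all made up, of course):

glHint(GL_DOG_SLOW_THRESHOLD, GL_FASTEST);      /* cheap answer: software vs. hardware path only */
if (glIsDogSlowNow())
    fall_back_to_simpler_state();               /* assumed app function */

glHint(GL_DOG_SLOW_THRESHOLD, GL_NICEST);       /* thorough answer: video memory, AGP traffic, ... */
if (glIsDogSlowNow())
    reduce_texture_resolution();                /* assumed app function */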

The canned IHV answer is usually: “it doesn’t matter whether it’s in hardware, only that it is fast enough.” They suggest benchmarking.

An example would be vertex programs. Vertex programs actually work pretty well in software…until you try to use them with server-side vertex arrays (VAR, VBO)! The only way that works robustly across various hardware/driver combinations is to self-benchmark your code paths. Serious production software should already have a mechanism to do just this, so adding the auto-profiling/extension-detection logic isn’t too bad.
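
A minimal self-benchmarking sketch, assuming the app supplies its own draw callback and that clock() is an acceptable timer (a higher-resolution wall-clock timer is better in practice):

#include <time.h>
#include <GL/gl.h>

/* Time one rendering path over a number of frames.  render_frame is the app's
   own draw routine exercising the path under test, e.g. vertex programs with
   VBO versus plain client-side arrays. */
static double seconds_per_frame(void (*render_frame)(void), int frames)
{
    clock_t start;
    int i;

    glFinish();                    /* don't let earlier work pollute the timing */
    start = clock();
    for (i = 0; i < frames; ++i)
        render_frame();
    glFinish();                    /* wait until the GPU has really finished */

    return (double)(clock() - start) / CLOCKS_PER_SEC / frames;
}

At startup you time each candidate path for a few hundred frames, pick the fastest, and cache the choice so the user only pays the cost once.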

-Won

Sigh. Yes, that’s the canned answer. By the same rationale, we really ought to remove the PFD_GENERIC_ACCELERATED flag - after all, it doesn’t matter whether a format is accelerated or not, only that it’s fast enough, right?
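
For reference, this is roughly how that flag already gets used on Windows (a sketch, not production code):

#include <windows.h>

/* PFD_GENERIC_FORMAT clear                          -> ICD, fully hardware accelerated
   PFD_GENERIC_FORMAT and PFD_GENERIC_ACCELERATED set -> MCD, partially accelerated
   PFD_GENERIC_FORMAT set, ACCELERATED clear          -> Microsoft software renderer */
static int pixel_format_is_accelerated(HDC hdc, int pixel_format)
{
    PIXELFORMATDESCRIPTOR pfd;

    DescribePixelFormat(hdc, pixel_format, sizeof(pfd), &pfd);

    if (!(pfd.dwFlags & PFD_GENERIC_FORMAT))
        return 1;                                            /* ICD */
    return (pfd.dwFlags & PFD_GENERIC_ACCELERATED) ? 1 : 0;  /* MCD vs. software */
}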

I care about this less than I used to, now that there’s some prospect of the API stabilizing somewhere around 2.1 or 2.2 and the current godawful extension mess going away, but I’ve never found the canned answer very convincing.

I’ll just say that the whole ICD thing is messed up. But yes, if it’s fast enough, that should be good enough…assuming things work the way you expect. In reality, different kinds of hardware can have different foibles and varying degrees of out-of-spec-ness. This was more true in the past than it is now. In any case, it is arguable that detecting the implementation of OpenGL does not actually belong in OpenGL, but rather in some OS-dependent service. Either way, I don’t see why the canned answer is unsatisfactory - it has a very legitimate reason behind it.

I think the big thing solving the extension mess has actually been the ARB. They are moving much faster than they used to in standardizing fairly radical extensions. I don’t think you’re going to have to wait until GL2 to see most of the benefits of this. Vertex programs, fragment programs, multisampling and super buffers add a great deal of richness to OpenGL, all through ARB extensions.

-Won

Perhaps the problem can only be solved by NVIDIA/ATI/etc. Since they are the ones making the video cards, they know which features are hardware-accelerated and which are not for each of their cards…