Query GPU capabilities

I think it would be nice to be able to query some GPU capabilities using glGetIntegerv / glGetFloatv:

  1. memory accessible to the GPU - this could be separated into categories like on-board memory and AGP memory, but since OpenGL is platform independent these categories could be kept general (level0, level1, etc.). A software implementation, for example, would treat level0 as system memory.
  2. GPU core clock frequency
  3. GPU memory clock frequency
  4. Number of vertex processors
  5. Number of fragment processors
  6. Memory bus width
  7. Shader model supported (an integer with major/minor version or a string: “major.minor some other stuff”)
Of course these are just examples of values that can be queried.

This would only require defining new tokens for use with glGetIntegerv / glGetFloatv and should be relatively easy to implement in the driver.
I think that knowing not only the GPU’s capabilities but also its speed can be helpful for games that auto-adjust level of detail. Yes, you can use benchmarks, but why not have a more direct way to get it?
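
For example, the query itself could look something like this - the GL_GPU_* tokens below are completely made up just to illustrate the proposal; nothing like them exists in any real header:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Made-up tokens purely for illustration - not part of any OpenGL header. */
#define GL_GPU_MEMORY_LEVEL0        0x9000  /* hypothetical: on-board memory, in MB */
#define GL_GPU_NUM_FRAGMENT_UNITS   0x9001  /* hypothetical: fragment processors    */

void print_gpu_caps(void)
{
    GLint memLevel0 = 0;
    GLint fragUnits = 0;

    /* With real drivers these calls would just raise GL_INVALID_ENUM -
     * this is only a sketch of what the proposed interface would feel like. */
    glGetIntegerv(GL_GPU_MEMORY_LEVEL0, &memLevel0);
    glGetIntegerv(GL_GPU_NUM_FRAGMENT_UNITS, &fragUnits);

    printf("level0 memory: %d MB, fragment units: %d\n", memLevel0, fragUnits);
}
```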

Very good idea, but: “shader model”?

:eek: => :mad:
That’s just a marketing name from Direct3D!

For the rest, it could be fun, but is it really useful?

For example: 5. Number of fragment processors.
A GeForce 7900 would return 24 and an X1900 would return 48, but what then? Would you select a lower-quality code path for the 7900? The GeForce 7900 simply has a more complex pipeline than the X1900 and delivers nearly the same efficiency.

By the way, this whole pipeline story will change with technical improvements (unified shaders, for example). It’s the IHV’s choice, not the API’s.

Well, basically this is just a starting point for discussion - maybe we can come up with something useful.

And about that shader model version - it’s very useful to know whether the GPU can do floating-point filtering/blending, vertex texture fetch, dynamic branching and so on.
On NVIDIA you can check for the NV_vertex_program extensions; on ATI you look for different extensions. Vendors seem to follow the “shader model” trend - they add certain groups of functionality at once - and I don’t know of any SM2.0 card capable of float filtering/blending, VTF or dynamic branching.
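
Right now the closest thing is digging through the extension string plus a couple of indirect queries. A rough sketch of how that looks today (the substring test is naive and doesn’t handle extension names that are prefixes of longer names):

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Naive check: is 'name' present in the extension string? */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* Vertex texture fetch: NV_vertex_program3 implies it on NVIDIA hardware;
 * the vendor-neutral route is to ask ARB_vertex_shader how many vertex
 * texture image units are available (0 means no VTF). */
static int supports_vtf(void)
{
    GLint units = 0;
    if (has_extension("GL_ARB_vertex_shader"))
        glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS_ARB, &units);
    return has_extension("GL_NV_vertex_program3") || units > 0;
}
```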

I have to agree that estimating fragment processor efficiency is difficult due to the different architectures. And we still have to keep in mind that OpenGL is more than just an API for accessing ATI or NVIDIA GPUs. Still, providing developers with some helpful hints would be appreciated, I guess.

Perhaps instead of core/memory frequencies we should ask for FLOPS and fill rate? These would be more general and perhaps even more useful.

I think it would be nice to be able to query some GPU capabilities using glGetIntegerv / glGetFloatv:
No, that’s not going to happen, so you may as well give it up.

Even D3D is moving away from querying hardware capabilities. OpenGL is certainly not going to go back to a place that D3D left. And not even D3D was crass enough to ask a GPU what its clock speed was.

Perhaps you’re right, but since OpenGL does define such terms as texture memory, why not have an interface to query the amount of that memory?
Well, another thing is that OpenGL is defined at a reasonable level of orthogonality, but GPUs aren’t, for the sake of performance. And this is why OpenGL does not define any way to check whether floating-point textures can be filtered. They just should be, if they’re supported (I might be wrong - I didn’t go through all the specs related to this topic :wink: ).
Since we all know that SM3.0 cards support that, having the ability to check it would be great. Actually you do have that possibility on NVIDIA, but on ATI you must check for some extension that is known to be exposed by SM3.0 hardware only. This is why I sometimes find OpenGL’s API insufficient for real-life GPUs. Perhaps we will be able to avoid such issues in the future.
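
For what it’s worth, the closest thing the current API gives us on the memory side is the proxy texture mechanism, which only answers “can a texture of this format and size be created at all”, not how much memory is actually left - a minimal sketch:

```c
#include <GL/gl.h>

/* Proxy texture test: ask the driver whether a 2048x2048 RGBA8 texture is
 * supported at all.  This checks format/size limits, not free memory. */
int rgba8_2048_supported(void)
{
    GLint width = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, GL_RGBA8, 2048, 2048, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    return width != 0;  /* 0 means the driver rejected this format/size */
}
```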
Still, I’m not switching to DX with my projects :slight_smile:

I think that knowing not only the GPU’s capabilities but also its speed can be helpful for games that auto-adjust level of detail. Yes, you can use benchmarks, but why not have a more direct way to get it?
How would you decide level of detail using all this info?

If GL could tell us if something is soft emulated or hw accelerated, I think it’s enough.

If you find that some hw-accelerated feature yields poor performance, then you can turn it off dynamically (use FPS).
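
Something along these lines, I suppose (a trivial sketch - enable_expensive_feature() is just a placeholder for whatever the application actually toggles):

```c
/* Trivial FPS-driven toggle.  enable_expensive_feature() stands in for
 * whatever rendering feature the application switches on or off. */
extern void enable_expensive_feature(int on);

void adjust_quality(double frame_time_ms)
{
    static int feature_on = 1;

    if (feature_on && frame_time_ms > 33.0) {          /* dropped below ~30 FPS */
        feature_on = 0;
        enable_expensive_feature(0);
    } else if (!feature_on && frame_time_ms < 20.0) {  /* back above ~50 FPS */
        feature_on = 1;
        enable_expensive_feature(1);
    }
}
```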

since OpenGL does define such terms as texture memory
But it doesn’t say that this “texture memory” has an upper bound. It is simply a generic term for where texture object data gets stored.

How would you decide level of detail using all this info?

Having predefined settings for a few GPU speed classes would probably be enough.
Besides, the relation between core speed and memory speed gives some idea of the optimal fragment shader complexity (but I guess only a few people would be interested in this).

If GL could tell us if something is soft emulated or hw accelerated, I think it’s enough.

Actually I don’t care whether something is HW or SW - I just want to know how fast it works. Some per-vertex operations can run reasonably well in SW.
Well, one thing is sure - info on GPU speed is useless unless you know which features will run in SW and which in HW.

Measuring FPS is what most games do to auto-adjust level of detail, but it requires loading all the geometry/textures, and you still don’t know whether low FPS is caused by using too much texture memory or by too complex shaders unless you try different combinations.
Instead of running such a test you can use some simple tests to determine the GPU’s fill rate / pixel shader efficiency. But why measure something that could be queried?
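
A crude probe along these lines would do - draw_fullscreen_quad() is a placeholder for the application’s own code, and a real test would want a better timer than clock():

```c
#include <time.h>
#include <GL/gl.h>

extern void draw_fullscreen_quad(void);  /* placeholder: app's own quad drawing */

/* Crude fill-rate / shader-throughput probe: time N full-screen quads with the
 * shader under test bound, forcing completion with glFinish on both ends. */
double measure_fill_seconds(int num_quads)
{
    int i;
    glFinish();                               /* flush any pending work first */
    clock_t start = clock();
    for (i = 0; i < num_quads; ++i)
        draw_fullscreen_quad();
    glFinish();                               /* wait for the GPU to finish */
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}
```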

Ok, let’s leave these GPU-speed related talks for now. What do you think about knowing the amount of available memory and the shader model version?

But it doesn’t say that this “texture memory” has an upper bound

Good point there.
On the other hand, we do have the GL_OUT_OF_MEMORY token defined, right?
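
True, although that only tells us about a failure after the fact, e.g.:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Try to allocate a large texture and see whether the driver reports
 * GL_OUT_OF_MEMORY.  This detects a failed allocation; it still doesn't
 * tell us how much memory was available beforehand. */
void try_big_texture(void)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 4096, 4096, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    if (glGetError() == GL_OUT_OF_MEMORY)
        printf("texture allocation failed: out of memory\n");
}
```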