Are you honestly asking for detailed specifications, including cache sizes and memory bandwidth, for every single GPU operation? How would a card even describe bad anisotropic filtering performance? What exactly about that card makes its anisotropic performance “bad”, relative to what standard?
And what happens if a certain specification can’t be guaranteed? At best, bandwidth is a maximum value; it can be influenced by innumerable factors, such as how much contention there is for memory through that channel at that moment. It would be very easy to assume that a high sampling-bandwidth number automatically means “lots of big, simultaneous textures,” only to find that other bottlenecks keep it from meaning any such thing.
In any case, predicting application behavior from raw specifications is always highly dubious. Many a developer has thought, based on a spec, that various operations would be too slow to use, or that other operations would be fast enough to hammer hard. And when they got the actual hardware, they would often find their assumptions to be completely wrong.
It’s one thing when you’re talking about elements that represent the basic nature of a renderer: tile-based renderers (TBRs) are fundamentally different from the standard immediate-mode model. But the more raw numbers you provide, ironically, the less you really know about the hardware.
And even that all assumes implementations won’t be willing to lie to you.
Why would an implementation lie? Well, nobody wants it to be widely known, via verifiable numbers, that their texture memory controller has 20% less bandwidth than their competitor’s. Implementations would therefore have a good reason to inflate their numbers. Would anyone be able to tell the difference without doing “bad perf tests”?
Also, an implementation could lie because of what developers do with the numbers. For example, it’s entirely possible that an implementation might lie in order to force a popular application onto a certain codepath, because that codepath actually is faster on that hardware than the developer concluded from their spec analysis. Developers aren’t perfect, and sometimes they’ll do the wrong thing.
You cannot effectively make accurate, a priori decisions based on information of dubious accuracy. Which means you’re still going to have to go and actually check to see if a certain set of rendering operations really is faster.
And if you don’t think IHVs will lie for either of these reasons, you’re way too trusting.
Furthermore, if forward compatibility is your concern, then detailed specs don’t help you. Consider a world where TBRs never existed, and then someone suddenly comes out with one. Vulkan’s API would have no way to tell you that it’s a TBR, so you would assume there’s a problem because you’d see terrible write bandwidth. But TBRs don’t need huge write bandwidth, by their very nature; to understand that value correctly, you would have to interpret the specification differently. Yet there’s no way to codify the notion of a TBR in that API; you’d need some kind of extension, and you would have to radically change every application that uses this spec data.
At least by doing it Vulkan’s way, there is a single, extensible value that represents a particular kind of renderer. If a new one shows up, it uses a new value, and developers use a fallback case until they learn how to do the right thing.
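That “single value plus fallback” pattern can be sketched as follows. The enum and function here are hypothetical stand-ins, not real Vulkan API (Vulkan’s closest real analogue would be an extensible enum like `VkPhysicalDeviceType` in `VkPhysicalDeviceProperties`); the point is only the shape of the code:

```cpp
#include <cstdint>
#include <string>

// Hypothetical renderer-kind value, mirroring the shape of a Vulkan-style
// extensible enum: future hardware categories get new numeric values.
enum class RendererKind : uint32_t {
    Immediate = 0,   // classic immediate-mode renderer
    TileBased = 1,   // tile-based (deferred) renderer
    // Later API revisions may add more values here.
};

// Pick a rendering strategy from the reported kind, falling back to a
// conservative path for any value this code doesn't recognize yet.
std::string choose_strategy(RendererKind kind) {
    switch (kind) {
        case RendererKind::Immediate:
            return "immediate-optimized path";
        case RendererKind::TileBased:
            return "tile-friendly path (minimize render-pass switches)";
        default:
            // Unknown future hardware: don't guess from raw specs.
            return "conservative fallback path";
    }
}
```

The `default` case is what makes this forward compatible: an application written today still behaves sensibly on a renderer kind that didn’t exist when it shipped.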
Remember: premature optimization is the root of all evil. And the only possible use for the kind of information you’re talking about is premature optimization. So I would say that the best thing you can do is continue to write code based on empirical evidence.
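As a minimal illustration of “check empirically rather than trust the spec sheet,” one might time candidate codepaths directly and let the measurements decide. `time_ms` and `path_a_is_faster` are hypothetical helpers for this sketch; real GPU work would be measured with GPU timestamp queries rather than wall-clock time, but the principle is the same:

```cpp
#include <chrono>
#include <functional>

// Average wall-clock time of one run of `run_path`, in milliseconds.
double time_ms(const std::function<void()>& run_path, int iterations = 100) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) run_path();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count()
           / iterations;
}

// Decide between two candidate codepaths by measuring them on the actual
// hardware, instead of predicting from published specifications.
bool path_a_is_faster(const std::function<void()>& a,
                      const std::function<void()>& b) {
    return time_ms(a) < time_ms(b);
}
```

The decision then encodes evidence from the machine you actually ran on, which is exactly the information a spec sheet cannot give you.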