(Sorry if this has come up before, but I couldn’t find anything on Google, nor via the (throttled) search function on the forum.)
I know I can query the hardware for the max number of interpolants (vp->fp varyings), but how do I verify that my shaders don’t overuse those resources (without actually running them on min-spec hardware)?
We hit a bug today due to this, and my dev system is fancier than some targets, so I’d like to catch this as soon as I write my shaders.
I realize that with a scalar architecture (e.g. G80) the result may differ vis-à-vis a “classic” vectorized GPU, but is there some way to get at these things at all? (Proprietary and whatnot?)
I could of course write a GLSL parser myself, but …
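For what it’s worth, here’s the kind of crude stopgap I had in mind rather than a full parser: just statically count the components declared as varyings and compare against the queried limit. This is only a sketch under big assumptions (GLSL 1.10-style `varying` declarations, no arrays/structs, ignores built-ins like `gl_FrontColor`, ignores packing rules and the compiler eliminating unused varyings; the limit value below is a placeholder you’d replace with `glGetIntegerv(GL_MAX_VARYING_FLOATS, ...)` from the min-spec target):

```python
import re

# Component counts for common GLSL varying-capable types
# (assumption: no arrays, no structs; a real parser would handle those).
TYPE_COMPONENTS = {
    "float": 1, "vec2": 2, "vec3": 3, "vec4": 4,
    "mat2": 4, "mat3": 9, "mat4": 16,
}

def count_varying_components(glsl_source):
    """Crude count of components consumed by user-declared varyings.

    Heuristic only: ignores packing rules, arrays, built-in varyings,
    and optimizations that may eliminate unused interpolants.
    """
    total = 0
    pattern = re.compile(r"^\s*varying\s+(\w+)\s+\w+\s*;", re.MULTILINE)
    for match in pattern.finditer(glsl_source):
        total += TYPE_COMPONENTS.get(match.group(1), 0)
    return total

vs = """
varying vec4 color;
varying vec2 uv;
varying float fog;
"""
used = count_varying_components(vs)  # 4 + 2 + 1 = 7

# Placeholder: in practice, query this on the min-spec hardware.
MAX_VARYING_FLOATS = 32
assert used <= MAX_VARYING_FLOATS
```

Obviously this misses everything interesting (how a given driver actually packs and allocates), which is why I’d rather hear about a real tool.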