There is so much misinformation out there about these techniques… my favorite is the claim that HyperZ allows a 1.2 Gtexels/s (theoretical maximum) card to be rated at 1.5 Gtexels/s by saving memory bandwidth. Clearly this is nonsense: 200 MHz, 2 pixel pipes, and 3 texture units per pipe imply a maximum of 1.2 Gtexels/s, even if memory bandwidth were not a limitation. Improving memory bandwidth efficiency only increases the degree to which this maximum can be achieved; it never raises the maximum itself.
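A quick back-of-the-envelope check of that arithmetic, as a minimal sketch: the clock rate and pipe counts are simply the figures quoted above, not measurements of any particular card.

#include <stdio.h>

/* Theoretical texel rate = core clock x pixel pipes x texture units per pipe.
   The numbers are the figures cited in the text, not measured values. */
int main(void)
{
    const double clock_mhz      = 200.0;  /* core clock in MHz */
    const int    pixel_pipes    = 2;      /* pixel pipelines */
    const int    tex_units_pipe = 3;      /* texture units per pipe */

    double mtexels = clock_mhz * pixel_pipes * tex_units_pipe;
    printf("Theoretical maximum: %.0f Mtexels/s = %.1f Gtexels/s\n",
           mtexels, mtexels / 1000.0);
    /* Prints 1200 Mtexels/s = 1.2 Gtexels/s. Saving memory bandwidth can
       only help you approach this ceiling; it can never raise it. */
    return 0;
}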
It is possible for similar techniques to increase fill rate, but you have to look carefully at the claims to separate the wheat from the chaff.
In the end, these techniques are not terribly relevant to users, and only somewhat relevant to developers. For an end user, all that matters is actual results, not specs or claims of revolutionary technology, so such a technique should be graded on its performance impact, not on its technical details. For developers, yes, these things matter, because they have implications for how you should perform your rendering passes for highest efficiency, but you can safely ignore them unless you are trying to optimize.
I should also note that I have seen benchmarks with HyperZ “disabled”. It’s not clear what this means, but (1) no one in their right mind would ever turn it off, and (2) you can never trust IHVs to provide you with a fair comparison of this kind.
If we provide you with a switch to disable performance feature X, which in theory should have no impact on the actual images produced, what stops us from checking whether feature X is disabled and adding some delay loops in the driver? If you'll always turn it on anyway, we can exaggerate the feature's real performance impact.
We don’t have a switch in our OpenGL driver to “disable T&L”. Sure, such a switch would allow us to say “T&L helps performance by this much”, but the results would be questionable at best, because you don’t know exactly what is being compared.