Stinks like FUD.
What is FUD?
Well, if Doom III benchmark results were presented at an nVidia PR event, with Carmack participating in a speech tailored to explicitly show how ATI sucks, blaming ATI for the necessity of multiple code paths, inventing special™ names for ATI paths to show on charts, “warning” about future games unrelated to Doom3, and having a bundle deal with nVidia - then you would have reasons to question Carmack’s credibility.
The difference is that, had Carmack said so, he would clearly be lying on most of the factual issues. ATi doesn’t inflate the number of codepaths; that distinction belongs to nVidia.
Secondly, Carmack is one man. Valve is a company. One of the reasons I give what they say more weight is that they are a group. Carmack is an individual with his own personal opinions on various matters.
And, in this instance, Valve is, insofar as their factual claims are concerned, 100% right. nVidia’s hardware has known fragment-program issues. We’ve had several threads dedicated to people getting disappointing performance out of ARB_fp on nVidia hardware. So, even if this is a PR stunt, at least it’s one grounded in facts, not idle speculation or lies.
FYI, I personally believe NV won because D3 uses OGL and because Carmack has the will to fully optimise for a hardware architecture whether he likes its design or not. The latter is something some coding fanboys keep refusing to understand.
But nVidia didn’t win. According to Carmack, if both of them use the ARB path, ATi wins. Granted, it’s kind of an unfair test, since we know that nVidia’s hardware is weak in this area. However, it’s not a fair test to compare NV_fragment_program to ATi’s hardware either, since ATi didn’t optimize their hardware for fixed-point operations.
There isn’t really a fair test between these two pieces of hardware. On DX9/ARB_fp, ATi wins because those shaders can’t be optimized for nVidia cards. Under NV_fragment_program, nVidia wins, because nVidia’s hardware isn’t doing as much work as ATi’s.
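To make the codepath point concrete, here’s roughly what path selection looks like at renderer startup. A minimal sketch, assuming a current GL context; the function and enum names are mine, only the extension strings are real:

#include <string.h>
#include <GL/gl.h>

/* Illustrative path IDs -- the names here are mine, not from any engine. */
enum FragPath { PATH_NV_FP, PATH_ARB_FP, PATH_ATI_FS, PATH_FIXED_FUNC };

static int HasExtension(const char *name)
{
    /* The classic query: one big space-separated string. (A plain
       strstr check can false-positive on prefixes; fine for a sketch.) */
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    return exts != NULL && strstr(exts, name) != NULL;
}

enum FragPath ChooseFragmentPath(void)
{
    /* Prefer the vendor path on NV3x, since NV_fragment_program exposes
       the fixed- and half-precision ops the hardware is actually fast
       at; fall back to the cross-vendor ARB path, then to ATi's
       8500-era extension. */
    if (HasExtension("GL_NV_fragment_program"))
        return PATH_NV_FP;
    if (HasExtension("GL_ARB_fragment_program"))
        return PATH_ARB_FP;
    if (HasExtension("GL_ATI_fragment_shader"))
        return PATH_ATI_FS;
    return PATH_FIXED_FUNC;
}

Every entry in that enum is a separate set of shaders to write, test, and tune; that’s the cost being argued about.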
since, AFAIK, the NV3x supports co-issue instructions too
By “co-issue”, do you mean issuing ALU and texture instructions on the same cycle? If so, you’re wrong; NV3x doesn’t support that.
I cannot help but feel that Valve is whining. This is the second time they have made a big deal about something in DX9. Is it fair for me to feel this way, or is Valve just standing up as a developer and saying they aren’t going to take crap from Microsoft or IHVs anymore?
They probably are whining. With good reason. Better to complain about a problem than be silent; at least, if you make noise, it might get fixed.
Developing shaders for nVidia’s hardware is hard. Not just because you have to limit your thinking to smaller precisions, but because you have to spend time playing around with a shader until you strike upon the variant that gets good performance. There’s no publicly available FAQ for getting decent performance out of it; only some general guidelines that don’t always work.
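To show what I mean by “playing around”: under NV_fragment_program every opcode comes in fp32 (R), fp16 (H), and fx12 (X) flavors, and which mix runs fast is something you find by experiment. Here’s a minimal sketch of a trivial modulate shader taking the fp16 path (illustrative only, obviously not HL2’s code; in a real app the entry points and token come from glext.h and wglGetProcAddress):

#include <string.h>
#include <GL/gl.h>

#ifndef GL_FRAGMENT_PROGRAM_NV
#define GL_FRAGMENT_PROGRAM_NV 0x8870
#endif
extern void glLoadProgramNV(GLenum target, GLuint id, GLsizei len,
                            const GLubyte *program);
extern void glBindProgramNV(GLenum target, GLuint id);

/* A trivial modulate shader. Note the H suffixes: fp16 operations
   and a half-precision color output. */
static const GLubyte nv_fp[] =
    "!!FP1.0\n"
    "TEX H0, f[TEX0], TEX0, 2D;\n"   /* sample into a half register   */
    "MULH H0, H0, f[COL0];\n"        /* fp16 multiply by vertex color */
    "MOVH o[COLH], H0;\n"            /* write the half color output   */
    "END\n";

void LoadModulateShader(GLuint id)
{
    glLoadProgramNV(GL_FRAGMENT_PROGRAM_NV, id,
                    (GLsizei)(sizeof(nv_fp) - 1), nv_fp);
    glBindProgramNV(GL_FRAGMENT_PROGRAM_NV, id);
    glEnable(GL_FRAGMENT_PROGRAM_NV);
}

Swap the H’s for R’s and the same math can get measurably slower; swap in X and it may get faster still, with visible precision loss. Multiply that trial-and-error by every shader in a game and you see the problem.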
Granted, I’m sure that, if Valve had asked nVidia to take their shaders and optimize them, nVidia would have. However, there’s no reason that this needs to be the case.
Besides, Valve probably figures nVidia will just put some “optimizations” into their driver specifically for HL2 shaders that will give them the performance they want.
NVIDIA prefers OGL. Look at the latest official DX9 NVIDIA drivers (45.23)… no floating-point texture support… how is this possible when you can create one in OGL without problems???
“Without problems”? Are you kidding? nVidia only allows floating-point textures with texture rectangles. I don’t know what D3D says about supporting FP textures, but I wouldn’t be surprised to see that it requires full support (all formats and texture types) if you’re going to support it at all.
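To illustrate the restriction, here’s roughly what float-texture creation looks like under each vendor’s extension right now. A sketch; the tokens are straight from the NV_float_buffer and ATI_texture_float specs (normally pulled in via glext.h), the function names are mine:

#include <GL/gl.h>

#ifndef GL_TEXTURE_RECTANGLE_NV
#define GL_TEXTURE_RECTANGLE_NV 0x84F5
#endif
#ifndef GL_FLOAT_RGBA32_NV
#define GL_FLOAT_RGBA32_NV 0x888B
#endif
#ifndef GL_RGBA_FLOAT32_ATI
#define GL_RGBA_FLOAT32_ATI 0x8814
#endif

/* NV3x path: NV_float_buffer only defines its float formats for the
   rectangle target -- unnormalized texcoords, no mipmaps, no repeat
   wrap mode, nearest filtering only. */
void CreateFloatTextureNV(GLuint tex, GLsizei w, GLsizei h,
                          const GLfloat *pixels)
{
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
                 w, h, 0, GL_RGBA, GL_FLOAT, pixels);
}

/* R300 path: ATI_texture_float works with a plain 2D texture. */
void CreateFloatTextureATI(GLuint tex, GLsizei w, GLsizei h,
                           const GLfloat *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                 w, h, 0, GL_RGBA, GL_FLOAT, pixels);
}

If exposing an FP format in D3D is taken to mean “works on regular 2D textures with the usual features,” you can see why nVidia’s driver doesn’t advertise it.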