Poor Performance of NVidia Cards

First of all, why is everyone taking this so personally? We’re talking about video cards here, not religion, not economic theory, not abortion. There is no need for name calling.

GPU performance has been leapfrogging along for the last few years and will continue to do so into the future. Deal with it. 12 months ago Nvidia had the best card out there. Today, ATI probably does. 12 months from now 3DLabs could have the best card. Who knows?

Vertex programming functionality has stabilized. Fragment programming is starting to stabilize. In the future floating point and high level programming will stabilize.

Programming at the bleeding edge is hard. But that’s why it is rewarding.

Can we go back to talking about something constructive?

Ok, here is my opinion:

NVIDIA prefers OGL. Look at the latest official DX9 NVIDIA drivers (45.23)… no floating-point texture support… how is this possible when you can create them in OGL without problems??? Hey NVIDIA DX9 driver team, wake up from your holidays!

And I think ATI’s current products suck cuz there’s no dynamic-control-flow vs2_x/ps2_x support.

[This message has been edited by santyhammer (edited 09-11-2003).]

>> GPU performance has been leapfrogging along for the last few years and will continue to do so into the future. Deal with it.

Why is everybody concentrating on these performance reports? Is the inferior floating-point performance of the 5900 in DirectX really that surprising?

I’m pissed because an important, respected ISV has joined one fanboy camp. They took part in an event whose only purpose was to show how one company’s products suck. They actively engaged in spewing FUD. This disgusts me. This has no precedent, not in that league.

Originally posted by Nakoruru:
[b]I cannot help but feel that this forum has gone downhill significantly when people start referring to others as ‘nvidiots’.

Such name calling is pretty useless.
[/b]

My mom teasingly calls me a ‘Vidiot’. But, it didn’t bother me. We both thought it was funny.

[b]The title of this post, ‘Poor Performance of NVidia Cards’, is poorly chosen.[/b]

[b]I found Valve’s results to be startling, with the ATI card being 100 percent better.[/b]

I too found the results startling. I don’t know if the results are accurate, but I agree that the title was poorly chosen. I should have made it “Piss Poor Performance of NVidia Cards”.

[This message has been edited by mmshls (edited 09-11-2003).]

Stinks like FUD.

What is FUD?

Well, if Doom III benchmark results were presented at an nVidia PR event, with Carmack participating in a speech tailored to explicitly show how ATI sucks, blaming ATI for the necessity of multiple code paths, inventing special ™ names for ATI paths to show them on charts, “warning” about future games unrelated to Doom3, and having a bundle deal with nVidia - then you would have reason to question Carmack’s credibility.

The difference is that, had Carmack said so, he would clearly be lying on most of the factual issues. ATi doesn’t inflate the number of codepaths; that distinction belongs to nVidia.

Secondly, Carmack is one man. Valve is a company. One of the reasons I give what they say more weight is that they are a group. Carmack is an individual with his own personal opinions on various matters.

And, in this instance, Valve is, in so far as their factual claims are concerned, 100% right. nVidia’s hardware has known fragment-program issues. We’ve had several threads dedicated to people having disappointing performance with ARB_fp under nVidia hardware. So, even if this is a PR stunt, at least it’s one grounded in facts, not idle speculation or lies.

FYI, I personally believe NV won because D3 uses OGL and because Carmack has the will to fully optimise for a HW architecture whether he likes its design or not. The latter is something some coding fanboys keep refusing to understand.

But nVidia didn’t win. According to Carmack, if both of them use the ARB path, ATi wins. Granted, it’s kind of an unfair test, since we know that nVidia’s hardware is weak in this area. However, it’s not a fair test to compare NV_fragment_program to ATi’s hardware either, since ATi didn’t optimize their hardware for fixed-point operations.

There isn’t really a fair test between these two pieces of hardware. On DX9/ARB_fp, ATi wins because those shaders can’t be optimized for nVidia cards. Under NV_fragment_program, nVidia wins, because nVidia’s hardware isn’t doing as much work as ATi’s.
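
(To make the “codepath” talk concrete: here’s a minimal sketch, not anyone’s actual engine code, of how a renderer typically picks its fragment path at startup by probing the extension string. The extension names are real; the function and enum names are just made up for illustration.)

[code]
#include <string.h>
#include <GL/gl.h>

typedef enum { PATH_NV_FP, PATH_ARB_FP, PATH_FIXED_FUNCTION } FragmentPath;

/* Crude substring check; real code should match whole extension names. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

FragmentPath choose_fragment_path(void)
{
    /* Prefer the vendor path on NV3x, since the generic ARB path forces
     * full float precision there and runs slowly, as discussed above. */
    if (has_extension("GL_NV_fragment_program"))
        return PATH_NV_FP;
    if (has_extension("GL_ARB_fragment_program"))
        return PATH_ARB_FP;
    return PATH_FIXED_FUNCTION;
}
[/code]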

since, AFAIK, the NV3x supports co-issue instructions too

By “co-issue”, do you mean issuing ALU and texture instructions on the same cycle? If so, you’re wrong; NV3x doesn’t support that.

I cannot help but feel that Valve is whining. This is the second time they have made a big deal about something in DX9. Is it fair for me to feel this way, or is Valve just standing up as a developer and saying they aren’t going to take crap from Microsoft or IHVs anymore?

They probably are whining. With good reason. Better to complain about a problem than be silent; at least, if you make noise, it might get fixed.

Developing shaders for nVidia’s hardware is hard. Not just because you have to limit your thinking to smaller precision, but because you have to spend time playing around with it until you strike upon the correct shader variants that get good performance. There’s no publicly available FAQ for getting decent performance out of it; only some general guidelines that don’t always work.
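
(To give an idea of what “playing around with shader variants” means in practice, here’s a made-up sketch based on the usual NV3x folklore of the day, i.e. that fewer live temporaries and fewer instructions tend to help. Both program strings compute the same two-texture modulate; neither comes from a real product, and whether the second one is actually faster on a given driver is exactly the sort of thing you have to measure.)

[code]
/* Variant 1: straightforward, three temporaries live at once. */
static const char *variant_naive =
    "!!ARBfp1.0\n"
    "TEMP base, detail, lit;\n"
    "TEX base,   fragment.texcoord[0], texture[0], 2D;\n"
    "TEX detail, fragment.texcoord[1], texture[1], 2D;\n"
    "MUL lit, base, detail;\n"
    "MOV result.color, lit;\n"
    "END\n";

/* Variant 2: same result, one fewer temporary and one fewer
 * instruction, writing the output register directly. */
static const char *variant_tweaked =
    "!!ARBfp1.0\n"
    "TEMP base, detail;\n"
    "TEX base,   fragment.texcoord[0], texture[0], 2D;\n"
    "TEX detail, fragment.texcoord[1], texture[1], 2D;\n"
    "MUL result.color, base, detail;\n"
    "END\n";
[/code]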

Granted, I’m sure that, if Valve had asked nVidia to take their shaders and optimize them, nVidia would have. However, there’s no reason that this needs to be the case.

Besides, Valve probably figures nVidia will just put some “optimizations” into their driver specifically for HL2 shaders that will give them the performance they want.

NVIDIA prefers OGL. Look at the latest official DX9 NVIDIA drivers (45.23)… no floating-point texture support… how is this possible when you can create them in OGL without problems???

“Without problems”? Are you kidding? nVidia only allows floating-point textures with texture rectangles. I don’t know what D3D says about supporting FP textures, but I wouldn’t be surprised to see that it requires full support (all formats and texture types) if you’re going to support it at all.
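
(For anyone wondering what “only with texture rectangles” looks like in code: a minimal sketch, assuming the GL_NV_float_buffer and GL_NV_texture_rectangle extensions are exported. The helper function name is made up; the tokens and the restrictions noted in the comments are exactly the limitation being discussed.)

[code]
#include <GL/gl.h>
#include <GL/glext.h>

/* Create a 32-bit float RGBA texture the NV3x way: the float internal
 * formats are only accepted on the rectangle target. */
GLuint create_float_texture(GLsizei width, GLsizei height, const GLfloat *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);

    /* Rectangle targets: no mipmaps, no GL_REPEAT, and no filtering of
     * float formats on this hardware, so clamp and use NEAREST. */
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_RECTANGLE_NV, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
                 width, height, 0, GL_RGBA, GL_FLOAT, pixels);
    return tex;
}
[/code]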

Originally posted by FSAAron:
[b]
>> GPU performance has been leapfrogging along for the last few years and will continue to do so into the future. Deal with it.

Why is everybody concentrating on these performance reports? Is the inferior floating-point performance of the 5900 in DirectX really that surprising?

I’m pissed because an important, respected ISV has joined one fanboy camp. They took part in an event whose only purpose was to show how one company’s products suck. They actively engaged in spewing FUD. This disgusts me. This has no precedent, not in that league.

[/b]

There was a previous article on Tom’s Hardware http://www.tomshardware.com/graphic/20030714/index.html which said that the 5900 “was the fastest card on the market.”

If the Valve results are accurate, then I thank Valve for enlightening at least Tom’s Hardware and a lot of end users.

I am having trouble understanding your anger toward Valve. Who would you rather have in Valve’s place? Or would you prefer having no benchmarks? Maybe everyone should buy based on how pretty the box looks.

[This message has been edited by mmshls (edited 09-11-2003).]

What is FUD?

****ed Up Data.

Maybe everyone should buy based on how pretty the box looks.

What, you don’t do that!? I always determine the speed of a video card by how pretty the graphics of the box are.

Seriously…

It’s too bad the FX cards are so slow in the fragment department. The benchmarks didn’t surprise me all that much, but I did expect a little better. And for $500 for an FX 5900 Ultra I would expect a hell of a lot more than what you really get. If this is the kind of performance you get for $500 from nvidia then I think it’s crystal clear what to buy (or what NOT to buy) for Half-Life 2.

Let’s just hope nvidia has some nice improvements in their 50.xx drivers, and on the same note, hope their next card will just burn through those fragment calculations (not literally of course ).

-SirKnight

quote:What is FUD?

****ed Up Data.

I asked this same question when I read this thread (I had seen it before, but I never bothered to look it up.) But when I asked the almighty Google “What is FUD?”, it directed me to a page that says that it is “Fear, Uncertainty, Doubt”. According to this page, it’s a marketing technique that casts doubt on a competitor’s superior product to keep customers from switching brands. Strangely, until I looked into it, I also thought it stood for “Effed” Up Data.

Originally posted by Korval:
By “co-issue”, do you mean issuing ALU and texture instructions on the same cycle? If so, you’re wrong; NV3x doesn’t support that.

I believe NitroGL was referring to the capability to perform a vec3 op and an alpha op in a single clock.
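
(In other words, a pair of instructions like the two masked writes below, one to the colour components and one to alpha, which co-issue-capable hardware can retire in a single clock. The program string is purely illustrative ARB_fragment_program, not code from any game, and the loader assumes a platform that exports the ARB entry points directly.)

[code]
#define GL_GLEXT_PROTOTYPES
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

static const char *co_issue_example =
    "!!ARBfp1.0\n"
    "TEMP tex;\n"
    "TEX tex, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color.xyz, tex, fragment.color;\n"  /* 3-component (RGB) op */
    "MOV result.color.w, tex.w;\n"                  /* scalar (alpha) op    */
    "END\n";

void load_co_issue_example(void)
{
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, 1);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(co_issue_example), co_issue_example);
}
[/code]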

As crap as this thread is, I still have to pose two questions…

1. Valve and ATI wouldn’t be partners, would they? (Yes, I know the answer)

2. What are the results like if you run HL2 on the 5900 with 16-bit FP? (Edit - I presume this is the mixed mode? Valve states that this won’t be possible in future titles - Who gives a toss, it takes them so long to put anything out that we’ll have GeForce 95000s by then anyway)

Basically, that article just makes me think that I won’t rush out and buy it - simply because it looks to me like they haven’t spent the sort of effort id have in optimising their product. I’ve found a couple of products that don’t perform (well) on my 5900 Ultra (GP3 and Colin McRae Rally 3) - but I can find plenty of others that perform extremely well (especially the GL products).

[This message has been edited by rgpc (edited 09-11-2003).]

Valve and ATI wouldn’t be partners would they?

Which doesn’t prevent Valve from speaking the truth. It only makes them more likely to embellish it.

The material statements made by Valve (as opposed to things like their presuming that future titles would need high-precision floats, which is clearly speculation) are 100% true. The FX line does have these problems with cross-platform shaders. We knew this 5 months ago. Every week, somebody asked, “Why does my FX 5600 not run ARB_fp shaders very well?” And these people were quickly referred back to the Beyond3D boards, where shader benchmarks first revealed the problems with the FX line and cross-platform fragment shaders.

Is it FUD when it is the truth?

Simply because it looks to me like they haven’t spent the sort of effort id have in optimising their product.

This kind of attitude makes no logical sense.

First, D3D’s ability to allow the use of 16-bit precision (via modifiers, presumably what the article refers to as “mixed mode”) isn’t good enough for the FX. What you really need to get performance is to use fixed-point, which neither D3D nor ARB_fp supports. Hence, no matter what Valve does, they aren’t getting good performance out of an FX without writing nVidia-specific code. For OpenGL, there is a potential solution. For D3D, there is none.
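
(To illustrate the OpenGL side of that: the only precision control ARB_fragment_program offers is a whole-program hint, while NV_fragment_program lets every instruction pick fp32/fp16/fx12. Both program strings below are made up for illustration, not taken from HL2 or anything else.)

[code]
/* Portable path: one coarse hint; the driver decides what it means. */
static const char *arb_version =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"
    "MUL result.color, fragment.color, fragment.texcoord[0];\n"
    "END\n";

/* NV3x-only path: per-instruction precision suffixes (R = fp32, H = fp16,
 * X = fx12), which is where the fixed-point speed actually comes from. */
static const char *nv_version =
    "!!FP1.0\n"
    "MULX H0, f[COL0], f[TEX0];\n"   /* fx12 multiply into a half register */
    "MOVH o[COLH], H0;\n"            /* half-precision colour output       */
    "END\n";
[/code]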

Secondly, even if they had a possible solution (which, as I pointed out, D3D doesn’t), time spent building an entire new shader codepath is time taken away from optimizing other parts of their game. HL2 is very CPU intensive, especially with all that physics and so forth they’ve got going on. They have to be able to make that work on 1.6GHz AthlonXP’s (they’d better ), lest they alienate too much of the market.

Lastly, you can, presumably, turn down the graphical spiffiness if your card can’t handle it. One of the purposes of their even bringing this up is so that people aren’t shocked (and ticked off) when they get HL2 home to their FX5600-Ultras, only to find that acceptable performance is not available with the spiffy graphics.

You’re the one who bought the FX5900-Ultra without thinking through the possible ramifications on the fragment shader end. It is unfortunate (especially considering what you paid for it), but you had the opportunity to weigh the available evidence about its cross-platform fragment shading capacity. It’s not our, or Valve’s, or ATi’s fault that you purchased the wrong card for what you are wanting to do.

I think that we can’t really speculate on how much effort either company put into optimization. HL2 could be using a featureset much more advanced than that used in Doom 3. After all, there’s more than just floating point precision pixel shaders in DX9. Perhaps many of the items in HL2’s featureset (other than PS2.0) are those that the GeforceFX is weak in. If someone could list the DX9-level features that each are using, we could perhaps evaluate the performance of both on either IHV’s video cards to determine if the performance levels shown by Valve are reasonable.

FUD = “fear, uncertainty, doubt”. It refers to one competitor trying to spread fear, uncertainty, and doubt about another’s product or business, typically through rumour mongering, vague assertions, and disparaging statements of all sorts. Generally it refers to unsubstantiated, unfair, and mostly underhanded practices that lack real merit or impartiality.

This does not seem like FUD to me; it seems like an independent developer with a lot of credibility presenting their experiences, and the fact that it puts NVIDIA in a bad light doesn’t mean it is FUD. Sometimes uncomfortable observations are just uncomfortable truths. Sure, from ATI’s perspective it is quite a coup; they paid a truckload of cash in their Half-Life deal and it looks like they are getting their money’s worth, but that still doesn’t mean the presentation was pure FUD. There’s a bit too much substance, and Valve has a bit too much credibility, for that. I do find a couple of things in it a bit surprising, so I’m reminded that sometimes people have axes to grind etc., and what you take away from this really depends on how you regard platform-specific optimizations and whether you trust a credible developer at an ATI conference who has received a wad of cash from ATI.

FUD would be some marketing droid at ATI suggesting that NVIDIA’s schedules were slipping on NV40 and that they were struggling to fund their engineering department as their key technologists jumped ship due to deflated stock prices and treadmill project cycles. Of course I just made all that up; none of it is true, and it would be an example of FUD if anyone posted it in earnest.

[This message has been edited by dorbie (edited 09-11-2003).]

More of a reason for me to stick with OpenGL.

-SirKnight

Originally posted by Korval:
Hence, no matter what Valve does, they aren’t getting good performance out of an FX without writing nVidia-specific code. For OpenGL, there is a potential solution. For D3D, there is none.

You’re the one who bought the FX5900-Ultra without thinking through the possible ramifications on the fragment shader end. It is unfortunate (especially considering what you paid for it), but you had the opportunity to weigh the available evidence about its cross-platform fragment shading capacity. It’s not our, or Valve’s, or ATi’s fault that you purchased the wrong card for what you are wanting to do.

Actually I bought it based on the performance figures from the DOOM3 benchmarks. But apparently I not only bought the wrong card, I talked Valve into ditching GL and concentrating on DX.

mmshls, you are not my mom and I don’t know you; you have no right to call me names and expect me not to be offended. If you meant to offend me, admit it, or STFU.

Maybe you should have named the post “mmshls is a troll!!111”

I’m so glad that a call to elevate the tone of discussion is met with such an intelligent response. Maybe you should just post a picture of your ******* and save us all some time getting to whatever point you were trying to make.

http://www.gamersdepot.com/hardware/video_cards/ati_vs_nvidia/dx9_desktop/HL2_benchmarks/003.htm

Let’s wait for Det 50.

Nakoruru, nobody called you an nvidiot; you posted AFTER those comments and assumed they were talking about you. Hilarious.

Making negative comparisons about a graphics card is not a personal attack, and nvidiot is a great term for people who lack objectivity and are excessively pro-NVIDIA. I just wish we had an equivalent term for their counterparts in the ATI camp; I think it’s fanATIc, but it doesn’t have the same ring. This thread wasn’t a troll; it’s about a newsworthy event.

You can prefer NVIDIA without being an nvidiot; just don’t go around throwing pro-NVIDIA tantrums in place of reasoned debate and nobody will think you’re one.

W.r.t. NVIDIA’s response, it sounds very reasonable, and refreshingly frank & honest. The generic shader optimizer in Rel. 50 sounds great (I suspected related work after seeing the updated 3DMark PS2.0 results alongside NVIDIA’s policy changes); if that is the outcome of the Futuremark debacle then it’s a great one for everyone. Gabe may end up with egg on his face over this if he has the unreleased drivers and is pushing the old numbers, but nobody is ever going to come out of something like this untarnished. It’s a snapshot in time in a dynamic situation. I’m glad to hear that NVIDIA can narrow this gap; it’s important for all of us that they do.

[This message has been edited by dorbie (edited 09-12-2003).]

What I find rather interesting, actually, is…

The Matrox Parhelia. Everyone knows it’s a rather slow and bad card. Everyone knows it does have good features anyway, especially the displacement mapping, as well as some other stuff.

But it is rather simple: it’s a slow card, not worth the money. Matrox made a “bad card”.

Why does it hurt so much to accept that this happened to nVidia? The gfFX DOES indeed have good sides. It runs very well in DX8-class applications, for example. It doesn’t have that good DX9 support. It doesn’t have good ARB OpenGL support. It does have tons of its own extensions to expose its features, and it’s good that GL can give such access.

But it’s still a card that performs just plain badly in a standard application, be it a synthetic or game benchmark, or your own coded demo.

Of course, nVidia tried to hide that, with marketing and cheating as well as with REAL optimisation in the drivers. We’ll have to see how well the Det50 really works; I wish them the best at least.

But I think we should just accept that the gfFX is not such an amazing card. Then all the fanboy-calling and crying and flamewars could simply stop. Nobody starts flamewars over Matrox.

nVidia can do great next time. We’ll see.