Nvidia or ATI for OpenGL

I know somebody will probably say, “research this yourself”, but I would like to know if anybody has their own opinion on which card is better for OpenGL programming, Nvidia or ATI (their latest cards, that is). I know ATI is ‘supposedly’ better for DirectX 9 because of the Half-Life 2 news, but Nvidia seems to support OpenGL more, judging by the greater number of demos on their site. Any opinions on this?

Have you at least searched this forum for discussions on both cards?

nVidia tends to have a lot of their own proprietary extensions, whereas ATI has more ARB extensions. I think ATI is closer to the standard than nVidia because of that.

nVidia does have more programmability on the FX part, but it doesn’t really have the speed unless you want to toss precision out the window. ATI, on the other hand, has the speed, but you don’t get the long shader programs (and you get a slightly less powerful instruction set).

Personally, I would wait for the next generation of chips to come out and then decide which card/chip you want.

I have an nVidia FX 5900 Ultra. The card works great, much better than the ATI card in my laptop (an ATI Radeon 7500 32 MB; compared to my old nVidia GeForce 2 MX it ran like ****). I like nVidia because they support OpenGL a lot and for their continuous *nix support. In all the reviews I saw, the FX was much faster than ATI in OpenGL; it was mixed for DX.

I have a Radeon 9000 at work and a GFFX 5900 Ultra at home. Both work great. For Windows, it is a toss-up. Purchase the best card you can afford. Try to get one of each for testing purposes.

nVidia’s fan and extra power dongle might annoy you. I don’t mind so much.

On Linux (and other *nix OSes) go with nVidia. The driver support is proven. ATI did just release Linux drivers, but I’d give them six months to a year and take a wait-and-see approach.

It is not true that ATI supports significantly more ARB extensions than nVidia. If you take the extensions from nVidia’s OpenGL spec and from ATI’s webpage, you see that nVidia supports 24 and ATI supports 25. Hardly a big enough advantage to make them ‘more standard’.

At the same time, nVidia has 34 NV extensions and ATI only has 15. Since nVidia supports nearly as many ARB extensions as ATI, I would consider having twice as many vendor extensions from nVidia to be very generous (they are letting you get at the actual hardware more than ATI does).

If the situation were different, i.e., nVidia did not support ARB extensions just as much as ATI, then I would believe the myth that nVidia is ‘Glide-like’ and proprietary. But if all you base it on is the number of extensions, then that simply is not true.

Of course, things are a little more complicated than just how many extensions are supported. It is true that nVidia does not perform as well using ARB_fragment_program as ATI does. But, I’m only pointing out that you cannot base your argument on the number of extensions, because there isn’t really any difference.
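
If you want to check this for your own machine, a minimal sketch along these lines (plain C and core OpenGL calls; the helper names and the particular extension strings tested are only illustrative) shows what a driver actually exposes at runtime:

    /* Must be called with a current OpenGL context. */
    #include <GL/gl.h>
    #include <stdio.h>
    #include <string.h>

    static int has_extension(const char *name)
    {
        const char *ext = (const char *)glGetString(GL_EXTENSIONS);
        return ext && strstr(ext, name) != NULL;   /* naive substring test */
    }

    void report_extensions(void)
    {
        printf("GL_ARB_fragment_program: %d\n", has_extension("GL_ARB_fragment_program"));
        printf("GL_NV_fragment_program:  %d\n", has_extension("GL_NV_fragment_program"));
        printf("GL_ATI_texture_float:    %d\n", has_extension("GL_ATI_texture_float"));
    }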

Reviews’ OpenGL game performance results depend on a number of factors, and I don’t think that game benchmarks can really be used to determine what video card one should get for development. My own opinion is that one’s choice must depend on what one expects to be doing with the video card. For that, the specific performance of individual features is far more relevant.

If one wants to do heavy fragment shader work in real time, ATI is the one to go for, simply due to speed. For experimental or non-real-time applications, the GeForce FXs are certainly very good, simply due to having fewer limitations on shader instruction counts and dependent texture reads. Also, speed doesn’t -really- matter in those cases, since you’re already expecting frame rendering to take a long time.

Maybe he meant to say that ATI is more GL compliant. Witness the crossbar issue under nVidia, or the clamp-to-edge fiasco.
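
For anyone who has not run into it, the clamp complaint comes down to the distinction between two wrap modes. A minimal sketch (the function name is only illustrative) of requesting the behaviour you actually want, rather than relying on a driver’s interpretation of GL_CLAMP:

    #include <GL/gl.h>
    #include <GL/glext.h>   /* GL_CLAMP_TO_EDGE on GL 1.1 headers */

    void set_edge_clamp(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Conformant GL_CLAMP may blend in the border color at the edge;
         * GL_CLAMP_TO_EDGE (core since OpenGL 1.2) clamps to the edge texels.
         * Asking explicitly for the mode you mean avoids depending on a
         * driver that treats GL_CLAMP as if it were GL_CLAMP_TO_EDGE. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    }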

I prefer ATI because you can not only create nice fragment programs with floating point, but you can also feed input values in as floats from textures, and render outputs as floats to textures…

This, with few restrictions, allows a great move over to full HDR lighting instead of the 0…1 range.
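
As a rough illustration of the kind of shader this enables, here is a minimal sketch (assuming the ARB_fragment_program entry points are already resolved, e.g. via GL_GLEXT_PROTOTYPES or an extension loader; the function name and exposure value are only illustrative) of a fragment program that reads unclamped values from a float texture and applies an exposure scale:

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>
    #include <string.h>

    static const char *fp_src =
        "!!ARBfp1.0\n"
        "PARAM exposure = program.local[0];\n"
        "TEMP hdr;\n"
        "# sample a floating-point texture; values may lie well outside 0..1\n"
        "TEX hdr, fragment.texcoord[0], texture[0], 2D;\n"
        "# simple exposure scale; the program itself imposes no 0..1 clamp\n"
        "MUL result.color, hdr, exposure;\n"
        "END\n";

    GLuint load_hdr_program(void)
    {
        GLuint prog;
        glGenProgramsARB(1, &prog);
        glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
        glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(fp_src), fp_src);
        /* exposure factor, picked arbitrarily for the sketch */
        glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, 0, 0.5f, 0.5f, 0.5f, 1.0f);
        return prog;
    }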

If you want to be future proof, I’d suggest an ATI. Why? Because even Carmack says so: the FX is great for DX8, but not for much more…

About the Glide thing… well… if nVidia would get the ARB extensions to run WELL and only have the NV extensions as EXTENSIONS, then I would say no. Currently, you actually need the NV extensions if you want acceptable performance, and the ARB extensions are just in there to claim GL 1.4 or whatever support. They perform very badly and are rather useless.

Then again, I could be ATI-biased because my 9700 still rocks after over a year… and can still keep up with the newest FX cards and beat them in HDR situations, both in features and speed.

I have an nVidia FX 5900 Ultra. The card works great, much better than the ATI card in my laptop (an ATI Radeon 7500 32 MB; compared to my old nVidia GeForce 2 MX it ran like ****).

I’m not quite sure what you’re comparing here. If an FX 5900 Ultra can’t take out a Radeon 7500, which is 2.5 years older, then nVidia never deserves to sell another card again; it’s expected that cards released long after current cards will run faster. As for the 7500 vs the 2 MX, that’s an old argument, and it does not speak to the nature of current hardware.

In all the reviews I saw, the FX was much faster than ATI in OpenGL; it was mixed for DX.

On the other hand, for games that actually test DX9/ARB_fp functionality, the clear winner in every test was the ATi card.

At the same time, nVidia has 34 NV extensions and ATI only has 15. Since nVidia supports nearly as many ARB extensions as ATI, I would consider having twice as many vendor extensions from nVidia to be very generous (they are letting you get at the actual hardware more than ATI does).

But ATi’s extensions tend to be better.

Take ATI_texture_float vs. NV_float_buffer. Same basic functionality (floating-point textures), but the ATi extension offers more power. The ATi extension allows for any kind of floating-point texture (1D, 2D, cube, etc.), while the NV version is limited to NV_texture_rectangle. The ATi one offers all of the data formats that regular textures do (intensity, luminance, etc.), while nVidia’s only offers RGB and RGBA.
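
To make the difference concrete, here is a minimal sketch (assuming glext.h enums and a context that exposes the respective extensions; the function name and the 256x256 size are only illustrative) of uploading the same RGBA float image both ways:

    #include <GL/gl.h>
    #include <GL/glext.h>

    void upload_float_textures(const float *pixels)   /* 256*256*4 floats */
    {
        GLuint ati_tex, nv_tex;

        /* ATI path: an ordinary 2D texture with a 32-bit float RGBA format. */
        glGenTextures(1, &ati_tex);
        glBindTexture(GL_TEXTURE_2D, ati_tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT32_ATI,
                     256, 256, 0, GL_RGBA, GL_FLOAT, pixels);

        /* NV path: float data lives in a texture rectangle (non-normalized
         * texcoords, no mipmaps, RECT target in fragment programs). */
        glGenTextures(1, &nv_tex);
        glBindTexture(GL_TEXTURE_RECTANGLE_NV, nv_tex);
        glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
                     256, 256, 0, GL_RGBA, GL_FLOAT, pixels);
    }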

Also, a lot of nVidia’s extensions relate to vertex programs (which are deprecated by ARB_vp, except for NV_vp2), VAR (deprecated by VBO), or older functionality that is superseded by newer functionality. The bulk of nVidia’s extensions are for getting at older hardware.

Admittedly, most of ATi’s extensions are deprecated by VBO too, but the rest are almost all new functionality for their current generation of graphics chips.
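
For anyone unfamiliar with the extension both camps’ old array mechanisms are measured against, here is a minimal sketch of ARB_vertex_buffer_object usage (assuming the ARB entry points are resolved; the helper name is only illustrative):

    #define GL_GLEXT_PROTOTYPES
    #include <GL/gl.h>
    #include <GL/glext.h>

    GLuint make_vbo(const float *verts, GLsizeiptrARB bytes)
    {
        GLuint vbo;
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);
        /* With a buffer bound, the pointer argument becomes a byte offset. */
        glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
        glEnableClientState(GL_VERTEX_ARRAY);
        return vbo;
    }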

I prefer ATI because you can not only create nice fragment programs with floating point, but you can also feed input values in as floats from textures, and render outputs as floats to textures…

For the sake of fairness, I have to point out that an nVidia card can do much of the above. It doesn’t handle float textures nearly as well (as I pointed out before), but it can do them. Just not quite as fast as an equivalent ATi card.

because even Carmack says so

Oh, there’s a great reason to do something. Because Carmack said so.

if nVidia would get the ARB extensions to run WELL and only have the NV extensions as EXTENSIONS, then I would say no.

Then say no. Benchmarks with the most recent versions of the Det50’s show that the gap between the two has narrowed considerably. Granted, it hasn’t gone away, but the high-end nVidia cards are performing respectably in what appear to be floating-point situations. And they do it without a dramatic loss in image quality (see www.anandtech.com for comparison).

The statement about Carmack was there for one reason only:

People buy the GeForce FX because it performs GREAT in Doom 3, and they think Doom 3 is THE FUTURE of games.

Carmack stated himself that, yes, the GeForce FX rocks in Doom 3, but only because Doom 3 does not need any advanced new features (DX9-style features, that is), nor the high precision of floating-point shaders. For any new-style game, the FX performs very badly even in his own tests, and that is, for him, rather disappointing to see.

Yes, you can do floating-point textures on the FX, but, as already mentioned, not at all as well as on ATI cards. There, we simply have float textures whenever we want, wherever we want (at least I have never had any problem with any sort of float texture…). On nVidia, it can only be done with the proprietary NV_texture_rectangle, and I’m not even sure how well it works with ARB_fragment_program. It does work, though, with the proprietary NV_fp, of course…

Oh, and I’m not sure about the Det50s… IMHO, the image quality is rather bad now… and still there is a gap.

I prefer running at 24-bit floats to not knowing whether I’m running at 32-bit floats or 16-bit floats or whatever…

Well, I can’t trust nVidia drivers anymore anyway… too much has gone wrong in the last months, and they still haven’t officially stated that they were wrong and that they will change it.

On the other hand, for games that actually test DX9/ARB_fp functionality, the clear winner in every test was the ATi card

I just remember the one coming in here crying that his 5200 gets only a few fps in the Humus demos, while the one-year-old 9700 gets 50 (according to the page; I can confirm those numbers).
The nVidia marketing department is GREAT… it would be better if the nVidia hardware development department were that GREAT.

This won’t make your decision any easier 'cause it’s the other way round, but anyway:

NVidia cards perform very well under OpenGL compared to DirectX (pretty much the same speed), while some people claim that ATI cards generally perform better under DirectX (I became one of them the last time I checked).
If someone can disprove that statement, please let me know!

Originally posted by davepermen:
Carmack stated himself that, yes, the GeForce FX rocks in Doom 3, but only because Doom 3 does not need any advanced new features (DX9-style features, that is), nor the high precision of floating-point shaders. For any new-style game, the FX performs very badly even in his own tests, and that is, for him, rather disappointing to see.

And where did you get that info? I haven’t seen a Carmack .plan for a while, and by the time I read the last one, I remember the GFFX cards weren’t even in stores.

I just remember the one coming in here crying that his 5200 gets only a few fps in the Humus demos, while the one-year-old 9700 gets 50 (according to the page; I can confirm those numbers).

You compare an FX 5200 with a 9700?
Fine, why don’t you compare it with a card that is meant to be equivalent, say the 9000, which is not even capable of fragment programmability?

The nVidia marketing department is GREAT… it would be better if the nVidia hardware development department were that GREAT.

Oops, I think that is not a line to post, as I would consider it flaming. You will be bann… huh, I mean nothing, there aren’t even moderators on this board.

Carmack stated himself that, yes, the GeForce FX rocks in Doom 3, but only because Doom 3 does not need any advanced new features (DX9-style features, that is), nor the high precision of floating-point shaders.

You are aware, of course, that the only advanced features where the FXs don’t perform as fast as the 9500+ involve floating point. There are many other DX9 features besides floating point, and they are arguably more useful than floating point too.

and I’m not sure about the Det50s… IMHO, the image quality is rather bad now

Your opinion is wrong and likely based on outdated information. The image comparison on AnandTech shows, at worst, a negligible difference between the ATi card and the Det50s on the nVidia one. There’s some difference in anisotropic filtering quality, but that’s to be expected when you’re working with two different hardware implementations. And the actual differences are almost all due to actual driver bugs (the Det50s aren’t officially out yet).

and still there is a gap.

When you benchmark two cards, one of them wins and one of them loses. The “gap” you are referring to is no longer a full-fledged rout, but can be explained as simply a performance difference for certain kinds of applications.

A similar gap exists if you benchmark an nVidia card against an ATi card in a stencil-shadow-heavy game, except that it points in nVidia’s favor.

What other useful things came with DX9 cards besides floating point over the whole pipeline? Better pixel shaders, which both have; better vertex programs, which could even be emulated reasonably well. The rest was all already there (not in DX8, but in hardware).

The only difference now is floating point everywhere. And there, the FX lacks a lot of features (very restricted floating-point texture support) and a lot of performance. They run well, though, in DX8 apps.

I don’t know of anything else special that DX9 brought to the hardware…

Yes, I compare the FX 5200 with the 9700… just for the fun of it. nVidia marketing gets people here to believe that they can buy a cheap FX 5200 and beat my old 9700. That’s why I compare them.
Oh, and the 9000 is quite capable of fragment programmability, just only PS 1.4, but at least it does that rather well…

Well, I’ll have to read up on AnandTech again then… hm… everything I’ve read and seen till now is VERY BAD for anything 50.xx and higher… I’ll recheck…

Oh, and Korval, weren’t you the one who always argued in the other threads that fixed point is useful? I still can’t see any use for it. It can be a nice speedup if you WANT TO CARE, but first of all the card should perform well in the general case, and there the data should generally be floats.

About the Carmack statement: this was a mail; I think it got posted on Beyond3D, not sure anymore where… I’ll look around for it. They asked about the HL2 fiasco, and conversely why Doom 3 runs so well while the other DX9 games all don’t (same for OpenGL games with ARB_fp…). His answer was that Doom 3 runs badly with ARB_fp on the FX too, but he doesn’t need ARB_fp (yes, Korval, he does not need floats for Doom 3). In general, though, more future-oriented apps will require floats, for HDR and similar. Doom 3 is NOT a DX9/ARB_fp app; it’s an old-style app, designed mainly for DX8-capable cards.

That’s about his statement, and the explanation of why the FX performs so exceptionally well in Doom 3 compared to all other new games.

And he stated as well that the precision difference is visible/measurable, but it doesn’t hurt much in the case of Doom 3.

Anand is wrong. Det 52.14 forces ‘brilinear’ filtering in DirectX Graphics. You can’t get trilinear filtering - at all.

For those who can read German, this is what I’m talking about. An English version is in the works.

I trust the guy who wrote that article - a lot more than I trust Anand these days.

edit: OpenGL texture filtering is okay.


That sounds more like a bug, much like the infamous 16-bit texture OpenGL bug that was in the Radeon drivers for a few releases. That has been fixed in Cat 3.8 (released today), though.

For dev work I would have BOTH nVidia and ATI cards, and I do. Why do you have to make a choice? Unless you can force your customers to use one or the other.

If I had to choose one, it would be nVidia, because even if ‘you have to use NV extensions to get good performance’, well, you HAVE to because that’s simply the way things are, and I would not want to release a product that half my customers are going to think sucks.

That goes both ways; the only way to get good performance on both is to write your product for both. Don’t write it for ATI pretending that it is ‘standard’ and then complain about how it doesn’t work well on nVidia cards, like Valve did.

If you are some college kid or hobbyist looking to get into OpenGL programming, then I do not think it really matters at all; get the one that will play the games you like better when you aren’t programming. It’s not like you are going to be taking each card to the limit. You are not John Carmack. By the time you run into any real issues with your card, the next generation of cards will be out and all the issues will be different.

Now how on earth can this be a bug? I beg to differ …

NVIDIA designed ‘brilinear’ filtering into the FX series. They’ve done so on purpose, clearly. NV2x can’t do it, no other chip on the market can do it. And now they are using it.

It improves performance at the expense of quality. Nothing more, nothing less. No graphics API known to man wants this type of filtering, yet there it is, after some explicit silicon redesign effort. And again the apologetics come along and say it’s a bug. Sheesh.

Regarding ATI’s 16-bit textures, sure, it was a regression. You could still get 32-bit textures on Cat 3.7; you just had to explicitly request them. Default texture depth is specified very loosely in OpenGL, so it wasn’t even a spec violation.
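
For context, a minimal sketch of what “explicitly request them” means in practice (plain OpenGL 1.1 calls; the function name is only illustrative): a generic internal format leaves the stored precision up to the driver, while a sized format pins it down.

    #include <GL/gl.h>

    void upload_rgba8(const unsigned char *pixels, int w, int h)
    {
        /* A generic internal format such as GL_RGBA lets the driver pick
         * 16- or 32-bit storage; the sized GL_RGBA8 asks for 8 bits per
         * channel explicitly. */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                     w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }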


Concerning nVidia’s clamping behavior and crossbar: nVidia cards support texture crossbar; there is just a slight difference in how invalid texture stages are handled. Is it really such a big deal? (Serious question, not rhetorical.) Also, you can enable conformant clamping behavior in the control panel. It is like that because some games rely on the incorrect behavior to look correct.

If anyone was looking to call nVidia non-conformant, I think they need to look a little deeper than those two issues.

Also, as a developer, I do not think I am too worried about nVidia’s optimizations for games like UT2003 (i.e., brilinear filtering), because they do not affect MY application. If I request trilinear on an FX card in my own program, I get trilinear filtering, right? (Again, a serious question.)
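
For reference, this is all the request amounts to on the application side (plain OpenGL 1.1 calls; the function name is only illustrative). Whether the driver silently substitutes a cheaper filter is exactly the open question above.

    #include <GL/gl.h>

    void request_trilinear(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        /* Linear filtering within each mipmap level plus linear blending
         * between the two nearest levels = trilinear. */
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }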