Nvidia or ATI for OpenGL

nakoruku, I don't get you. If you had to choose between the cards, you would take NVIDIA? Why?

If you can choose between a card that runs standard OpenGL well (and ARB_fp IS standard OpenGL, even if it is new), so you can code against standard GL and it performs well, and a card that cannot run standard GL fast, so you always have to fall back to proprietary extensions which can and will die out today, tomorrow, or in a few years, but definitely before standard OpenGL does, why would you choose the proprietary one?

I coded a lot on the GF2 with proprietary extensions, because the card was nowhere near as usable for pixel shading through standard GL (similar to the FX now, just at a higher level). I cannot run any of those old apps anymore, thanks to that proprietary code.

The apps I code for the 9700 now will work forever (well, yes), since OpenGL will last forever (… again… well, yes…). They will run on FX cards as well, just not that well, but that is not MY fault. First I want to make the thing work now and in the future, fast on the cards that run OpenGL fast. THEN I can still add proprietary optimisations, or try to map my stuff onto lower-end hardware like the Radeon 8500+ and GF3+ (which would be more useful than rewriting for a super-optimized, low-quality FX version, IMHO… I know more people who own a GF3 or GF4 than a GeForce FX…).
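
For what it's worth, here is a minimal sketch (in C, with illustrative function names like has_extension and pick_fragment_path) of the kind of check I mean: prefer the standard ARB path, and fall back to vendor extensions only when it is missing.

    #include <string.h>
    #include <GL/gl.h>

    /* Sketch only: a real app should match whole extension tokens,
       not just substrings, to avoid false positives. */
    static int has_extension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        return exts != NULL && strstr(exts, name) != NULL;
    }

    /* Illustrative path picker: standard first, proprietary as fallback. */
    const char *pick_fragment_path(void)
    {
        if (has_extension("GL_ARB_fragment_program"))
            return "ARB_fp";    /* standard OpenGL, R300/NV30 class */
        if (has_extension("GL_ATI_fragment_shader"))
            return "R200";      /* Radeon 8500 class */
        if (has_extension("GL_NV_register_combiners"))
            return "NV20";      /* GeForce 3/4 class */
        return "fixed-function";
    }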

Tell me one reason to support the FX first, rather than standard OpenGL. And remember, you're in the OpenGL forum, not on cgshaders.org.

Originally posted by davepermen:
Tell me one reason to support the FX first, rather than standard OpenGL.

You obviously don’t write OpenGL apps for a living. If you did, you would realize that you need your app to run well on many different cards, and hence writing multiple code paths is almost inevitable. If you can only afford to buy one card, it therefore makes sense to get an NVidia, since they can run the most different code paths on a single card. They also still cover the largest share of your potential user base.

– Tom

(Edit: spelling)


Hey davepermen, I would say you are absolutely right if I were talking about which card I would choose personally, or if I were writing something I wanted to keep for a long time, or if performance were the most important thing, but that is not the context.

I should have been clearer: if I were a commercial developer and could only choose one card, it would be nVidia. If I want my game to perform well on all the platforms my customers run, then I HAVE TO use nVidia's proprietary extensions, and therefore I need an nVidia card. I do not consider performing well on all the cards most of my customers are expected to own to be optional, and if I think it's too hard, then I'm just being lazy.

It is all about context; there is no single card that wins in all situations.

I would want to use an ATI if I had to choose just one for personal use, but then again, there seem to be a couple of things I want to do privately that nVidia supports and ATI does not yet. I'll have to check before I can be more specific, however.

I was just reiterating what John Carmack said about using an nVidia card in his dev machine because it could run the game through more code paths, and was therefore more useful for development (even if the ARB path was slower, that mattered less for development than simply having everything work).

If the ARB path performed just as well as the NV path, there would not even be any need to write the NV path, and all things would be equal again. I guess I'm flipping everything on its head and saying that as long as nVidia owns a large segment of the market, and as long as you have to use proprietary extensions to get the best performance, the developer needs to have an nVidia card in one of his machines :)
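
As a rough sketch of what those multiple code paths look like in practice (the path names loosely follow the ones Carmack has described for Doom 3; the enum, the ordering, and the prefer_vendor_path switch are all illustrative, and has_extension() is the helper sketched earlier):

    /* Illustrative only: one backend per hardware class, with the vendor
       path preferred on hardware where the standard path is slow. */
    typedef enum {
        PATH_ARB2,   /* ARB_vertex_program + ARB_fragment_program (R300, NV30) */
        PATH_NV30,   /* NV_fragment_program, reduced precision where possible  */
        PATH_R200,   /* ATI_fragment_shader                                    */
        PATH_NV20,   /* NV_register_combiners / NV_texture_shader              */
        PATH_ARB     /* plain multitexture fallback                            */
    } render_path_t;

    render_path_t choose_render_path(int prefer_vendor_path)
    {
        if (prefer_vendor_path && has_extension("GL_NV_fragment_program"))
            return PATH_NV30;   /* faster on GeForce FX */
        if (has_extension("GL_ARB_fragment_program"))
            return PATH_ARB2;   /* the standard path */
        if (has_extension("GL_ATI_fragment_shader"))
            return PATH_R200;
        if (has_extension("GL_NV_register_combiners"))
            return PATH_NV20;
        return PATH_ARB;
    }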

It is not developers who determine which card they need; it is their customers.

I guess it is sort of a messed up situation.

So, I hope you ‘get me’ now.

I don't know of anything else special that DX9 brought to the hardware…

Nothing?

Off the top of my head, I see looping in vertex programs and real fragment programs that go beyond merely setting up some fixed-function state.

Note that everybody can use these features. Not everybody wants to use HDR and float buffers; they aren't appropriate for all rendering. If I'm doing non-photorealistic rendering, HDR does nothing for me. But conditional branching in VPs and good FPs are still useful to me.
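
To make that concrete, here is what a loop in a vertex program buys you, written as a glslang-style shader embedded in C purely for readability (the light-loop shader, the lightCount uniform, and the varying name are made up for this example; on today's drivers the same idea is expressed through vendor assembly such as NV_vertex_program2):

    /* Illustrative glslang-style vertex shader: accumulate diffuse lighting
       over a run-time light count, something DX8-class vertex units could
       not loop over. The string would eventually be handed to the
       ARB_shader_objects entry points. */
    static const char *vs_src =
        "uniform int lightCount;\n"
        "varying vec3 diffuse;\n"
        "void main()\n"
        "{\n"
        "    vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n"
        "    diffuse = vec3(0.0);\n"
        "    for (int i = 0; i < lightCount; ++i) {\n"
        "        vec3 l = normalize(gl_LightSource[i].position.xyz);\n"
        "        diffuse += max(dot(n, l), 0.0) * gl_LightSource[i].diffuse.rgb;\n"
        "    }\n"
        "    gl_Position = ftransform();\n"
        "}\n";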

Yes, I compare the FX5200 with the 9700… just for the fun of it.

OK, to balance your anti-nVidia stance, let’s compare the 9600Pro to the 5900FX. Oh, look, the 5900FX always wins. Let’s do it again. The 5900FX wins again.

NVIDIA marketing gets people here to believe that they can buy a cheap FX5200 and beat my old 9700.

Don't blame marketing for the uninformed populace. Anybody who thinks that a $100 card can match a $400 card is clearly uninformed and deserves what they get.

But in general, more forward-looking apps will require floats, for HDR and the like.

Perhaps. However, the GeForceFX has a significant performance advantage in the case of rendering stencil shadow volumes. The FX’s lead in Doom3 is more than just being faster at DX8 tasks.

Any app that uses stencil shadows will be able to run through the volume rendering steps faster on an FX than on an equivalent R300 card. That advantage could make up for the FX's deficit in floating-point fragment programs.
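
For context, the "volume rendering steps" are essentially fill-rate-bound stencil passes like the following depth-fail sketch (drawShadowVolumes() is a placeholder for the app's own geometry submission, and the WRAP ops come from EXT_stencil_wrap):

    /* Depth-fail ("Carmack's reverse") stencil update, sketched. */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    glCullFace(GL_FRONT);                             /* back faces...              */
    glStencilOp(GL_KEEP, GL_INCR_WRAP_EXT, GL_KEEP);  /* ...increment on depth fail */
    drawShadowVolumes();

    glCullFace(GL_BACK);                              /* front faces...             */
    glStencilOp(GL_KEEP, GL_DECR_WRAP_EXT, GL_KEEP);  /* ...decrement on depth fail */
    drawShadowVolumes();

    /* The lighting pass then draws only where stencil == 0. */

On hardware that exposes EXT_stencil_two_side, the two culled passes can be folded into one, cutting the geometry cost of the volumes in half.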

Anand is wrong.

A picture is worth a thousand words. Consider that they have at least 10 image comparisons. The images don't lie; the image quality difference is negligible. Certainly, the questionable behavior of earlier Det50s (missing effects, improper lighting, etc.) has been removed.

Det 52.14 forces ‘brilinear’ filtering in DirectX Graphics.

What is “brilinear” filtering?

Now how on earth can this be a bug?

Easily. Someone accidentally passed the hardware the wrong value when the user said "trilinear". I'm sure we've all introduced similar bugs.
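
To make the terms concrete, in OpenGL the difference between the two filters an application can ask for is a single texture parameter, so a "wrong value" really would be a one-token mistake (whether the driver then honours the request is exactly what is in dispute here):

    /* Trilinear: bilinear within a mip level plus linear blending
       between adjacent levels. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

    /* Bilinear: filtering within the nearest mip level only. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);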

And again the apologists come along and say it's a bug.

How do you know that nVidia didn't intend "brilinear" to be a higher-quality replacement for bilinear filtering? Remember, the Det50s aren't final yet.

The apps I code for the 9700 now will work forever (well, yes), since OpenGL will last forever (… again… well, yes…).

You have a great deal of faith in the longevity of the ARB_fp extension. Once glslang comes online, the only reason for an implementation to support ARB_fp will be legacy support. Indeed, I would imagine that later implementations won't even bother to write a decent optimizing compiler for ARB_fp.
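
For reference, ARB_fp is the assembly-style interface loaded as a program string; a minimal, hedged sketch (the program itself is a trivial "texture times primary color" example, with error checking omitted):

    /* Minimal ARB_fragment_program setup. */
    static const char *fp_src =
        "!!ARBfp1.0\n"
        "TEMP tex;\n"
        "TEX tex, fragment.texcoord[0], texture[0], 2D;\n"
        "MUL result.color, tex, fragment.color;\n"
        "END\n";

    GLuint prog;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);

Whether or not glslang eventually displaces it, this is the interface both the R300 and the NV30 already expose today.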

Korval,
I have never, by pure coincidence, designed a hack that can't be used for anything besides an unwanted performance/quality tradeoff. Nope. If I design something new, that takes conscious work. A bug is an accident, and this just doesn't add up as one.

The last time I saw something like this, it was called mipmap dithering and was clearly labelled "faster, though not as pretty as trilinear". Needless to say, the implementors correctly remembered that they had put it in the silicon for that purpose, and offered application control.

"Brilinear" is the term coined for that somewhere-in-between mix of bilinear and trilinear filtering. An accidental, unconscious design choice, if you will.

You think it’s a benign quality enhancement to replace bilinear filtering? Give me a break. Bilinear is still bilinear on NVIDIA drivers.

Regarding Anand, I believe I’ve already addressed this. I have better sources at my disposal. People who know what they’re doing.

I can't even find the AnandTech article… the AnandTech search only finds info about Detonator 4, 5, and 6, and, um, yes, those are… rather old.

I have never, by pure coincidence, designed a hack that can't be used for anything besides an unwanted performance/quality tradeoff.

I’m not sure what this is in reference to. The existence of “Brilinear” filtering, or the potentially accidental use of it in D3D?

An accidental, unconscious design choice, if you will.

Why does it have to be "an accidental, unconscious design choice" rather than merely another filtering alternative?

You think it’s a benign quality enhancement to replace bilinear filtering?

If that is what they do with it, rather than replacing trilinear with it as their current drivers do, yes.

Regarding Anand, I believe I’ve already addressed this. I have better sources at my disposal.

That you believe this source to be better does not make it so. I, for one, cannot judge the veracity of your statement, as it has been many years since I’ve taken German.

I've been using AnandTech's reviews for upwards of 3 years now, and they have never led me astray. I have found their reasoning on various subjects to agree with my own on many occasions. Granted, I think they could do better, but I don't have much to complain about.

I can't even find the AnandTech article

No need to search; it's on their main page. Their image quality comparisons are part of their benchmarking, in Part 2 of their "Video Card Roundup".

Whatever the internet and other sources of information say, I was on the FX side for some time. Then I took a look at the extensions again and realized the new FX extensions are simply bad compared to their ATi counterparts.
So, on ease of development, ATi wins in my opinion.

The other fact is that I spent some time on an FX5600 and the performance was very bad. So bad it was comparable to nvEmulate (only about 4x faster, ack)! A Radeon 9600 runs the same app much faster.
Please take the above with a grain of salt, since the application was actually a prototype of a component I am going to use. It was not meant for benchmarking, so it may not reflect real performance; this is just my own experience.

I recommended an ATi to a friend just a few weeks ago, and after watching it in action I must say it has been performing well (there is a small driver issue, however). As for me, I am planning to buy an ATi in the near future.

Also, considering price, ATi wins (at least in the region where I live), since it's somewhat less expensive (well, the FX5200 is really cheap, but I fear that card cannot handle anything for real, and I can't find it in stores anyway).

If installed base is the concern, then this becomes a very hard decision. I know some people who let me monitor sales in a bunch of stores here, and they tell me FXs are selling a lot (and now that the Athlon 64 is here, I guess they will sell even more). Anyway, I would get an ATi.

It has been a hard truth for me to discover that the whole FX generation is simply that bad, not only performance-wise but also feature-wise.