ATI or NVIDIA for maximum OpenGL functionality?

Please don’t dismiss this topic as another war starter, because that is the last of my intentions. Recently I have started to realize that NVidia offers a wider range of (non)core OpenGL functionality than ATI. In general their cards support NPOT 3D textures with borders, bigger framebuffers, etc. Perhaps there is more; perhaps ATI offers something NVidia doesn’t fully support. What I want to know is whether, in your opinion, it is better to choose NVidia over ATI for serious OpenGL programming. I just seem to bump into more and more ATI limitations with my X800 card, and the situation doesn’t seem to improve with newer cards.

ATi will be coming out with a card to compete with G80… eventually. Probably relatively early next year. Until then, nVidia’s your best bet.

That being said, an X800 isn’t exactly cream-of-the-crop with regard to what ATi’s technology can do.

If your OS is Linux, or if you might ever want to try it, you should choose NVidia. NVidia’s Linux OpenGL driver is very stable and easy to install.

Nvidia’s support of OpenGL is more flexible than ATI’s, both in matters of stability (arguable) and in functionality. You don’t need to get a G80; with NV40 you still get everything you might want (except geometry shaders). ATI cards still seem a bit faster… For an OpenGL programmer, Nvidia is the better choice.

Generally speaking (though it’s not always true), nvidia cards are more flexible and support more new features with good performance, while on ATI cards new features arrive slightly later than on nvidia, but they generally have slightly higher performance in older graphical features.
This makes nvidia cards perform better in games like Doom 3, while ATI is faster in HL2 and DX8-class games (with comparable cards, that is).

So for developers I would say that the Nvidia cards are slightly better, since you get to play with the newer features earlier.
Though the difference is not that great.

I’ve seen a lot of limitations on ATI cards. For example, with FBO you must declare the depth attachment as a renderbuffer and not as a texture (I mean a simple color attachment combined with a DEPTH_COMPONENT24 depth buffer), whereas on nVidia you can do what you want.
I’ve seen a lot of other things that work well on nVidia cards but not on ATI cards…
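To make the limitation above concrete, here is a minimal sketch of the FBO setup that worked on both vendors at the time: color attachment as a texture, depth attachment as a renderbuffer. It uses the EXT_framebuffer_object entry points (the relevant extension in this thread’s era) and assumes a valid GL context; `width` and `height` are placeholders defined elsewhere.

```c
/* Sketch: FBO with a color texture plus a depth *renderbuffer* --
 * the combination that worked on ATI as well as NVIDIA.
 * Assumes a current GL context; width/height defined elsewhere. */
GLuint fbo, colorTex, depthRb;

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

/* Color attachment as a texture -- fine on both vendors. */
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, colorTex, 0);

/* Depth as a renderbuffer, NOT a depth texture -- the ATI-safe path. */
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                         width, height);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);

/* Always check completeness -- drivers are allowed to report
 * GL_FRAMEBUFFER_UNSUPPORTED_EXT for combinations they don't handle,
 * which is exactly what ATI did for depth-texture attachments. */
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
    GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* fall back to another format, or report an error */
}
```

The completeness check at the end is the important part: the spec explicitly lets an implementation refuse attachment combinations, so portable code has to handle GL_FRAMEBUFFER_UNSUPPORTED_EXT rather than assume what works on one vendor works on the other.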

In my opinion nVidia is the only choice when it comes to OpenGL. OpenGL support on ATI cards is very buggy and some things simply don’t work. This hasn’t changed in the last two years and I have very little hope it will change in the near future.

In my opinion it all changes with every generation of GPU’s:

ATI was the first to offer programmable fragment processing, back when NVIDIA’s GeForce 3/4 had only some advanced combiners.
Also, the Radeons 9500-9800 were better (and faster) than the GeForce FX. On Radeon you could use 24-bit floats in fragment shaders at full speed, and you almost had NPOT textures.

With Radeon X / GeForce 6 the applause goes to NVIDIA. When using fixed functionality the Radeon is just a bit faster, but when you use a lot of shaders the GeForce beats the Radeon, and you get Shader Model 3.0 on GeForce 6. For a gamer ATI could be the better choice, since only a few games make good use of Shader Model 3.0, but for a developer there is no question here.

The next generation is the Radeon X1k and GeForce 7. Not many new features in the GeForce, but the vertex and fragment shaders were optimized and this GPU became even faster. ATI, however, did a lot of work to give us a Shader Model 3.0 GPU that can do HDR+FSAA. Also, ATI’s render-to-vertex-buffer seems to be a great thing. So this time ATI had slightly more interesting features. Still, NVIDIA’s GPU seems to be better optimized for complex shaders.

We’ll have to wait to see what next generation brings.

NVIDIA has one advantage from a developer’s point of view: NVPerfKit and extensions like NV_fence or EXT_timer_query. ATI has its own performance analysis tools, but these are v1.0 beta.
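As a quick sketch of what EXT_timer_query buys you: GPU-side timing of a stretch of commands, measured by the driver rather than the CPU clock. This is a minimal fragment assuming a current GL context; `drawScene()` is a hypothetical placeholder for whatever you want to measure.

```c
/* Sketch: timing a batch of draw calls with EXT_timer_query.
 * Assumes a current GL context supporting the extension;
 * drawScene() is a hypothetical placeholder. */
GLuint query;
GLuint64EXT elapsed_ns;   /* 64-bit type introduced by the extension */

glGenQueries(1, &query);

glBeginQuery(GL_TIME_ELAPSED_EXT, query);
drawScene();
glEndQuery(GL_TIME_ELAPSED_EXT);

/* The result arrives asynchronously; this call blocks until ready.
 * In real code you would poll GL_QUERY_RESULT_AVAILABLE first. */
glGetQueryObjectui64vEXT(query, GL_QUERY_RESULT, &elapsed_ns);
printf("GPU time: %.3f ms\n", (double)elapsed_ns / 1.0e6);
```

This is exactly the kind of measurement ATI offered no equivalent for at the time, which is why the tooling gap mattered to developers.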

As for the drivers I think NVIDIA is a bit better on Windows and much better on Linux.

If you want to develop for fun, then it could be either a Radeon X1k or a GeForce 7. But if you’re serious then you’ll need both. I have a 7800 with an Athlon64 3200, and in the next room there’s an X850 with an Athlon64 3000, so I’m well set up for testing ATI / NVIDIA / Shader Model 2.0 / 3.0 with the rest of the system almost the same. I can already tell you that it would be IMPOSSIBLE to create any application using GLSL that should run on both Radeon and GeForce without regular access to both, since neither ATI nor NVIDIA is consistent with the GLSL spec. A program written according to the GLSL spec is unlikely to work until you work around driver limitations.
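A typical example of the divergence described above (the shader below is hypothetical, but the pattern is the classic one): NVIDIA’s GLSL front end, built on their Cg compiler, accepted implicit int-to-float conversions that GLSL 1.10 forbids, while ATI’s spec-strict compiler rejected them. The same source would compile on one vendor and fail on the other.

```glsl
// Accepted by NVIDIA's lenient (Cg-based) GLSL front end,
// but rejected by a spec-strict compiler such as ATI's:
float scale = 2;        // implicit int -> float conversion: illegal in GLSL 1.10
vec4  c     = gl_Color * scale;

// Portable version -- what the spec actually requires:
float scale2 = 2.0;
```

So code that compiled cleanly during development on a GeForce could fail outright on a Radeon, and vice versa for bugs in the other direction, which is why testing on both was unavoidable.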

Regarding developer’s point of view it seems to be 7:0 in favor of NVidia. Now I’m waiting for my NVidia 7600GT, which again is not the best card one can get these days, but I expect to be much less constrained in what I can achieve with it.

Regarding developer’s point of view it seems to be 7:0 in favor of NVidia.
How do you figure that? Especially with nVidia’s cards in the FX-era being pretty crappy (and later than the 9xxx line).

[quote]Originally posted by Korval:
Regarding developer’s point of view it seems to be 7:0 in favor of NVidia.
How do you figure that? Especially with nVidia’s cards in the FX era being pretty crappy (and later than the 9xxx line).
[/quote]
I figured it out by counting the replies above that lean toward NVidia.

I am not saying NVidia is generally better than ATI; my last 5 cards have all been ATIs and I am very satisfied with them. I changed to ATI because the Detonator drivers were pretty unreliable back then. But as someone stated, one should have both, to be able to test the newest things on NVidia and stable functionality on ATI.

Since this is an OpenGL forum, all that was said applies to OpenGL; if we started talking about DirectX, the situation could be reversed. I don’t know.

I also wouldn’t like to get on Korval’s black list, him being a very helpful adviser :slight_smile: .

Regarding developer’s point of view it seems to be 7:0 in favor of NVidia.
I’ve counted 4:0; 3 posts (including mine) didn’t give an ultimate answer.
My ranking when it comes to available functionality (and that’s what your original question was) would look like this:
1. GeForce 8
2. Radeon X1k
3. GeForce 6 / 7
4. Radeon X
There is little functional difference between 2 and 3.
NVIDIA gets a slightly better score for its performance analysis tools and probably less buggy drivers.

And don’t worry about getting on someone’s black list. I haven’t noticed anyone acting like he has such a list, so as long as you don’t write anything offensive, you’re always welcome.