quality / standard conformance of GL implementations

> ATI and Microsoft working together are a developer's worst enemy.

That's funny. Is this due to ATI's GL implementation? That would be a sign of very poor quality. Somebody used to say there are two good hw/sw combinations, NVIDIA+GL+LINUX and ATI+DIRECTX+WINDOWS, but that seems a bit exaggerated, and the saying dates from GeForceFX times, which are definitely over.

Another question: is it a good idea to switch to Direct3D, concerning quality / predictability / performance? It would make us platform-dependent, but that would be an option for lots of commercial projects.

> NVIDIA is relatively good at stability and conformance, but not so good concerning predictability+performance.
I wouldn't go that far. Your only problems with performance predictability are in areas that most people do not exercise: texture upload. Since people do not typically perform these kinds of actions in the middle of real-time applications, there isn't much of a problem. From nVidia's perspective at least.

> One thing that I’ve experienced is that invalid programs were still working on the Radeon while nVidia and 3DLabs are less forgiving.

In my experience it's the other way around, i.e. that nvidia is more forgiving, of course especially in relation to GLSL.

Well, apart from that, I've just spent the last couple of days updating our nvidia support, meaning that I've switched from a 9800pro to a 6800gt.

Regarding performance, the 9800 is consistently faster even though it is running a slightly slower path (cpu skinning of characters). I guess (haven't investigated it further yet) that the per-batch cost in particular is much lower in the ati driver. Could be interesting to compare the 5.3 catalyst with drivers from the pre-doom3 days… :wink:

Most issues were related to GLSL. On the positive side, all 4 of our existing nvidia bug workarounds could be removed with the 71.83 drivers, meaning that we are really down to zero nvidia issues, while our 2 current ati problems (uniform arrays and lightsource init) still exist in the latest beta.
Then we have the cgc compiler, which is used for compiling glsl as well as cg etc. It takes about 0.3 seconds for a single shader compilation, meaning that generating lots of shader combinations simply isn't possible (we had a 20-second startup time because of that).
Another big problem with the cgc method is that all compiler errors are reported from the preprocessed glsl->cg code, meaning that the error messages are more or less useless. As we know, nvidia-glsl is much more forgiving than real glsl, so don't expect your shader code to work on anything else if it has only been tested on the nvidia driver (strict mode can be enabled in NVEmulate, but that also produces lots of bogus warnings of the type "implicit cast from float to float"…).
So… developing glsl stuff that works on other hw than your own is currently most convenient on ati drivers - especially as nvidia's compiler tools (the standalone cgc and nvshaderperf) work fine on non-nvidia hw for compiler-compatibility and performance analysis. On the other hand, the ati compiler seems a good deal more buggy at the moment (so I guess it depends on how much compatibility you need right now).
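To give a concrete (made-up) example of the forgiveness issue: the snippet below uses the GL 2.0 entry points (the older ARB_shader_objects equivalents behave the same, and on Windows you'd fetch the functions through wglGetProcAddress or a loader) to compile a fragment shader that nvidia's cgc-based compiler accepts, but that a strict GLSL 1.10 compiler rejects, because ints are not implicitly converted to floats outside constructors.

```c
/* Hypothetical repro, not from our codebase: assumes an existing GL 2.0
 * context and omits error checks. */
#include <GL/gl.h>
#include <stdio.h>

static const char *frag_src =
    "uniform sampler2D tex;\n"
    "void main()\n"
    "{\n"
    "    float scale = 2;   /* strict GLSL 1.10: error, needs 2.0 */\n"
    "    gl_FragColor = texture2D(tex, gl_TexCoord[0].xy) * scale;\n"
    "}\n";

void try_compile(void)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &frag_src, NULL);
    glCompileShader(sh);

    /* on nvidia the log refers to the preprocessed glsl->cg source,
       which is why the messages and line numbers are so confusing */
    char log[4096];
    GLsizei len = 0;
    glGetShaderInfoLog(sh, sizeof(log), &len, log);
    printf("%.*s\n", (int)len, log);
    glDeleteShader(sh);
}
```

The offending line is the integer literal; writing 2.0 makes it valid everywhere.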

Regarding driver crashes (not caused by AGP 8x hw issues or OC), I have only seen them on ForceWare for the past year…

Originally posted by knackered:

> Originally posted by Pentagram:
> > ATI: Develop on this board so you can find the bugs early and work around them, then a few days before the deadline quickly test on nvidia to confirm that it works.
>
> When you say 'develop on this board' I assume you don't mean run your development environment on it. In my experience, ati drivers crash while doing simple gui drawing, such as scrolling a source code view in Visual Studio… which isn't very nice, seeing as Visual Studio has a habit of trashing your source code occasionally on system resets or blue screens. ATI and Microsoft working together are a developer's worst enemy.

And by contrast, I've had an ATI board since shortly after the 9700pro was released, and I've had a few problems here and there over the years, but no more so than with my NV drivers beforehand, and they were all limited to games. In fact, since my last Windows reinstall (due to me breaking Windows), this machine has crashed a sum total of once since around the middle of last year, and I'm not sure that was a hardware issue; it might be related to the dodgy power setup in my room.

So, I'd argue there is no hard and fast rule about which is better; at best you can deal in generalities, and even that relies on something not being broken elsewhere…

Originally posted by Korval:

> > NVIDIA is relatively good at stability and conformance, but not so good concerning predictability+performance.
>
> I wouldn't go that far. Your only problems with performance predictability are in areas that most people do not exercise: texture upload. Since people do not typically perform these kinds of actions in the middle of real-time applications, there isn't much of a problem. From nVidia's perspective at least.

Funny you should say that. A long while ago, I read an NVIDIA PDF that recommended how to load resources. They said something like: first load textures, then VBOs, then shaders. It may not apply today.

As for the GL conformance test, it's probably more about pixel-precise rendering and having a certain level of driver quality, but I'm sure most of the responsibility lies on the IHVs' shoulders.

> Your only problems with performance predictability are in areas that most people do not exercise: texture upload. Since people do not typically perform these kinds of actions in the middle of real-time applications, there isn't much of a problem. From nVidia's perspective at least.

So we have to accept that texture upload in real-time apps is evil? I don't agree with that, and it is contrary to everything we see in the marketing. They want to make us think that graphics hardware is good at "streaming" textures (see the PBO specification). Why do we get more and more bus bandwidth (with PCI-Express) if the driver is so limited? That's absurd and annoying from my perspective.

Is it so complicated to get near the maximum bus bandwidth for uploads and downloads on current hardware? Or is it simply irrelevant to the manufacturers? What's the reason? Getting good, predictable streaming performance is an important topic for general-purpose apps.
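For reference, the upload path the PBO spec advertises looks roughly like the sketch below. This is just a minimal illustration with made-up sizes; it assumes ARB_pixel_buffer_object (or the EXT variant) plus the GL 1.5 buffer-object entry points are available, and it leaves out all error handling.

```c
#include <GL/gl.h>
#include <string.h>

#define TEX_W 512
#define TEX_H 512
#define TEX_BYTES (TEX_W * TEX_H * 4)

/* Stream one frame of texel data into 'tex' through a pixel unpack buffer.
   'pixels' is whatever CPU-side source you are streaming from. */
void upload_frame(GLuint pbo, GLuint tex, const void *pixels)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);

    /* orphan the old storage so the map doesn't stall if the previous
       transfer is still in flight */
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, TEX_BYTES, NULL, GL_STREAM_DRAW);

    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);
    if (!dst)
        return;
    memcpy(dst, pixels, TEX_BYTES);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

    /* with an unpack buffer bound, the last argument is an offset into
       the PBO, so the driver is free to DMA the data asynchronously */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, TEX_W, TEX_H,
                    GL_BGRA, GL_UNSIGNED_BYTE, (const void *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}
```

Whether this actually gets close to the bus limit on current drivers is exactly the question, of course, and the answer probably also depends on picking a pixel format the driver doesn't have to convert.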

– edit –
NVIDIA: why aren't we able to use the bandwidth of PCI-Express (or even AGP)? Does the driver limit it? PCI-Express should give us 4GB/s and AGP8x should give us 2GB/s. I know that we cannot expect peak bandwidth, but even when the driver has a good day ;-) it transfers approximately 1GB/s, so where does the missing factor of 4 come from?
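In case anyone wants to reproduce the ~1GB/s number, this is roughly how I'd measure it. A rough sketch only: get_seconds() stands in for whatever wall-clock timer you use (QueryPerformanceCounter, gettimeofday, …), the texture is assumed to already exist with matching size and format, and the result includes whatever copying the driver does, not just the raw bus transfer.

```c
#include <GL/gl.h>
#include <stdio.h>
#include <stdlib.h>

extern double get_seconds(void);   /* hypothetical wall-clock timer */

/* Time 'iterations' full-texture uploads and print the effective rate. */
void measure_upload_rate(GLuint tex, int w, int h, int iterations)
{
    size_t bytes = (size_t)w * h * 4;
    void *pixels = malloc(bytes);          /* contents are irrelevant here */

    glBindTexture(GL_TEXTURE_2D, tex);
    glFinish();                            /* drain any pending GL work */

    double t0 = get_seconds();
    for (int i = 0; i < iterations; ++i)
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_BGRA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                            /* wait for transfers to complete */
    double t1 = get_seconds();

    double mb = (double)bytes * iterations / (1024.0 * 1024.0);
    printf("%.0f MB in %.3f s -> %.1f MB/s\n", mb, t1 - t0, mb / (t1 - t0));
    free(pixels);
}
```

Going through a PBO as in the sketch further up should in principle get closer to the bus rate, since the copy into the mapped buffer and the actual transfer can overlap.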