All I can say is that if that is really the official nVidia response, it simply doesn’t (or at least shouldn’t) hold water. Fine,
leave the current behavior as the default, on the assumption that the vast majority of users aren’t interested in configuring the driver and just want to play Doom, Quake, or whatever. For the professional market, though, I can’t believe it would be that hard to provide a means of switching to a blocking wait instead of the current busy-wait.
Besides, from what I’ve seen of gamer video card ownership polls, gamers aren’t exactly adopting nVidia’s latest hardware releases at a blistering rate. To see what I mean, take a look at the survey at:
http://valve.speakeasy.net/
The majority of these folks are still running GeForce2 MX! They certainly don’t seem to be all that worried about obtaining the highest possible frame rates… if they were, they’d be moving on to newer cards. It seems to me that the professional market is going to start playing a larger role in demanding faster/better graphics cards. From what I’ve seen, we’re all “champing at the bit” for high-end PC graphics alternatives to what SGI provides. Not that I dislike SGI hardware, mind you, but in my experience SGI’s stuff is all or nothing: if you want the graphics, you’ve got to buy all the other high-end hardware to go with it whether you need it or not. Hence the poor showing of the SGI PCs from a few years back.

When the nVidia guys broke off from SGI, I really hoped they were going to address this segment of the market. I understand that they initially went after the low-budget home/gamer market, but it’s time to start looking longer term… Who’s going to buy the latest graphics hardware at the escalating launch prices? Looks to me like it will be the professional market… feel free to correct me if I’m wrong.
And while I’m on my soapbox… could we have 12-bit RGBA, please? Matrox has taken the first step with 10-bit, presumably aimed at the DV crowd; 12-bit would be greatly appreciated, though! I know, I know, I ask for too much too soon! For now, I’ll settle for a blocking wait on the glXSwapBuffers() call!
My apologies for the rant…