High CPU usage

All I can say is that if that is really the official nVidia response, it simply doesn’t (or at least shouldn’t) “hold water.” Fine, leave the current behavior as the default, assuming that the vast majority of users simply aren’t competent enough to configure the driver and just want to play Doom, Quake, or whatever. For the professional market, though, I simply cannot believe that it would be that hard to provide a means of switching to a blocking-wait method instead.
Besides, from what I’ve seen of gamer video card ownership polls, gamers aren’t exactly adopting nVidia’s latest hardware releases at a blistering rate. To see what I mean, take a look at the survey at:
http://valve.speakeasy.net/

The majority of these folks are still running GeForce2 MX! They certainly don’t seem to be all that worried about obtaining the highest possible frame rates… if they were, they’d be moving on to newer cards. Seems to me that the professional market is going to start playing a larger role here in demanding faster/better graphics cards. From what I’ve seen, we’re all “chomping at the bit” for alternative high-end PC graphics solutions other than that provided by SGI. Not that I dislike SGI hardware, mind you, but in my experience SGI’s stuff is all or nothing: if you want the graphics, you’ve got to buy all this other high-end hardware to go with it whether you need it or not. Thus, the poor showing of the SGI PCs from a few years back. When the nVidia guys broke off from SGI, I really hoped they were going to address this segment of the market. I understand that they initially went after the low-budget home-owner/gamer market, but it’s time to start looking longer term… Who’s going to buy the latest graphics hardware releases with the escalating initial release prices??? Looks to me like it will be the professional market… feel free to correct me if I’m wrong.

And while I’m on my soapbox… could we have 12-bit RGBA please??? Matrox has made the first step with 10-bit… presumably aimed at the DV crowd; however, 12-bit would be greatly appreciated! I know, I know, I ask for too much too soon! For now, I’ll settle for a blocking wait on the glXSwapBuffers() call!

My apologies for the rant…

In the NVIDIA_kernel-1.0-2960 directory, view the file os-interface.c and search for ‘sgi’ and ‘swap’ (case-insensitive).

Yeah,

That harkens back to the early days when SGI was threatening legal action against nVidia for running off with the company “secrets.” Best I recall, they settled out of court via some mutually beneficial agreement. Perhaps that’s the underlying problem… nVidia’s been neutered to keep them from whizzing on SGI’s turf? If/when they become available, I bet SGI’s Intel-Linux based boxes will support blocking-wait VBLANK synchronization. Funny thing is that, if memory serves me well, nVidia is supposed to provide the graphics solution for these platforms. If I’m correct, it will be a tell-tale sign of nVidia’s castration if SGI’s boxes have blocking-wait while standard Linux drivers don’t.

If/when SGI Intel/nVidia-based boxes become available?
There used to be such a line of products from SGI!
But if I’m not mistaken, SGI discontinued it. http://www.gotocol.com/sgi.html
Can you find it on SGI’s own site? I can’t…

Oh, SGI had a PC…

I’ve actually got an SGI 320 sitting behind me with, I believe, two 450MHz processors and a gig of RAM. Seems like we purchased it somewhere around the 1998/1999 time frame for some large chunk of change. Too bad it was saddled with the WindowsNT operating system. I think the big problem back then was that most people eye-balling these systems saw them as over-priced desktops, didn’t want to develop under NT, and didn’t trust/know enough about Linux system performance/support on these systems. I still don’t really know that much about them, since the processors are really too slow for my applications today.

I was also not the original user of this box, as it was purchased to run some third-party visual database generation tools. I believe the hope was for this box to allow database generation without requiring the horsepower of an Onyx2 system. The folks actually used it for a couple of years before giving it up as too slow. I’m not certain whether they simply bought a new PC to replace it or moved back to Onyx2 database development.

Either way, it is unfortunate that I did not take the time to investigate this box sooner. At the time, I was tied up with development on Onyx2 systems for which we had paying customers who could afford such systems. We still make use of Onyx2 systems today; however, much more emphasis has been placed on finding cheaper solutions to supplant such systems where possible.

When I got my hands on this SGI 320 a couple months ago, I had to crack it open, of course, to see what wonders lay inside. Basically, it appears to simply be a PC way ahead of its time. Looks like at a minimum, SGI implemented many of the next-generation standards… some of them the likes of which mainstream PCs are only now beginning to sport. Seeing it now, it’s actually quite an impressive machine.

I believe it was available with Linux as an option when new; however, nobody here had any desire to foray into the Linux world on this machine at that time. From what I remember, we found it questionable what level of support we’d have under Linux with this custom SGI solution. Remember, in those days, hardware acceleration under Linux was a gleam in our eyes… everyone wanted it, open source folks were trying to provide it, there was no DRI architecture, and nobody seemed to have it all together as a 3rd party solution like nVidia has today.

I was under the impression that the video sub-system in the SGI 320 was actually a SGI developed system. From my recollection, seems like nVidia was hardly a blip on the radar back when these systems hit the scene. If I’m mistaken, please point me to the blurb that explains otherwise. Did one of these systems say it had nVidia graphics hardware? If so, which one? I didn’t realize that SGI had gone that far with third party graphics hardware vendors and Linux.

Basically, I was under the impression that we were still waiting for the new Intel-nVidia-Linux architecture and that all this development that nVidia has been doing would be applied to that along with SGIs real-time multi-processor know-how. I know we’ve witnessed much technology flow-down from SGI to the open source Linux community, such as the SGI journaling file system, but I guess I’ve been under the assumption that there’s more going on behind the scenes that we’ve not yet seen and that will not necessarily be made part of open-source Linux.

Surely SGI is not going through all this trouble without a perceived future goal. Did I miss it already???

Ok, now you’ve done it! I’m confused!

Oh yeah,

Here’s the SGI URL:
http://www.sgi.com/products/legacy/intel.html

What confused you is that SGI had a previous line of Intel-based workstations which was quite different (the Visual Workstation line). It did have an Intel processor, but the motherboard chipset was completely SGI’s own, the graphics were in the ‘northbridge’ (no visible AGP card), and the architecture was unified memory. It also had a nice ‘onboard’ video I/O interface. You can see the like of this machine today in nVidia’s nForce chipset. Unlike the SGI PC you see today, it didn’t have Linux as an option, only WindowsNT and later Windows2000. As a matter of fact, SGI had a license from MS to do a modified HAL to support the unique hardware architecture.
That line was dropped too, when the simpler PC line came along (which was based on a VIA chipset).
This line (230, 320, etc.) was always based on an nVidia-chip board. SGI made the board themselves, much in the same way that Creative, VisionTek, and many others do. They didn’t over-expose the fact that these machines’ graphics are nVidia based, but that’s what they are. nVidia was no mere ‘blip on the radar’ back then; it was already the GeForce2 era, and 3dfx was already in trouble. The Linux drivers SGI used on these boxes were available at the same time for download from nVidia directly (as they are today). Contrary to SGI’s claims, I found no difference between the two. Actually, the chipset SGI chose (VIA) gave less AGP stability and performance with the nVidia board than the already-then-available Pentium4 chipset.

Are you confused? It’s not my fault! It’s SGI that’s confused you. Try re-thinking your statement about it “going through all this trouble without a perceived future goal…” :frowning:

[This message has been edited by Moshe Nissim (edited 08-12-2002).]

Originally posted by ScuzziOne:
Oh yeah,

Here’s the SGI URL:
http://www.sgi.com/products/legacy/intel.html

Note the URL has “legacy” inside its path…

Note this too: http://support.sgi.com/nt/product/230/
download the “NVidia Display Flash BIOS for VR7”
unzip, and in the readme.txt see:

SGI Quadro2 Pro BIOS. Also called VPro VR7

Darned if you don’t learn something new every day! I did not realize that SGI and nVidia had already collaborated on previous boxes. Makes sense, though. Ignoring my current projects, the last time I worked on a Linux project both nVidia and 3dfx were pushing for accelerated hardware support under XFree86. XFree86 was working towards their loadable drivers release (4.0, I believe) but was not quite ready to release it yet. To my recollection, OpenGL hardware acceleration prior to modules was a bit quirky… making use of Mesa along with a combination of other libraries from SGI, nVidia, and the open source community. 3dfx was in the fight at the time; however, in my personal opinion, 3dfx had hit “rude” tech support mode by this point and I wanted nothing to do with them. We did end up successfully using an nVidia card to complete the project. I then went back to Onyx2 land for some time until recently, when I saw an opportunity to push Linux as a viable solution again. Many improvements have been made with loadable module support, etc.

Anyway, it was my opinion in earlier times that SGI was only half-heartedly chasing Linux support. Within the last year we’ve seen a lot of signs that SGI’s investment in Linux is increasing. This is what leads me to believe that SGI will be putting out a much better supported Linux box with some level of SGI real-time capabilities approaching what we have on the higher-end SGI systems such as the Onyx2. Again, this level of Linux support is purely my speculation… we may end up being sorely disappointed. I also had the impression that nVidia would be the primary graphics provider for this new effort… again, I could be wrong. Whatever the case, new systems are coming. Take a look at this press release:
http://www.sgi.com/newsroom/press_releases/2002/july/itanium2.html

I believe these are the future systems to which I was referring previously. Seems like nVidia is the most likely choice for graphics hardware given their buildup of accelerated hardware support under Linux. I believe we’ll see a more cohesive Linux support push here. Of course, only time will tell as it is my opinion that SGI has been having trouble deciding which way to run lately!

I think this announcement is irrelevant to graphics solutions, and to “PC-like architecture.” Note the keywords: “computing server,” “shared-memory system architecture,” etc.
It’s along their roadmap to put Intel IA-64 CPUs into their Origin/Onyx system architecture. I think, though, that their original plan was to use Intel IA-64 instead of MIPS processors much earlier than now, but sadly Intel missed their schedule, and SGI is stuck with under-invested (and under-developed, therefore under-performing) MIPS processors today.

A lot of different topics in this thread.
About the 10-bit RGBA from Matrox: I think they take some bits from the alpha channel, so you get 10+10+10+2 RGBA. I just read some stuff from John Carmack, and he said the next step should be 64-bit RGBA. I guess that means 16+16+16+16, probably needed in the future if many passes become common. Here is the Slashdot link for Carmack’s presentation:
http://slashdot.org/articles/02/08/18/015214.shtml?tid=127

Folks,

Just to clarify, I work (with RedHat 7.3) on one of those “legacy” boxes. They did indeed sell them with linux, and they did have Nvidia cards, and a special hardware GL driver.

I still work with it (dual 800 xeon), and now the driver we use is the standard Nvidia/XFree86 driver, downloaded off the website.

At any rate, yes they did sell as linux boxen,

Tuna

PS,
When we were using the “special” driver, there were definite bugs (like not being able to access all 1GB of memory, only ~860M, crashes during GL programs, etc.). Definitely bleeding edge. I have no idea whether it supported the blocking vblank wait you’re referring to.

[This message has been edited by Tuna (edited 08-24-2002).]
