I want a GeForce3

I want, in fact I have to have, a GeForce3 …
FSAA 4x faster than the GeForce2 Ultra, fully programmable vertex and pixel shaders allowing proper material appearances, 800 billion operations a second, 76 billion floating-point operations a second (76 gigaflops), high-definition digital video, 3rd-generation TnL, and Lightspeed Memory Architecture™ allowing ultra-fast RAM speeds, increasing the throughput and fill rates to several times those of the GeForce2 Ultra. OpenGL 1.2 support.

This should beat my TNT2 M64!

Who cares?
There’s no point in getting this board right now, since no game will use its capabilities, and there are already good boards (in quality and speed) out there.

Wait 1 year at least.

Who cares? Obviously, a lot of people on this forum. People trolling the OpenGL advanced coding message boards want the latest and greatest to play around with. This probably means writing GLUT demos as much as (if not more than) playing Quake3 at ungodly frame rates.

I want one, but I am a poor grad student. :stuck_out_tongue:

Okay, from a dev perspective I agree that I would like to have one to play with.
(read: experiment rather than play)

But as a gamer I don’t need it, and buying one will just be wasting money.
When some interesting games come out with really nice effects when run on the NV20, I’ll think about buying one.

Still nothing on the nVidia website; I hoped to see some information about the hardware capabilities and some new tech demos, whitepapers and the like explaining how it works.
(And not only what I can do with it; if I know how it works, I’ll probably figure out what I can do.)

An interesting piece of hardware anyway.

Ingenu -

For most developers, the NV20 chip is like a dream. Almost everyone on this board codes and likes to play with the latest tricks. For them, THAT is the best part of 3d graphics cards and makes the Geforce3 worth the price. Don’t forget that some of these capabilities were never available before and some were only available on $100k+ SGI machines. That alone makes the $600 price look meager…once it goes down to $400, it’ll be a bargain. I for one can’t wait to play with the pixel and vertex shaders as well as 3d texturemapping.

If all you do is play games, you’re right, you would be wasting your money at this point unless you really want FSAA. Wait for Doom 3

– Zeno

Some of you guys have a good point. But do keep in mind that, like most of you said, this card is a developer’s dream. That means that game companies will start supporting all these new features within the next year or so. Doom3 is a good example.

What I want to know is, why did the Mac get it first? And when is the PC version of the GeForce3 coming out?

600 US $??? 600??? That’s 1200 Deutsche Mark, if not more. I thought I’d get away with 800-1000 DM. ****.

From about now on we should see a large influx of TnL games; most will take full advantage of its capability with a secondary rendering engine specially designed for TnL. It will be a little longer before we see TnL-only games, although there are several engines being worked on for TnL.

As well as development being fun on a GeForce or greater card, this can be considered a standard minimum technology in a couple of years. The fact that there wouldn’t be many games taking advantage of it immediately on release isn’t a problem. The more advanced gurus could quickly start making a small game using the vertex/pixel shaders etc., and it would look amazing compared to a commercial non-GeForce3 game.

Going along the lines of “who cares, nothing supports GeForce3s” is stupid. How many games do you know that take full advantage of a P4 1.5 GHz with 256 MB RAM? Not many, but that doesn’t stop people going out and buying them when they could get a Duron 800 with 128 MB RAM for less than half the price.

I’m just surprised that nobody has bothered to point out that 1 flop does NOT equal 1 floating-point operation. It merely means the transistors can change state 76 billion times per second. This includes ALL operations, not just floating-point operations. Hell, maybe I’m wrong, but electronics was one of the classes I DID pay attention in (in between getting really high at lunch and getting even higher after school)… but I’m a software guy, what do I know…

Yup, I think you may indeed be wrong. E.g. http://www.research.ibm.com/news/detail/architecture_fact.html

BwB – could it be a flip-flop you’re thinking of? Like a latch or register?

$600 is too much for most people (even those eager to play with new feature) to pay for a video card, especially since most developers haven’t even taken full advantage of the GeForce2’s rather formidable features.

I wonder how long it will take for GF3’s features to be used (Doom3 aside). As these boards get more and more powerful, the range of hardware a developer needs to support becomes wider and wider. Perhaps JC has the luxury to assume that everyone will have the latest and greatest when he comes out with his next state-of-the-art engine, but I don’t really think that is a fair sample.

On the other hand, every single X-Box developer is guaranteed the power of NV20 (or equivalent or better). That would be something to behold.


What I’d like to know is why the “H”, “E” double hockey sticks a GeForce3 is going to cost TWICE that of an Xbox when, reportedly, the Xbox is actually going to perform BETTER than a GeForce3-equipped PC!!! Not to mention the fact that you still have to have a PC to go WITH the GeForce3. But with the Xbox, which costs HALF that of a GeForce3 card (did I mention that already?) it comes with everything you need: a HD, network, GFX chip (nv20) etc… Everything but the TV. So what gives? Am I the only one who thinks this is totally messed up? So not only do I have to shell out ~$1000-$1500 on the PC itself, I still have to spend another $600 just for the freaking video card, and I’ll still be running slower than if I’d just spent $300 for an Xbox!!!

  1. XBox won’t be out till fall at the earliest…compare prices then, not now.

  2. XBox has a shared memory architecture that may not be as fast, and thus not as expensive, as the memory on a graphics card. It doesn’t need to be as fast, since the resolutions can’t be cranked as high.

  3. It is well known that these huge companies take losses on game console hardware expecting to make it up in software.

  4. MS will be buying in bulk (understatement). Even if they bought them now they would pay much less than $600 each.

– Zeno

Actually, you’re wrong on point #2. The shared memory architecture is actually going to lead to higher performance. That’s why I said that it’s actually going to be faster than on a PC. That’s because you don’t have to load information from main memory into the card; it’s all the same memory. So when doing a glLoadMatrix or a glGetFloatv, there’s no real performance hit, because it all stays on-card.
2nd, you don’t have to explain to me the benefits of bulk purchasing. I fully understand basic economic principles… I just think it sucks that we’ll have to pay such an extreme sum of money just to get roughly equal performance with a console that costs half. I guess I’m just not used to having to shell out $600 for ANY single component for my computer! Not even CLOSE! Okay, okay, my Mitsubishi 19" did cost about $550… But I probably wouldn’t spend that much again. :slight_smile:

To point 2: it’s then possible to directly access every texture and every vertex in the whole memory… meaning you ‘can’ create your textures like this:

unsigned char* texture = malloc( width * height * 4 );

for( int i = 0; i < width * height; ++i ) {
    texture[ i * 4 + 0 ] = rand() % 256; /* R */
    texture[ i * 4 + 1 ] = rand() % 256; /* G */
    texture[ i * 4 + 2 ] = rand() % 256; /* B */
    texture[ i * 4 + 3 ] = 255;          /* A */
}

and then just call something like Bind( texture, width, height, RGBA );

and you can manipulate every pixel at any time.

(Of course OpenGL and DX do not support such functions, but theoretically it is possible. wglAllocateMemoryNV is a first step, where you get a pointer into nVidia RAM, but you always have to go through the CPU (and possibly the GPU) and the AGP (or whatever you use) to reach that memory… fast, but one memory for all is faster.)

There are a number of differences between the GF3 (NV20) and the XGPU (NV2A). They are different chips, designed for different uses. Although they have an almost identical feature set, it is not fair to put them up side by side in a direct comparison.

For one, jeez, wait until both chips are actually available before you start drawing premature conclusions!

  • Matt

Originally posted by Punchey:
Actually, you’re wrong on point #2. The shared memory architecture is actually going to lead to higher performance.

He wasn’t saying the performance wouldn’t be as good; he said the memory wouldn’t be as fast. IIRC the XBox is going to use 266 MHz DDR, whereas the memory on the GF3 is something like 400 MHz DDR. Surely the 266 has got to be a bit cheaper to buy than the 400.

Won :
Wait 1 year at least.
But as a gamer I don’t need it

For “Doom 3” I’ll buy a GeForce3.
This is sad :slight_smile: but it’s true :wink:

For most developers, the NV20 chip is like a dream.

Actually, I think many of us are FAR from truly exploiting a GeForce2, and YES, the NV20 is a dream come true :wink:

Concerning the price, I hope the GeForce3 won’t be that expensive, since the XBox’s NV2A will help cut the cost per unit…