GeForce3 - Why the bad results?

Yeah, that’s exactly what I was thinking. I do have a GeForce2 GTS and it still serves me well.

Now, about the 3 points you listed, they’re not reason enough for me to throw away my GeForce2 and buy a GeForce3 for $500.

What I’m really saying is that hopefully nVidia and the card manufacturers will drop the price DRASTICALLY, to about $350 (that’s as much as I’m willing to pay for a VIDEO card).

But OH, to program a GeForce3 would be a big dream come true! It would turn out to be a pretty expensive dream, though.

Anyway, as with all nVidia cards, I tend to stay away from the first generation of cards that implement a big new feature (the GeForce3 in this case, since it’s the first card to implement HW shaders; I stayed away from the GeForce256 and got the second-generation card, the GeForce2). I find it best to do it this way since I regard the first generation of cards as test subjects, and nVidia always does a great job with their second generation of cards. This prolly means that I’ll wait till the GeForce4 comes out. That will most likely be my next video card (unless nVidia starts coming out with HW-accelerated holographic projector cards!)

It’s not at all misleading to talk about setup rates. Triangle setup rate is a technical term referring to the speed of a specific graphics pipeline stage. Other pipeline stages have speeds too, but those speeds are measured in different units. The vertex unit can process X million vertices per second, where X is a function of the transform/light state. The rasterizer can output X pixels per second (X usually a constant). The pixel pipelines can output X million fully-shaded pixels per second (dependent on many factors). And the memory interface can handle X gigabytes of data per second.
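One way to see why each stage gets its own units: for a given frame’s workload, each stage needs work divided by rate seconds, and the slowest stage sets the frame time. A rough sketch of that idea, with purely illustrative numbers (these are not actual GF3 specs):

```python
# Illustrative pipeline-bottleneck sketch. Each stage's rate is in its
# own units (vertices/sec, triangles/sec, pixels/sec), so we compare
# stages by the time each needs to chew through one frame's workload.

workload = {"vertices": 1e6, "triangles": 0.5e6, "pixels": 20e6}   # per frame
rates = {"vertices": 30e6, "triangles": 40e6, "pixels": 800e6}     # per second

# The slowest stage (largest work/rate) determines the frame time.
frame_time = max(workload[s] / rates[s] for s in workload)
fps = 1.0 / frame_time
```

With these made-up numbers the vertex unit is the bottleneck, and the other stages’ higher rates don’t help.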

Performance numbers are performance numbers. That performance number is a true performance number. Saying the GF3 can set up 40M triangles/sec is just like claiming that a PC2100 DDR memory system has 2.1 GB/s of memory bandwidth, or that a 32-bit, 33 MHz PCI bus has 133 MB/s of bandwidth, or that an 800 MHz CPU with 4 functional units could do as many as 3.2 billion operations per second.
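The arithmetic behind those figures is just units-per-cycle times clock rate. A minimal sketch of how each number falls out (the bus widths and clocks are the standard ones for those parts, but every result is a theoretical ceiling, not a sustained rate):

```python
# Peak-rate arithmetic: units per cycle x clock rate.
# These are theoretical ceilings, not what a real app sustains.

def peak(per_cycle, clock_hz):
    return per_cycle * clock_hz

# PC2100 DDR: 64-bit (8-byte) bus, 133 MHz, 2 transfers/cycle (DDR)
ddr_bandwidth = peak(8 * 2, 133e6)    # bytes/sec, ~2.1 GB/s

# PCI: 32-bit (4-byte) bus at 33 MHz, 1 transfer/cycle
pci_bandwidth = peak(4, 33e6)         # bytes/sec, ~133 MB/s

# 800 MHz CPU with 4 functional units, 1 op each per cycle
cpu_ops = peak(4, 800e6)              # ops/sec, 3.2 billion
```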

  • Matt

Gecko, ELSA have cut the price of their GF3 from $550 to $400.
http://www.rivastation.com/index_e.htm

But I want a Creative Labs GeForce3 card

[This message has been edited by TheGecko (edited 04-07-2001).]

40 million tris!!
I thought I read the Xbox can do 125 million tris a second. What’s up?

The XBox chip and the GF3 are not the same chip.

  • Matt

I know that right now there aren’t games to take advantage of the new features, but it’s also true that the GeForce3 is paradise for a game developer, or for an OpenGL programmer… I mean, we’re talking about features implemented in hardware that would have sounded like science fiction some time ago! …well, on the PC at least, and sure, at $500.
That’s my point of view…

  • Royconejo.

Originally posted by mcraighead:
Performance numbers are performance numbers. That performance number is a true performance number

You’re starting to talk like my maths teacher…

Seriously speaking, I’m getting 15M dynamic, lit and textured triangles per second (GL_TRIANGLES, not triangle strips) on my GeForce2 MX, the same number I got in the ‘VAR-Fence’ nvidia demo… so maybe the ‘maximum theoretical number’ could be reached by a real app.
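A figure like that 15M is usually derived from triangles submitted per frame times frame rate. A minimal sketch of that calculation, with made-up per-frame numbers (these are not measurements from the demo):

```python
# Deriving a triangles/sec figure from a running app:
# triangles drawn per frame x frames per second.

def tris_per_second(tris_per_frame, fps):
    return tris_per_frame * fps

# e.g. 250,000 independent GL_TRIANGLES per frame at 60 fps
rate = tris_per_second(250_000, 60)   # 15 million tris/sec
```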

  • Royconejo.

not the same chip obviously
let me get the maths straight
GeForce3: 40 mil polygons, $500
Xbox: 125 mil polygons, $400
i luv it
yes i realise MS are gonna sell the Xbox at a big loss per unit, but still, you’ve gotta laugh