GeForce FX

Ok, I just got done looking at the specs and the previews for this card. Let me just say… I WANT ONE NOW!!! But… I looked around for the release date and I couldn’t find one at all. Is it already out?

Nope. And don’t expect them until sometime in the first three or so months of next year.

-SirKnight

The press release said “February” AFAIR.

However, looking at the specs, I don’t see anything this card can do that the Radeon 9700, which has been available for a while now, can’t already do.

The nVIDIA proprietary vertex and fragment program extensions are more powerful than what the 9700 drivers currently expose, though. Is that worth the wait? You decide. Maybe just get both? :-)
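
For illustration, here is a minimal C sketch of how an application of the time might pick a shading path depending on which of those extensions the driver exposes. Only the standard GL calls and the extension name strings are real; the function names and the decision logic are made up for the example.

```c
#include <string.h>
#include <GL/gl.h>

/* Returns nonzero if the named extension appears in the GL extension string.
   (A plain substring search is good enough here as long as the full name is used.) */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

/* Pick a fragment shading path at startup: prefer the vendor-specific
   extension when it is exposed, fall back to the ARB one, then to older paths. */
void choose_fragment_path(void)
{
    if (has_extension("GL_NV_fragment_program")) {
        /* GeForce FX path: longer programs, richer instruction set */
    } else if (has_extension("GL_ARB_fragment_program")) {
        /* Radeon 9700 path (and other DX9-class parts) */
    } else {
        /* older hardware: fixed function or vendor combiner extensions */
    }
}
```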

Originally posted by jwatte:
The press release said “February” AFAIR.

However, looking at the specs, I don’t see anything this card can do that the Radeon 9700, which has been available for a while now, can’t already do.

The nVIDIA proprietary vertex and fragment program extensions are more powerful than what the 9700 drivers currently expose, though. Is that worth the wait? You decide. Maybe just get both? :-)

You kinda contradict yourself there, m8.
GeForce FX is more powerful than the R9700 Pro, make no mistake about that! It offers several significant advantages, such as dynamic flow control in the VS (as opposed to the static flow control offered by the R300), a conditional mechanism in the PS, longer instruction counts in the PS, and an overall better organization of the architecture (I saw the charts), so the NV30 is indeed quite a bit more powerful…

The remaining question is: will these advantages be utilized in some way by anyone other than OpenGL demo writers (which is what most people around here are)?
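
To make the VS flow-control distinction above concrete, here is a conceptual C sketch (not actual shader code, and the per-vertex field is invented for the example): with static flow control the branch depends only on a constant that is the same for the whole draw call, while with dynamic flow control it can be decided per vertex.

```c
typedef struct { float bone_weight; } Vertex;   /* hypothetical per-vertex data */

/* "Static" flow control: the condition is a program constant, identical for
   every vertex in the batch, so one path can be selected up front. */
void transform_static(const Vertex *v, int count, int four_bone_path)
{
    (void)v;  /* the branch never needs to look at per-vertex data */
    for (int i = 0; i < count; ++i) {
        if (four_bone_path) {
            /* ... four-bone skinning ... */
        } else {
            /* ... two-bone skinning ... */
        }
    }
}

/* "Dynamic" flow control: the condition comes from per-vertex data, so the
   decision can differ from vertex to vertex at run time. */
void transform_dynamic(const Vertex *v, int count)
{
    for (int i = 0; i < count; ++i) {
        if (v[i].bone_weight > 0.0f) {
            /* ... expensive path only where it is needed ... */
        } else {
            /* ... cheap path ... */
        }
    }
}
```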

The vertex and fragment programs of the GeForce FX are more powerful. However, you can implement the same effects on the 9700, just not in the same way. With high-level shading languages that compile into multiple passes, this will be transparent to the programmer. In the end it only boils down to which is faster (of which I’m sure the NV30 is).
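
As a rough sketch of that multi-pass idea, assuming the simplest case where the split partial results can just be summed in the framebuffer (real compilers may also need intermediate textures): the two helper functions below are hypothetical stand-ins for whatever the shading-language runtime would provide, while the GL state calls themselves are standard.

```c
#include <GL/gl.h>

/* Hypothetical helpers: bind the program generated for a given pass,
   and draw the geometry again. */
void bind_program_for_pass(int pass);
void draw_geometry(void);

/* A shader too long for the hardware is split into several passes whose
   partial results are accumulated with additive blending. */
void draw_split_shader(int num_passes)
{
    /* First pass lays down depth and the first partial result. */
    bind_program_for_pass(0);
    glDisable(GL_BLEND);
    draw_geometry();

    /* Remaining passes add their contribution on top of it. */
    glDepthFunc(GL_EQUAL);     /* only touch the pixels from the first pass */
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    for (int pass = 1; pass < num_passes; ++pass) {
        bind_program_for_pass(pass);
        draw_geometry();
    }

    /* Restore state. */
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
}
```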

Hi!
This card was sent to reviewers in November. Mass production of the chip starts in December 2002. You can expect the card in February 2003.

Mishuk

I’m wondering why the 0.13 micron process didn’t remove the extra heat, and why the frequency hasn’t been raised much?

Originally posted by M/\dm/:
I’m wondering why the 0.13 micron process didn’t remove the extra heat, and why the frequency hasn’t been raised much?

Because there’s about twice the number of transistors on the NV30 as on previous NVIDIA products.

Then it should work at least 2x faster: the GF4 Ti 4600 has a 300 MHz core, the GFX a 500 MHz core. I think that with a cooler like that I could get 700 MHz out of my GF2 GTS.

[This message has been edited by M/\dm/ (edited 12-16-2002).]

Originally posted by nutball:
Because there’s about twice the number of transistors on the NV30 as on previous NVIDIA products.

On the other hand, if you compare it to a comparably sized chip like the R300: that one is built on 0.15 micron, doesn’t require a whole lot of cooling, and runs at quite high frequencies too.

There is such a thing as designing a chip just to play the frequency game (*cough* Intel *cough*). Let’s not forget about the importance of CPI (clocks per instruction). I don’t want to call anyone a newb, but the reason AMD adopted the Performance Rating system was to reach out to those who don’t know about these things, not to lie to anyone. On a similar topic, a benchmark is just a program. The ideal way to judge a piece of hardware is to actually use it yourself.

CPI? Nope, these are graphics chips!

fillrate = number of pixel pipes * frequency
As long as one pixel pops out of every pipe each clock cycle, it’ll be fine.
Talking CPI in this context is just nonsense… I’d rather call it bandwidth constrained, if anything.

[This message has been edited by zeckensack (edited 12-16-2002).]
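
For concreteness, here is that formula with the commonly quoted figures of the time plugged in (a minimal C sketch; treat the pipe counts and clocks as illustrative spec-sheet numbers, not measurements):

```c
#include <stdio.h>

/* Peak theoretical fillrate from the formula above:
   fillrate = number of pixel pipes * core clock. */
static double fillrate_mpixels(int pipes, double core_mhz)
{
    return pipes * core_mhz;   /* pipes * MHz = Mpixels/s */
}

int main(void)
{
    printf("Radeon 9700 Pro: %.0f Mpixels/s\n", fillrate_mpixels(8, 325.0));
    printf("GeForce FX:      %.0f Mpixels/s\n", fillrate_mpixels(8, 500.0));
    return 0;
}
```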

I think Jonski’s point is correct, though slightly misdirected.

Of all the changes and improvements in the NV30, its bandwidth-reducing capabilities don’t seem that much better than the 9700 Pro’s. Since that is quite likely where the bottleneck will hit anyway, there’s no reason to assume that the NV30 is going to have a big performance lead over the 9700.

The big thing I see on the NV30 is the dependent texture reads. More data is available at more stages (both the vertex programs and the fragment programs). Dependent reading is a pain. There is definitely some sort of latency associated with it. Dependent reading of a texture essentially means no prefetch. I am sure they had to do a lot of work to hide the latency. Again, you can’t simply cache all the textures; you will be going to memory eventually. I really would like to see how much of a hit dependent texture reads will cost you.

Devulon
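
A minimal C sketch of the latency problem described above (the texture struct and the coordinate math are invented for the example): the address of the second fetch simply does not exist until the first fetch has returned, so it cannot be prefetched.

```c
typedef struct {
    const unsigned char *texels;  /* size x size, one channel for simplicity */
    int size;
} Texture;

static unsigned char fetch(const Texture *t, int u, int v)
{
    return t->texels[v * t->size + u];   /* may miss the cache -> memory latency */
}

unsigned char dependent_fetch(const Texture *t0, const Texture *t1, int u, int v)
{
    /* First fetch: its result is just data... */
    unsigned char c = fetch(t0, u, v);

    /* ...which is then reinterpreted as the coordinate of a second fetch.
       Until the first fetch completes, this address is unknown. */
    int u2 = c % t1->size;
    int v2 = (c / t1->size) % t1->size;
    return fetch(t1, u2, v2);
}
```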

Originally posted by Devulon:
The big thing I see on the NV30 is the dependent texture reads. More data is available at more stages (both the vertex programs and the fragment programs). Dependent reading is a pain. There is definitely some sort of latency associated with it. Dependent reading of a texture essentially means no prefetch. I am sure they had to do a lot of work to hide the latency. Again, you can’t simply cache all the textures; you will be going to memory eventually. I really would like to see how much of a hit dependent texture reads will cost you.

Devulon

Uhm, you can do them on the Radeon 8500 (two reads per texture unit), or on the Radeon 9700 (32 reads)… no problem. Nothing new on the NV30, except the read count (because of the raw instruction count of the fragment program).

The performance of the gfFX, compared to its raw, brute-force specs (“1 gig mem”, “500 MHz core”), is really slow… going by those numbers alone it should be nearly twice as fast as a Radeon 9700, but the card isn’t. So the Radeon 9700 is the more optimized hw… can’t wait for higher-clocked Radeons…

Originally posted by davepermen:
The performance of the gfFX, compared to its raw, brute-force specs (“1 gig mem”, “500 MHz core”), is really slow… going by those numbers alone it should be nearly twice as fast as a Radeon 9700, but the card isn’t. So the Radeon 9700 is the more optimized hw… can’t wait for higher-clocked Radeons…

A little biased towards ATI since you own one, aren’t we? How can you say that the 9700 is faster when you have no proof of it? Have you seen benchmarks of the GF FX? Do you have a GF FX? I don’t think so. I don’t mean to sound rude, and no offence to you, but making statements like this without knowing the full details is kinda ignorant. How can anyone believe ATI saying their card is faster and NVIDIA’s is slower? It also goes the other way: how can one believe NVIDIA saying their card is faster than ATI’s? That can’t and shouldn’t be believed at all; of course they are going to say their own product is superior and godly. But once people can get hold of a GF FX, we’ll then see who is the champion.

-SirKnight

[This message has been edited by SirKnight (edited 12-17-2002).]

http://www.digit-life.com/articles2/gffx/index3.html

It’s interesting that NVIDIA managed to implement texture fetching commands in the pixel processor without any delays. Even dependent texture fetches issued one after another are fulfilled in a single clock. This can give the GeForce FX a considerable advantage over the R300 in the case of complex shaders.

And that’s not the only advantage the GeForce FX has over the R300 architecture-wise!

Now, who was saying GeForce FX is inferior to R300?

[This message has been edited by alexsok (edited 12-17-2002).]

Originally posted by SirKnight:
A little biased towards ATI since you own one, aren’t we? How can you say that the 9700 is faster when you have no proof of it? Have you seen benchmarks of the GF FX? Do you have a GF FX? I don’t think so. I don’t mean to sound rude, and no offence to you, but making statements like this without knowing the full details is kinda ignorant. How can anyone believe ATI saying their card is faster and NVIDIA’s is slower? It also goes the other way: how can one believe NVIDIA saying their card is faster than ATI’s? That can’t and shouldn’t be believed at all; of course they are going to say their own product is superior and godly. But once people can get hold of a GF FX, we’ll then see who is the champion.

-SirKnight

[This message has been edited by SirKnight (edited 12-17-2002).]

You got it wrong.
Just reading the card’s main speed numbers, you would have to assume it’s over 2x as fast as the Radeon. But it isn’t (that is a fact… it’s about 10-20% faster on average, and that is a guess…). Now if you clock a Radeon to the same speed, it speeds up about linearly (that is proven), and voilà, you can outperform the gfFX before it’s even out (those are guesses, as no real gfFX has been in real hands yet…).
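
Spelling out that arithmetic in a small C program (the 10-20% lead and the linear clock scaling are the assumptions stated above, not measured facts):

```c
#include <stdio.h>

int main(void)
{
    /* Clocks from the thread: 500 MHz GeForce FX core vs 325 MHz Radeon 9700 Pro core. */
    double fx_clock = 500.0, r300_clock = 325.0;
    double clock_ratio = fx_clock / r300_clock;   /* ~1.54x */

    /* Assumed real-world GeForce FX lead over a stock 9700 Pro (middle of 10-20%). */
    double assumed_fx_lead = 1.15;

    /* If a Radeon really scaled linearly with clock, a 500 MHz part would be
       about clock_ratio times a stock 9700 Pro, i.e. past that assumed lead. */
    printf("clock ratio:                 %.2fx\n", clock_ratio);
    printf("assumed GeForce FX lead:     %.2fx\n", assumed_fx_lead);
    printf("hypothetical 500 MHz Radeon: %.2fx stock (%s)\n",
           clock_ratio,
           clock_ratio > assumed_fx_lead ? "ahead of the FX" : "still behind");
    return 0;
}
```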

The gfFX benches so far are NVIDIA-only releases, and they only compare against the GF4. I don’t think they are… wrong… but… possibly… a little biased… possibly…

I don’t see much use in the gfFX: it will be very expensive when it’s out in 1 or 2 months, but will not outperform the Radeon by much (that’s a guess). The additional features are proprietary, so they are not really useful in DX9, nor in OpenGL if your main target is… users… and not just gfFX users…

Then again, you can code additional routines to make use of the gfFX. I just dislike that proprietary extra coding…

Oh, and… what I really dislike is the inability of the gfFX to do floating-point cubemaps… As I’m very much into cubic shadow mapping, floating-point cubemaps on the gfFX would be very nice… There are workarounds, sure… still, it’s a bit annoying…
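
One common workaround of that era, sketched here in plain C, is to encode the distance into the ordinary 8-bit channels of a regular cubemap and reassemble it when comparing (in a fragment program the same split would be done with multiplies and fracs; the struct and function names below are just for illustration):

```c
/* Pack a distance in [0,1) into two 8-bit channels (16 bits of precision);
   using more channels gives more precision. */
typedef struct { unsigned char hi, lo; } Packed16;

Packed16 pack_distance(float d)
{
    Packed16 p;
    unsigned int q = (unsigned int)(d * 65535.0f);  /* quantize to 16 bits */
    p.hi = (unsigned char)(q >> 8);
    p.lo = (unsigned char)(q & 0xFF);
    return p;
}

float unpack_distance(Packed16 p)
{
    return ((p.hi << 8) | p.lo) / 65535.0f;
}
```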

Anyway, the gfFX will be the fastest card when it finally comes out. It will also be the most expensive card, and a card whose extra features don’t follow a real standard (=> much of its power will not really be pushed to the extreme in most situations, say all DX9 games).
In price/performance I currently see the ATI products as the winners of the duel. And the additional “feature” of the ATI card of only occupying one slot in the case makes it quite a bit more usable in small cases, or when you need other special cards in there as well…

Sure, I’m currently ATI-biased. After years of loving NVIDIA and supporting them, I found that somehow the ATI way is cleaner, simpler, more robust… and it follows the GL standards…

Anyway, it will be a funny next episode with the gfFX on the market… At least I’ve already seen real-life™ pictures of it running, and crashing with a BSOD… hehe, that was fun.

going by those numbers alone it should be nearly twice as fast as a Radeon 9700, but the card isn’t. So the Radeon 9700 is the more optimized hw… can’t wait for higher-clocked Radeons…

You make it seem like clock speed doesn’t count somehow… “the GFFX is only faster because of its higher clock”. So what? The same is true of Pentium vs Athlon, but the best Pentiums are faster at almost everything than the best Athlons right now, so I would maintain that Intel chose a better method of getting speed, at least for now.

– Zeno

The gfFX benches so far are NVIDIA-only releases, and they only compare against the GF4. I don’t think they are… wrong… but… possibly… a little biased… possibly…

You can bet your bottom dollar it’s biased, hehe.

So the GeForce FX can’t do floating-point cubemaps, eh? Well, that does suck. Shadow mapping using cube maps would turn out pretty darn good with the floating-point format. Too bad, maybe next time.

Anyway, it will be a funny next episode with the gfFX on the market… At least I’ve already seen real-life™ pictures of it running, and crashing with a BSOD… hehe, that was fun.

Ya, LOL! I watched that video and at first was like, hey, this thing is pretty cool. Then it crashed TWO times, wow… pure comedy. What a way to showcase your product to tons of people on TV, eh? I bet that NVIDIA guy controlling the demos was trying very hard to keep a straight face and not cuss the thing out. It’s a shame that happened because, from what I have read on message boards about that video, people were saying they were thinking of buying the GeForce FX until they saw that. Some will disregard it as just early buggy drivers and will still get one and have faith, but others won’t, which isn’t good for sales.

-SirKnight