[OT]: Upgrading video card, would like opinions...

I have a question about an ATI 9700.
If I buy one, what can I do with it today using OpenGL that I cannot do with an 8500?
Has ATI released any 9700 OpenGL extensions?
I’ve never heard anything about that!

Oops, I posted my previous message too fast; I've just found the thread:
'No New OGL Extensions. Waiting for DX9?'
Everything is inside… Sorry.

Originally posted by john_at_kbs_is:
[b]Ok, I'm thinking of going with NVIDIA, because that's what I currently have. I do have another question:

I want to start using vertex and fragment programs. I've found a lot of info on vertex programs, but if I look up fragment info on NVIDIA's developer website, all I get is texture and pixel shader demos. Can you only use fragment programs with Cg? Or is there an OpenGL extension I should be looking for (I tried looking for NV_fragment_program)?

Thanks…

John.[/b]

John,

You don’t need to use Cg to use vertex and fragment programs.

Currently shipping NVIDIA cards (except TNT) support NV_vertex_program and ARB_vertex_program. The extension specs can be found in the OpenGL extension registry (linked off opengl.org).

NV30 additionally supports NV_vertex_program2 and NV_fragment_program. The specs for these are published and can be found at:
http://developer.nvidia.com

The latest “Detonator 40” driver (40.41) posted on NVIDIA’s web site includes “NV30 Emulate” support to help developers use the new functionality exposed via these extensions.

While Cg is not necessary to use these features, it may make them easier to use.
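
To make that concrete, here is a rough sketch of loading a trivial vertex program through ARB_vertex_program by hand, with no Cg involved. The program string and error handling are only illustrative, and it assumes the extension entry points have already been fetched with wglGetProcAddress:

[code]
/* Needs <stdio.h>, <string.h>, GL headers plus glext.h for the ARB enums.
   Assumes glGenProgramsARB/glBindProgramARB/glProgramStringARB were loaded
   via wglGetProcAddress and that ARB_vertex_program is in the extension string. */
static const char *vp =
    "!!ARBvp1.0\n"
    "# transform the vertex by the modelview-projection matrix, pass colour through\n"
    "PARAM mvp[4] = { state.matrix.mvp };\n"
    "TEMP  pos;\n"
    "DP4   pos.x, mvp[0], vertex.position;\n"
    "DP4   pos.y, mvp[1], vertex.position;\n"
    "DP4   pos.z, mvp[2], vertex.position;\n"
    "DP4   pos.w, mvp[3], vertex.position;\n"
    "MOV   result.position, pos;\n"
    "MOV   result.color, vertex.color;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(vp), vp);
if (glGetError() != GL_NO_ERROR)
    printf("vertex program error: %s\n",
           (const char *)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
glEnable(GL_VERTEX_PROGRAM_ARB);   /* program now runs for every vertex drawn */
[/code]

NV_fragment_program works along similar lines with an "!!FP1.0" program string, though it uses the NV program-object calls rather than the ARB ones; check the spec on the NVIDIA page above.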

Before buying a new card, I'll check whether the Trident XP4 really offers what it promises: a DX9-compatible card for 99 Euros/Dollars. It almost certainly won't, but I think it's still worth a look.

Trident XP4??? I didn't know Trident was still in business… Technically, if I wrote new drivers that supported DX9 for my old Stealth 3D, that would make it a DX9 card, but it still wouldn't make it a good card…

Sorry for the NV_fragment_program post in the middle of this thread (I double-posted, my bad). I did get some info on that extension from the other post. I was under the impression that fragment programs were introduced with the GF3; that was an assumption based on the fact that the GF3's core is 'programmable'. Come to think of it, though, my crappy little GF2's core is 'programmable' too (register combiners, env combine), and it's not even remotely as flexible as a true fragment program.

So, on the GF3/4 I guess I'm stuck with the same fragment extensions I'm currently using (register combiners, env combine), or is there something more flexible?

Thanks!

John.

Maybe this link is a little OT, but I think it's interesting: http://www.bluesnews.com/plans/1/

Originally posted by john_at_kbs_is:
So, on the GF3/4 I guess I'm stuck with the same fragment extensions I'm currently using (register combiners, env combine), or is there something more flexible?

Well, it has 8 general combiners instead of four, and it has texture shaders to set up one of several predefined dependent texture reads in front of the register combiners… I would suggest getting at least an ATI Radeon 9700: go for the DX9 standard… and the ATI extensions are based on ARB_vertex_program, so they're quite handy as well…
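
Just to give a feel for what driving those extra combiner stages looks like, here is a rough two-stage NV_register_combiners sketch; the modulate chain is only a placeholder, and it assumes the NV entry points are already loaded:

[code]
/* Sketch: two general combiner stages (GF3/4-class parts expose up to 8).
   Needs the NV_register_combiners enums/functions from glext.h, loaded
   via wglGetProcAddress. */
glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 2);

/* stage 0: spare0.rgb = tex0 * tex1 */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_TEXTURE1_ARB, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

/* stage 1: spare0.rgb = spare0 * primary (vertex) colour */
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER1_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_PRIMARY_COLOR_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER1_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_FALSE, GL_FALSE, GL_FALSE);

/* final combiner: output = spare0 (A*B with B forced to 1, C and D zeroed) */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV, GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,      GL_UNSIGNED_INVERT_NV,   GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,      GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,      GL_UNSIGNED_IDENTITY_NV, GL_RGB);
[/code]

It is still a fixed set of stages rather than a real fragment program, but with 8 stages plus the texture-shader dependent reads you can get surprisingly far on a GF3/4.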

Their drivers do suck. We regret ever using the ATI FireGL 2.

[This message has been edited by John Jenkins (edited 09-10-2002).]

The statement "their drivers suck", which is seen so often, is a) not true and b) stupid. They sucked, or however you want to put it… but it's not true anymore; there aren't that many people with driver problems…
AND
drivers change… if they suck, get an update… it's not a law of nature, it's just the current state, which can change every minute (okay, every week or month or so)…
And nvidia drivers "suck" as well… I know a lot of people with problems, so by that logic they suck badly too… In fact, their drivers are very good, and it's quite normal that they can have problems on some configurations… Possibly another driver, or even hardware that doesn't work 100%, messes up the nvidia driver itself…
Both ATI and nvidia drivers are of good quality. Personal problems can always happen, but they are not the general rule… This is badmouthing a company, and arguably even illegal.

Are you saying that their drivers are better now?

Unfortunately, they've still got a fair way to go as far as stability goes. Trust me, I've actually tried them.

And trust me, others say they work quite well… so whom should I believe now? Knacky, or some other dudes (of whom there are more than the negative ones)…

If I count, ATI drivers are okay… and if I count the people I personally know who have bugs with nvidia drivers, it's about 50:50! So nvidia drivers don't fare as well as people claim…

Dave, if you haven’t tried an ATI card, don’t comment on the stability of their drivers - you simply aren’t qualified in the most fundamental way.
ATI need to try harder. NVidia are ok.
I’m not being negative deliberately, it’s just my experience.

I can confirm that the latest radeon drivers still have lots of bugs in them - and we’re not talking fancy d3d/opengl features here, we’re talking basic stability in stress-free situations. Unforgivable.

Well, I have an NVIDIA card and it's not the most stable. I think these issues often boil down to mobo + card combinations. Carmack seemed to be impressed with ATI's latest effort. Is he qualified, knackered?

With my GeForce 3 card I got random flickering polys that only seemed to worsen with each driver release. I upgraded my mobo drivers recently and the problems are largely solved, although I did get some stability issues. A few more BIOS tweaks like AGP aperture size and PCI spread spectrum seemed to solve them (fingers crossed).

Getting a stable system with any card takes, I think, a more holistic approach to the problem. Blaming the graphics card driver seems a little naive. At the very least, if you're trying to engineer a working system in a simulation environment, you want to try a range of mobos and BIOS settings with the card you are interested in.

Hey, what is the ATI equivalent of wglAllocateMemoryNV, and is there anything new on the R9700?

ATI_vertex_array_object seems to hint at being the equivalent, but I can't be sure.

PS: I think it was on ATI's site where I read that ATI intends to change this idea that "their drivers suck". I hope they succeed.

V-man

Originally posted by V-man:
[b]Hey, what is the ATI equivalent of wglAllocateMemoryNV, and is there anything new on the R9700?

ATI_vertex_array_object seems to hint at being the equivalent, but I can't be sure.[/b]

That's the glNewObjectBufferATI function (I think that's it). You can map the array in memory to a pointer using the ATI_map_object_buffer extension; though there isn't a spec for it, it shouldn't be that hard to figure out from the two entries in the glati.h file.

[This message has been edited by NitroGL (edited 09-10-2002).]
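
Roughly, and treating this as a guess pieced together from the glati.h entries rather than a spec-backed example, the usage looks something like this (entry points assumed to be fetched with wglGetProcAddress):

[code]
/* Sketch: ATI_vertex_array_object + ATI_map_object_buffer. */
GLuint buf;
float *verts;

/* allocate a 1000-vertex (xyz float) buffer in driver-managed memory */
buf = glNewObjectBufferATI(1000 * 3 * sizeof(float), NULL, GL_DYNAMIC_ATI);

/* map it into the address space, fill it, unmap it (the unspecced part) */
verts = (float *)glMapObjectBufferATI(buf);
/* ... write vertex data into verts here ... */
glUnmapObjectBufferATI(buf);

/* point the vertex array at the buffer object instead of client memory */
glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, 0, buf, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_TRIANGLES, 0, 1000);
[/code]

So it is less like wglAllocateMemoryNV handing you raw AGP/video memory and more like a buffer object that the driver manages for you.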

Originally posted by dorbie:
[b]Well, I have an NVIDIA card and it's not the most stable. I think these issues often boil down to mobo + card combinations. Carmack seemed to be impressed with ATI's latest effort. Is he qualified, knackered?

With my GeForce 3 card I got random flickering polys that only seemed to worsen with each driver release. I upgraded my mobo drivers recently and the problems are largely solved, although I did get some stability issues. A few more BIOS tweaks like AGP aperture size and PCI spread spectrum seemed to solve them (fingers crossed).

Getting a stable system with any card takes, I think, a more holistic approach to the problem. Blaming the graphics card driver seems a little naive. At the very least, if you're trying to engineer a working system in a simulation environment, you want to try a range of mobos and BIOS settings with the card you are interested in.[/b]

Oh god, the big boys are here!
Look, stop this "ATI have sorted things out" talk. It's just not true (and I really do wish it were true - the 8500 is a lovely card, on paper).
Dorbie, I have tried 2 (count 'em) Radeon 8500s in many different machines, with many different configurations - P3/P4, Athlon, Duron, dual/single processors - with all recent driver releases, and I very commonly get show-stopping bugs - blue screens, mainly.
Why on earth would I be saying this if it weren’t true?
Just look through the newsgroups to see people with similar problems.
John Carmack couldn’t even get his quake console to render properly, only a few months ago - maybe they’ve fixed all the bugs that directly affect the paths his doom3 engine takes, but that’s no help to the rest of us, is it?
I've no doubt you've had problems getting your nvidia card to work on your specific machine, dorbie - but that's pretty irrelevant to what I'm saying.

Another deciding factor:

The R300 has 8 texture channels, right? How many will the NV30 have?

Also a game theory related question:

When multi-pass texturing, what is the maximum number of texture layers you would put down to get the right effect on a poly? Right now I'm stuck with a card that supports only 2 texture channels, so I end up making 2-3 passes depending on the effects used. The problem is that my pipeline (due to the math involved) may still need to be split into at least 2 passes on a card that supports 8+ channels (I'll still end up using only 4).

Just wondering what you guys thought…

John.
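
For what it's worth, one way to avoid hard-coding the split is to let the pass count follow the unit count the card reports at runtime. A minimal sketch, assuming ARB_multitexture and a made-up 7-layer effect:

[code]
/* Needs <stdio.h> and the ARB_multitexture enum from glext.h. */
GLint units;
int layers = 7;   /* hypothetical: base + detail + 2 lightmaps + gloss + ... */
int passes;

glGetIntegerv(GL_MAX_TEXTURE_UNITS_ARB, &units);

/* ceiling division: 7 layers on 2 units -> 4 passes,
   on 4 units -> 2 passes, on 8 units -> 1 pass */
passes = (layers + units - 1) / units;
printf("%d texture units -> %d passes for %d layers\n", units, passes, layers);
[/code]

Of course, if the math between layers forces an intermediate result into the framebuffer, you are stuck with the extra pass no matter how many units are free, which sounds like the situation being described above.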

Originally posted by knackered:
John Carmack couldn’t even get his quake console to render properly, only a few months ago

Ehm, that was a beta Radeon 8500; that would be more than a year ago.

Originally posted by knackered:
I've no doubt you've had problems getting your nvidia card to work on your specific machine, dorbie - but that's pretty irrelevant to what I'm saying.

Why are your problems more relevant?