Carmack's .plan

JC has an updated .plan where he discusses feature and performance differences between nVIDIA and ATI, as well as the new naming conventions.
http://www.shacknews.com/finger/?fid=johnc@idsoftware.com

Just thought some of you might be interested

– Zeno

Oh sure, (almost) everyone on this forum already said that NVIDIA should have named the GF4 MX something else, and look who gets on the bandwagon!

Pretty interesting indeed!

What I find most interesting is that the ATI drivers initially had that many problems. How can they have so much trouble delivering bug-free drivers?
JC didn’t mention how many bugs he found in the drivers, or where, but it seemed there weren’t fewer.
What about you guys here with Radeon cards, have you experienced any problems using the ATI extensions?
Or is JC using features nobody else among us has even thought about? I haven’t read any thread around here describing serious problems with the Radeon.
I can’t test this myself, because I only own GeForce cards.

Lars

It’s interesting that John Carmack is struggling with z-buffer invariance between passes when he switches between normal rendering and vertex programs on NVIDIA. I wonder if this is beyond the scope of glPolygonOffset to fix, or if a reliably working glPolygonOffset implementation would solve the problem.
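For reference, the usual tool for depth fighting between passes is polygon offset: bias the dependent pass slightly toward the viewer so it survives the depth test despite LSB-level differences in the transform. A minimal sketch follows; the GL entry points are stubbed as no-ops here so the snippet stands alone (a real program includes <GL/gl.h> and calls the actual functions):

```c
typedef unsigned int GLenum;
typedef float        GLfloat;

#define GL_POLYGON_OFFSET_FILL 0x8037

/* No-op stand-ins for the real GL entry points so this compiles alone. */
static void glEnable(GLenum cap)  { (void)cap; }
static void glDisable(GLenum cap) { (void)cap; }
static void glPolygonOffset(GLfloat factor, GLfloat units)
{
    (void)factor; (void)units;
}

static int g_draws = 0;                 /* counts geometry submissions */
static void draw_scene(void) { g_draws++; /* glBegin()/glEnd() etc. here */ }

/* Render a dependent pass nudged toward the viewer.  'factor' scales
 * with polygon slope, 'units' with the smallest resolvable depth step;
 * negative values pull fragments closer to the eye. */
static void draw_biased_pass(void (*draw_geometry)(void))
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);
    draw_geometry();
    glDisable(GL_POLYGON_OFFSET_FILL);
}
```

Whether this actually hides an invariance break depends on the driver honoring the factor/units semantics, which is exactly the "reliably working implementation" caveat.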

Well, Carmack did say that NVIDIA is supposed to make a driver fix or something that will let us use vertex programs and fixed functions in the same pass. I sure hope so; that problem caused me a lot of grief not too long ago. Also, from what JC was saying about the ATI pixel shader stuff, it actually made me wish I had a Radeon. The ability to read from a texture twice sounds darn cool. Would it be possible to add this kind of support for the GeForce 3 and 4 Ti cards in the drivers, or would that have to be a hardware change? Unfortunately, I know the ability to have six texture units would have to be a hardware change. Too bad.

Also, while we are on the topic of Carmack discussing the two cards (ATI and NVIDIA), I found it kind of weird that the ATI cards support two NVIDIA extensions yet the NVIDIA cards don’t support any ATI extensions. What is up with that? I think it would be cool for NVIDIA to support all ATI extensions and ATI to support all NVIDIA extensions. But I don’t think that would be possible, since both chipsets are very different. It would probably mean a larger die and make the chips cost a hell of a lot more. Oh well, we can still dream, can’t we?

-SirKnight

This is a complex issue. A reasonable rule of thumb is that both NVIDIA and ATI will do what’s in their own interests. ATI & NVIDIA are not in the same position so their actions are different, but I wouldn’t put it down to altruism. There may also be other issues we never see like I.P. ownership. Personally I think it’s in both their interests to work on common API extensions because it will drive high end sales. The biggest obstacle to high end penetration is advanced feature support in games and the biggest obstacle to advanced feature support is a lack of easy to use, shared vendor API extensions for those features.

[This message has been edited by dorbie (edited 02-11-2002).]

> What about you guys here with radeon cards, have you experienced any problems using the ati-extensions ?

Yeah, tons of problems. At one point I was thinking of simply switching back to my GeForce 2, but after reinstalling the drivers four times and reinstalling Win2k, I finally got rid of the (main) bugs. To give you an example, wireframe mode with texture mapping was randomly crashing my application.

Y.

JC has posted the following update to his plan.

“8:50 pm addendum: Mark Kilgard at Nvidia said that the current drivers already
support the vertex program option to be invariant with the fixed function path,
and that it turned out to be one instruction FASTER, not slower.”

Thought this may be of interest.

Originally posted by Lars:
[b]What I find most interesting is that the ATI drivers initially had that many problems. How can they have so much trouble delivering bug-free drivers?
JC didn’t mention how many bugs he found in the drivers, or where, but it seemed there weren’t fewer.
What about you guys here with Radeon cards, have you experienced any problems using the ATI extensions?
Or is JC using features nobody else among us has even thought about? I haven’t read any thread around here describing serious problems with the Radeon.
I can’t test this myself, because I only own GeForce cards.

Lars[/b]

I had some problems with my computer freezing with certain vertex shaders on some drivers, but I did away with vertex shaders since they weren’t much use in my project anyway; not sure if it’s been fixed. There was also a problem with glSetFragmentShaderConstantATI(), which I was going to report, only to find it had been solved in the latest drivers before I got around to it. There was a problem with going in and out of fullscreen mode; not a biggie, and it has been solved. There was a problem with mipmapped cubemaps, which has supposedly been solved, but no driver with the fix is available yet; it should be within days.
With the latest driver the only problem that remains for me is the cubemap bug, but it can be worked around temporarily by turning mipmapping off.

There was a problem with mipmapped cubemaps, which has supposedly been solved

It IS fixed in driver 6.13.2552.

How do we enable this option to enable Vertex programs, and fixed function paths to be used together in multipass?

Surely if it’s faster than before, why not just make it the norm all the time, or will this break stuff?

Who the hell is this John Carmack ???

Sorry, couldn’t resist, those who have read a certain post on slashdot will understand the joke

Does the 8500 support VAR? Or does it have its own version of VAR? (Is that what ‘vertex objects’ are in Carmack’s article?)
I’d be lost without VAR; it’s really given my project a boost.
I tried an 8500 a few months ago, but the drivers were so buggy that I didn’t have time to wait for updated drivers. I was also getting worse frame rates on it than on the GeForce2 GTS once I finally got an OpenGL app running.

Does the 8500 support VAR?
No.

Or does it have its own version of VAR? (Is that what ‘vertex objects’ are in Carmack’s article?)
Yes, and it’s much better than VAR: you don’t have to manage and synchronize AGP memory yourself.
You just call glNewObjectBufferATI() with a byte size and a pointer to your data, and you get back an ID for your data in fast memory.
Then you use glArrayObjectATI() instead of gl*Pointer().

I’d be lost without VAR; it’s really given my project a boost.
GL_ATI_vertex_array_object is fast too.

I tried an 8500 a few months ago, but the drivers were so buggy that I didn’t have time to wait for updated drivers.
It’s true that the drivers are still buggy, but it’s getting better…
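To make the calling sequence concrete, here is a sketch of that path. The signatures follow the GL_ATI_vertex_array_object spec, but the GL typedefs and tokens are inlined, and the extension pointers are left unloaded, so the fragment stands alone; in a real app you would fetch the entry points with wglGetProcAddress / glXGetProcAddressARB and include the proper headers:

```c
/* Minimal GL typedefs and tokens so the sketch stands alone; a real
 * program gets these from <GL/gl.h> plus the ATI extension header. */
typedef unsigned int GLuint;
typedef unsigned int GLenum;
typedef int          GLint;
typedef int          GLsizei;
typedef float        GLfloat;

#define GL_FLOAT        0x1406
#define GL_VERTEX_ARRAY 0x8074
#define GL_STATIC_ATI   0x8760

/* Extension entry points; obtained once at startup with
 * wglGetProcAddress / glXGetProcAddressARB in a real application. */
typedef GLuint (*PFNGLNEWOBJECTBUFFERATIPROC)(GLsizei size, const void *pointer,
                                              GLenum usage);
typedef void   (*PFNGLARRAYOBJECTATIPROC)(GLenum array, GLint size, GLenum type,
                                          GLsizei stride, GLuint buffer, GLuint offset);
static PFNGLNEWOBJECTBUFFERATIPROC glNewObjectBufferATI;
static PFNGLARRAYOBJECTATIPROC     glArrayObjectATI;

static const GLfloat verts[] = {  /* one triangle, xyz per vertex */
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

/* Copy the vertex data into driver-managed fast memory and bind it as
 * the vertex array source; returns the object buffer id. */
static GLuint upload_triangle(void)
{
    GLuint buf = glNewObjectBufferATI(sizeof(verts), verts, GL_STATIC_ATI);
    glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, 0, buf, 0);
    return buf;
}
```

After that, a plain glDrawArrays/glDrawElements sources its vertices from the driver-managed buffer, with no AGP synchronization on your side.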

Originally posted by Nutty:
How do we enable this option to enable Vertex programs, and fixed function paths to be used together in multipass?

Maybe it’ll be exposed through the new GL_NV_vertex_program1_1 extension?

– Tom

Doh! Opened my big mouth again.

[This message has been edited by Gorg (edited 02-12-2002).]

Tom,

Yes, position invariance will be exposed in NV_vertex_program1_1.

Docs should be available soon.

Thanks -
Cass
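In the meantime, here is my understanding of what a position-invariant program will look like; the exact OPTION spelling is inferred from the extension name and should be double-checked once the docs appear:

```c
/* A VP1.1 vertex program requesting position invariance.  With the
 * option enabled the driver computes clip-space position exactly as
 * the fixed-function path would, so mixed-path multipass rendering
 * stays z-exact; the program must then not write o[HPOS] itself.
 * This one just passes the vertex color through. */
static const char invariant_vp[] =
    "!!VP1.1\n"
    "OPTION NV_position_invariant;\n"
    "MOV o[COL0], v[COL0];\n"
    "END\n";

/* Loaded like any other vertex program, e.g.:
 *   glLoadProgramNV(GL_VERTEX_PROGRAM_NV, progId,
 *                   (GLsizei)(sizeof(invariant_vp) - 1),
 *                   (const GLubyte *)invariant_vp);
 */
```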

Cass, how come Mark Kilgard says this new feature for invariance in mixed-pass rendering is slightly faster than normal?

Just curious, that’s all… If it’s a tight-lipped secret, I understand.

Any time scale for when nvidia’s new extensions are likely to be published?

Cheers,
Nutty

I think it’s Carmack saying that the vertex program he writes with matched Z is one instruction shorter than his earlier vertex program without matched Z. Maybe it was mjk saying it, but the gist is the same.

[This message has been edited by dorbie (edited 02-12-2002).]