Carmack .plan + ARB2

I wouldn’t be surprised if they used OpenAL for sound, despite the fact that hardware drivers are somewhat nonexistent. That would require only one codebase for Windows, Linux, and Mac. Plus UT2K3 uses it, so it’s proven.
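Something like this would be about all the platform-specific setup needed: a minimal sketch of OpenAL context creation using the standard AL/ALC entry points (error checking omitted, and purely illustrative, not anything from the Doom 3 code):

#include <AL/al.h>
#include <AL/alc.h>

/* Minimal OpenAL init: the same C code compiles unchanged on Windows,
   Linux and Mac, which is the whole portability argument above. */
int init_sound(void)
{
    ALCdevice  *device = alcOpenDevice(NULL);   /* default output device */
    ALCcontext *context;

    if (!device)
        return 0;

    context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);
    return 1;   /* ready for alGenBuffers()/alGenSources() etc. */
}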

The reason for my original post was that Carmack mentioned in a previous .plan that he was gonna push OpenGL 2 more (and was gonna code up a path for Doom 3 with it).

>>NV10 (full featured, five rendering passes, no vertex programs)<<
Also, by “pass” I assume he means passes per light, e.g. 20 passes for 4 lights.

Also >full featured<, thus it’s all possible with register combiners.
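To make the pass-per-light arithmetic concrete, the structure would be roughly this (a purely hypothetical sketch; draw_one_pass() is a made-up stand-in, not anything from the actual engine):

/* On a path without vertex programs, each light needs its own group of
   passes, so total passes = passes_per_light * num_lights. */
enum { NV10_PASSES_PER_LIGHT = 5 };

static void draw_one_pass(int light, int pass) { (void)light; (void)pass; }

static int render_all_lights(int num_lights)
{
    int total = 0;
    for (int light = 0; light < num_lights; light++)
        for (int pass = 0; pass < NV10_PASSES_PER_LIGHT; pass++) {
            draw_one_pass(light, pass);
            total++;
        }
    return total;   /* 4 lights -> 20 passes, as above */
}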

There is, AFAIK. But do you see OpenGL 2.0 drivers on ATI and nVidia hardware to run it on? I don’t think OpenGL 2 will surface until at least the next generation of video cards.

Full hardware support? Well, maybe not. But OpenGL 2.0 drivers are already being developed for current cards, even though there’s still a long way to go.

I heard from a reliable/visionary source that OpenGL 2.0 will not pass as the monolithic upgrade it was originally intended to be. It’s seen as too much of a giant pill to swallow and inherently against the OpenGL philosophy thus far. What will probably happen is OpenGL 1.5 with significant parts of the OpenGL 2.0 spec included as extensions, and compatibility with all earlier releases.

As for shader languages, things may look different on the other side of ubiquitous ARB_fragment_program availability.

[This message has been edited by dorbie (edited 01-31-2003).]

Originally posted by V-man:
Thanks for the link, zed. BTW, how do you find this .plan thing?

If you’re on a *nix machine (or have finger for Windows), you can just do ‘finger johnc@idsoftware.com’ and the idsoftware server will give you his .plan file directly.
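Under the hood it’s just the finger protocol (TCP port 79: send the user name, read the reply). A rough C sketch with POSIX sockets, no error handling, host and user hard-coded for the example:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct hostent *host = gethostbyname("idsoftware.com");
    struct sockaddr_in addr;
    char buf[4096];
    int n, fd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(79);                 /* finger port */
    memcpy(&addr.sin_addr, host->h_addr_list[0], host->h_length);

    connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    write(fd, "johnc\r\n", 7);                   /* ask for johnc's .plan */
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        fwrite(buf, 1, n, stdout);               /* dump the reply */
    close(fd);
    return 0;
}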

GL2 is partially here already… at least its ideas are.

There is now:
ARB_vertex_program
ARB_fragment_program
and soon:
ARB_vertex_array_object
ARB_render_texture
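(A quick way to see which of those a driver already exposes is a substring check on GL_EXTENSIONS; just a sketch, it assumes a GL context is already current, and a real check should match whole tokens:)

#include <string.h>
#include <GL/gl.h>

/* Crude extension query: returns non-zero if 'name' appears in the
   extension string of the current context. */
int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, name) != NULL;
}

/* e.g. has_extension("GL_ARB_vertex_program"),
        has_extension("GL_ARB_fragment_program") */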

OpenGL 2.0 aims to provide a modern interface, unified for the future. Everyone had developed their own interfaces for the same tasks; now they’re starting to move together again to build one strong OpenGL. That’s how the whole discussion about GL2 started, and this is what we have so far.

And more will come. Stuff directly from the GL2 specs will get implemented, etc…

Originally posted by dorbie:
I heard from a reliable/visionary source that OpenGL 2.0 will not pass as the monolithic upgrade it was originally intended to be. It’s seen as too much of a giant pill to swallow and inherently against the OpenGL philosophy thus far. What will probably happen is OpenGL 1.5 with significant parts of the OpenGL 2.0 spec included as extensions, and compatibility with all earlier releases.

This is good news. With the recent talk of a new render-to-texture API, I was worried that things would start to get messy. While there are some good things in the proposed GL2, I think incremental changes to the current GL model are the right approach. I do hope they keep most of the high-level shading language. In particular, I hope the driver/compiler will automatically partition complex shaders into multiple passes.

>>>I’m just happy that there is finally going to be a game that will benefit from my GF3<<<

It’s interesting how many code paths he has created. I think he has 7 paths. How many companies go to that much effort?

>>>If they have real support for 5.1 sound (unlike all the “EAX” games out there) then I’ll be as happy as a pig in …<<<

That’s the Dolby Digital thing? I know next to nothing about sound technology, I have to admit.

Originally posted by V-man:
It’s interesting how many code paths he has created. I think he has 7 paths. How many companies go to that much effort?

Yeah... even disregarding the great 3D engines he’s created, the fact that he doesn’t just code to the lowest common denominator is quite deserving of respect.

The best part is that his “respect” translates to redefining the “lowest common denominator” for those of us with less exposure. I remember when I was working on an OpenGL project, one of the most effective optimizations was to do rendering the way Quake 3 did it in order to hit that nicely optimized, nicely validated driver path. He definitely cuts big paths for us to follow.

-Won

I remember when I was working on an OpenGL project, one of the most effective optimizations was to do rendering the way Quake 3 did it in order to hit that nicely optimized, nicely validated driver path.

That’s not a good thing. It is, instead, a result of driver makers optimizing the Quake III path solely so they can get higher benchmarks. If you want to do something beyond what Q3 does (programs, VAR/VAO, etc.) then you have to take the less-optimized path. This is not the way drivers should be written.

Tell that to ATI…

Originally posted by knackered:
Tell that to ATI…

You’d have to go back in time more than a year. No need to tell them anymore...

Tell that to ATI…

Don’t just blame ATi for this. nVidia did it long before them.

Once upon a time, the only vertex format that nVidia’s CVAs were useful for was the Q3 vertex format.
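For anyone who never used them, the CVA path was just plain vertex arrays bracketed by lock/unlock calls. A rough sketch (the entry points are assumed to be fetched at run time, e.g. via wglGetProcAddress, and the layout here is illustrative, not the actual Q3 format):

#include <GL/gl.h>
#include <GL/glext.h>

/* Loaded elsewhere at startup via the platform's GetProcAddress call. */
static PFNGLLOCKARRAYSEXTPROC   pglLockArraysEXT;
static PFNGLUNLOCKARRAYSEXTPROC pglUnlockArraysEXT;

/* EXT_compiled_vertex_array: lock the arrays so the driver can cache or
   pre-transform them once, draw, then unlock. */
void draw_locked(const float *xyz, const float *st,
                 const unsigned char *rgba,
                 const unsigned short *indices,
                 int num_verts, int num_indices)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, xyz);
    glTexCoordPointer(2, GL_FLOAT, 0, st);
    glColorPointer(4, GL_UNSIGNED_BYTE, 0, rgba);

    pglLockArraysEXT(0, num_verts);      /* promise: data won't change */
    glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_SHORT, indices);
    pglUnlockArraysEXT();
}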

Originally posted by V-man:
It’s interesting how many code paths he has created. I think he has 7 paths. How many companies go to that much effort?

Or rather, name one other company that does that?


That’s the Dolby Digital thing? I know next to nothing about sound technology, I have to admit.

Yeah. There are a few different flavours. For example, there’s plain 5.1 (5 speakers, 1 subwoofer). Then there’s EAX (EAX 2, or whatever Creative is up to now), which adds environmental effects such as echoes. And of course now there’s 6.1 (Audigy 2).

I just upgraded to an A7N8X, which has nForce audio, and I’ve noticed that it has these effects built in (although I actually use my SB Live! Platinum; it’s interesting to see the similarities between the SB product and the nForce).

Oddly enough I finally got to look at Doom3 last night and didn’t pay much attention to the sound at all. What I did notice was that I haven’t been that scared playing a game in a dark room since playing AvP quite some time ago.

If you can produce a product that effective with the optimized Quake 3 pipeline, why wouldn’t you use it? (Incidentally, I was quite happy with the alpha’s performance at the default settings on my GF3.)

Korval –

At a time when most OpenGL implementations were highly suspect, having a reasonable “guarantee” that there even existed a reliable driver path was a very big deal. You certainly remember the days of GLSetup, and why it was necessary. Quake3 provided a well-enforced minimum standard for OpenGL compliance that other developers could take advantage of. That’s what I was trying to say.

Quake3 isn’t as relevant now as it was when I was working on that OpenGL project (no, we didn’t name the binary “quake3.exe” as an ATI “optimization”). It is not really state of the art anymore, and OpenGL compliance isn’t as big of an issue (at least for the basic, non-extension stuff). Now our standards are higher, as they should be. Example: accelerated vertex array types. Initially, CVAs were fastest only for particular formats, but eventually all formats were accelerated.

-Won

Originally posted by V-man:
>>>That’s the Dolby Digital thing? I know next to nothing about sound technology, I have to admit.<<<

Be aware: OT.

Ahh, that is a shame. There are far too few digital signal processing discussions on this board. Anyone with an interest in this field, be sure to check out Dolby Laboratories’ and MPEG’s web sites: http://www.dolby.com/ http://mpeg.telecomitalialab.com/

[This message has been edited by roffe (edited 01-31-2003).]


Come on, the ‘minidriver’ issue (as I’ll label it) goes all the way back to 3Dfx and Voodoo cards. Drivers will ALWAYS be written to be best on popular code paths. This isn’t limited to PC cards; it’s true even of high-end systems like InfiniteReality. Driver engineering effort is a limited resource that SHOULD be spent wisely. Only a very foolish company, or perhaps an academic research group, would do otherwise.

Carmack has done more to aid OpenGL on PC hardware than anyone. He also does more raw work in terms of algorithmic development and R&D in his game engines to push the envelope than most, and he has been more consistently open with his code and advice than almost anyone else doing real business in this industry. It’s just stunning that someone would suggest he doesn’t like doing work himself. Frankly, it’s funny.

Methinks someone has a chip on their shoulder.

[This message has been edited by dorbie (edited 01-31-2003).]

Originally posted by dorbie:
Maybe he’s talking about the result of successive accumulation of relatively dark contributions; that would start to suffer. Only a high-precision framebuffer can solve it.

No. Haven’t you read this? http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/008447.html
The technique in that thread solves exactly that, and it works with a normal low-precision framebuffer. If you have sufficient internal precision to apply exposure in a fragment program, it should look perfect. Accumulating light doesn’t introduce any precision problems here.
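Just to illustrate the per-fragment exposure idea (and this is only a guess at the mapping; see the linked thread for the actual technique): the framebuffer only ever stores the already tone-mapped value, so its low precision doesn’t matter.

#include <math.h>

/* One common exposure mapping, evaluated per fragment: smoothly maps
   accumulated light in [0, inf) into [0, 1) instead of clamping. The
   exact function used in that thread may differ. */
float apply_exposure(float accumulated_light, float exposure)
{
    return 1.0f - expf(-exposure * accumulated_light);
}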

-Ilkka