Doom3

You didn’t bother to take any time to understand what I was saying so I give up on trying to explain any further.

Let the hardware do HSR for you.
I don’t mean clipping or culling (that’s still our job), but zero overdraw is up to the hardware, not us.
Many cards are going to do that for us, so try to optimize for something other than zero overdraw.

PowerVR cards already give you zero overdraw, and even sort the polys in the right order to take care of correct transparency effects.

I know that next-gen 3D cards will take care of this for us…
While we are waiting for them to become standard equipment, we should pay a bit of attention to overdraw.

(Don’t be extreme; sending everything to the card with massive overdraw (more than 10 per pixel) will not help you…)
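To make “overdraw” concrete, here is a small sketch of my own (not from anyone in this thread) that measures an average overdraw factor: fragments written divided by screen pixels. A value well above 1 means the card is shading lots of pixels that end up overwritten.

```python
# Toy overdraw measurement (illustrative sketch, not engine code):
# count how many fragment writes land on each pixel, then divide
# total fragments by total pixels to get the average overdraw factor.

def overdraw_factor(width, height, fragments):
    """fragments: iterable of (x, y) pixel writes issued in one frame."""
    counts = {}
    for x, y in fragments:
        counts[(x, y)] = counts.get((x, y), 0) + 1
    return sum(counts.values()) / float(width * height)
```

With a factor near 1.0 the hardware is barely wasting fill rate; at 10+ you are in the “don’t be extreme” territory described above.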

Originally posted by skw|d:
You didn’t bother to take any time to understand what I was saying so I give up on trying to explain any further.

I understand perfectly what you’re saying, but understanding what you say doesn’t necessarily mean that I have to agree with you!
And I can point a finger straight back at you and say you don’t try to understand my point.

Maybe one could say that software culling moves up to higher and higher levels:

clip triangles = cull pixels
cull triangles
cull displaylists (bunch of triangles)
cull even bigger displaylists
The card is doing everything for us

Arne
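Arne’s ladder of culling levels can be sketched like this (a toy 2D illustration of mine, not anyone’s engine code): rejecting one big node’s bounding box culls its whole subtree of “display lists” and triangles at once.

```python
# Hierarchical culling sketch: each node bounds everything below it,
# so a single box rejection skips an entire subtree of geometry.

def intersects(a, b):
    """Axis-aligned 2D boxes given as (minx, miny, maxx, maxy)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

class Node:
    def __init__(self, box, triangles=(), children=()):
        self.box = box                    # bounds of this node and its children
        self.triangles = list(triangles)  # "display list" stored at this node
        self.children = list(children)

def visible_triangles(node, view):
    if not intersects(node.box, view):
        return []                         # whole subtree culled in one test
    tris = list(node.triangles)
    for child in node.children:
        tris += visible_triangles(child, view)
    return tris
```

The same idea scales from “cull triangles” up to “cull even bigger display lists”: the higher the rejection happens in the tree, the more work is skipped.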

Yup, sure seems that way…
Unfortunately the current PC architecture makes it hard for graphics cards to do more and more work for us, since bandwidth is becoming a bigger and bigger problem…
The best solution would be some sort of shared memory architecture, a bit like in the Xbox (although I’m not sure if it is, or what would be the best implementation of shared memory architectures; that’s not really my expertise).

An added bonus of a shared memory architecture is that you’d basically be able to retrieve data from OpenGL with almost no performance penalty.

It’s always nice to find a board with quality threads like this one.

Hey people! Are you sure this is true and not just rumors? I think it sounds a bit “magic”.

People, don’t you think you need more than 20 seconds to compile a 100’000’000-poly map? And doing this at load time? Carmack, I don’t trust you! Or it will be HELL slow…

Or, MUST WE DO THE VIS PROCESSING WITH OUR HEADS!!! With brush visibility at map-editing time???

Maybe some people are spending too much time in front of their PCs and should look outside to see what they’re missing…

You can compute a quad-, oct- or kd-tree at runtime; it doesn’t take that long, although it’s not the fastest way to optimize your 3D VSD.
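As a rough illustration of computing such a tree at load time (my toy sketch, not anyone’s engine code), here is a minimal quadtree build over 2D points; an octree or kd-tree follows the same recursive split-and-recurse pattern.

```python
# Toy quadtree build: recursively split a region into four quadrants
# until each leaf holds few enough points (or a depth limit is hit).

def build_quadtree(points, box, max_points=4, depth=0, max_depth=8):
    if len(points) <= max_points or depth == max_depth:
        return {"box": box, "points": points, "children": None}  # leaf
    minx, miny, maxx, maxy = box
    cx, cy = (minx + maxx) / 2.0, (miny + maxy) / 2.0
    quads = [(minx, miny, cx, cy), (cx, miny, maxx, cy),
             (minx, cy, cx, maxy), (cx, cy, maxx, maxy)]
    children = []
    for qb in quads:
        qpts = [p for p in points
                if qb[0] <= p[0] < qb[2] and qb[1] <= p[1] < qb[3]]
        children.append(build_quadtree(qpts, qb, max_points,
                                       depth + 1, max_depth))
    return {"box": box, "points": [], "children": children}  # interior node
```

For a few hundred thousand primitives this kind of build is easily done in seconds at load time, which is why runtime construction is feasible even if an offline tool could do a smarter job.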

About the PC architecture: AMD is moving to the Alpha architecture while keeping x86 compatibility, and as you know the Alpha architecture is much better.

About 3D cards and Bandwidth:
I recommend that you go and read http://www.beyond3d.com and check for the PowerVR explanation, see http://kyro.st.com and http://www.powervr.com

Those cards save a LOT of bandwidth and show hardware manufacturers the way to go.
I know that many big firms are following them in one form or another (not tiling and deferred rendering, but a full color buffer in on-chip memory…)

Unified video board memory is something nice; shared memory is not possible with the current data buses in our PCs.
(Damn too slow)

Originally posted by The Scytheman:
It’s always nice to find a board with quality threads like this one.

Hahaha, you know, I can’t tell if you’re being sarcastic or if you really mean that.
I’ll presume the latter ;o)

Originally posted by Crusader:
Hey people! Are you sure this is true and not just rumors? I think it sounds a bit “magic”.

Yes, I’m sure it’s true, because I was at QuakeCon when Carmack gave his speech.
And I really don’t think Carmack would have lied about all this; that would be so out of character.

Originally posted by Crusader:
People, don’t you think you need more than 20 seconds to compile a 100’000’000-poly map? And doing this at load time? Carmack, I don’t trust you! Or it will be HELL slow…

Well, I seriously doubt doom3 maps will have 100 million polygons.
But still, yes, it is possible…
You have to realize that doom3 will be built using different VSD techniques.
It won’t have VIS like in q3a (maybe something similar, but it’ll be different), and it won’t have to do any lighting precalculations, since all lighting will be done dynamically using the stencil buffer…
It’s difficult, but not impossible.

Originally posted by Crusader:
Or, MUST WE DO THE VIS PROCESSING WITH OUR HEADS!!! With brush visibility at map-editing time???

Yes, partly.
Which isn’t a big deal; anyone who’s ever made a map for q2 or q3a knows that you have to use a lot of clip/skip etc. brushes…
That’s basically “processing with our heads”…
I also suspect that some vis precalculations will be done while editing the map… in such a way that you don’t actually notice it.

Originally posted by Crusader:
Maybe some people are spending too much time in front of their PCs and should look outside to see what they’re missing…

Maybe, but that’s a completely different topic ;o)

Originally posted by Ingenu:
About 3D cards and Bandwidth:
I recommend that you go and read
<snip>

That’s interesting, cool.

Originally posted by Ingenu:
Unified video board memory is something nice; shared memory is not possible with the current data buses in our PCs.
(Damn too slow)

True, but the Xbox uses a different type of bus. I don’t know the specifics, but I do know that the Xbox has a shared memory architecture (so does the PS2, btw).

Hi!
You’re always talking about doom3’s lighting system using the stencil buffer… Can someone give me a hint how the stencil buffer is used for lighting, or a good link?

Thanx in advance, XBTC!

Stencil is used for shadowing. Carmack said he wanted to do all the lighting with Dot Product bump mapping.
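For anyone wondering what “Dot Product bump mapping” boils down to, here is a hedged CPU sketch of my own (real hardware does this per pixel in the texture combiners) of the core Lambertian term: intensity = max(0, N·L), where N would come from a normal map and L points from the surface toward the light.

```python
# Per-pixel "dot product" (Lambertian diffuse) lighting sketch.
# N is the surface normal (e.g. fetched from a normal map),
# L is the direction from the surface point to the light.

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light):
    """Diffuse intensity in [0, 1] for one pixel: max(0, N . L)."""
    return max(0.0, dot3(normalize(normal), normalize(to_light)))
```

The clamp to zero is what keeps surfaces facing away from the light black; the stencil shadows then mask out lit pixels that a shadow volume covers.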

There’s plenty of documentation on the topic at http://www.nvidia.com/developer.

I also have some articles on it at my site: http://www.ronfrazier.net/apparition/research

To give you a quick rundown of the three common shadowing techniques:

1) Shadow Volumes: uses the stencil buffer to determine which objects are in the shadow of another object. Probably the best choice for current hardware, as it is relatively fast, looks good, and is widely supported.

2) Depth Shadow Maps: uses a dynamically created texture and the stencil buffer to determine if any object sits between each pixel and the light source. Pretty slow and typically not as good looking on current hardware. As hardware becomes faster and memory increases, this technique should eventually become the best of the three. I have actually heard that this is the technique used by advanced animation studios (like Pixar). It works for them because they have insanely fast hardware and don’t have the real-time requirement most developers typically do.

3) Index Shadow Maps: Very similar to depth mapping, but uses a polygon or object index instead of depth to make comparisons. I don’t have much to say about this one. I don’t think it has a lot of usefulness (I’m thinking of the lyrics to the song “War”). It has the same problems as depth mapping, and then some.
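To illustrate technique 1, here is a toy CPU simulation of mine (not real stencil hardware) of depth-pass shadow-volume counting along a single view ray: each shadow-volume front face in front of the visible surface increments the stencil, each back face decrements it, and a nonzero count means the pixel is inside at least one volume, i.e. shadowed.

```python
# Depth-pass stencil shadow counting for one pixel's view ray.
# Each shadow volume is reduced to the (near, far) depths at which
# the ray enters and leaves it; pixel_depth is the visible surface.

def in_shadow(pixel_depth, volumes):
    stencil = 0
    for near, far in volumes:
        if near < pixel_depth:   # front face passes the depth test: +1
            stencil += 1
        if far < pixel_depth:    # back face passes the depth test: -1
            stencil -= 1
    return stencil != 0          # nonzero => surface lies inside a volume
```

The real algorithm does exactly this bookkeeping per pixel in the stencil buffer while rasterizing the volume geometry; the light pass is then masked to pixels where the stencil ended up zero.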

Thanx a lot, guys!
I think I’ll look into that stuff (Carmack said that he started the doom3 renderer on top of the Q3 renderer; perhaps I can do the same with my Q3 viewer).

P.S.: Cool site, FireStorm… Lots of interesting stuff.

Greets, XBTC!

Originally posted by Gorg:
Stencil is used for shadowing. Carmack said he wanted to do all the lighting with Dot Product bump mapping.

Yes, you’re right; I meant that, but I should have been more clear.

Originally posted by XBCT:
Thanx a lot, guys!
I think I’ll look into that stuff (Carmack said that he started the doom3 renderer on top of the Q3 renderer; perhaps I can do the same with my Q3 viewer).

Well, I actually think he rewrote most of the rendering part of the engine…

Originally posted by XBCT:
P.S.: Cool site, FireStorm… Lots of interesting stuff.

Now I’m confused… which site are you referring to?

quote:
Originally posted by XBCT:
P.S.: Cool site, FireStorm… Lots of interesting stuff.
Now I’m confused… which site are you referring to?

Oops, sorry, I meant the site of Kronos…

Greets, XBTC!

People, specially Firestorm,

I don’t want to contradict anyone, but do you really think that we will be able to reach Freedom with editing???

This is the most important thing in life and it is never reached, and video games are made to do this in some cases, so what’s the point if you have an engine with no freedom!

Artists must NEVER have to handle these troubles; that is what programmers are, and will ALWAYS be, here for nowadays (?) Surely many will think I’m wrong, but if we think wider, all this is nonsense…

Please, I really like to edit maps! And I don’t want to have CARMACK’s new norms of designing!!! Please don’t support this or it is going to be like Microsoft! One leader.

I hope I made my self clear, if not, excuse me.

There are many 3D engines out there; you don’t have to use Mr. Carmack’s if you dislike it.

I don’t see your point here.

You support only what you want to.

Originally posted by Crusader:
[b]People, specially Firestorm,

I don’t want to contradict anyone, but do you really think that we will be able to reach Freedom with editing???[/b]

I know what you mean, and deep down inside I feel the same way.
The problem is, however, that programming requires a lot of energy and time… even for things that seem to be absolutely trivial…
And when you only have one year to build your engine and game, you simply do not have time to add all the features you’d like.
Sure, you could hire more programmers, but that’ll make the project more chaotic, harder to organize (trust me, having more than 7 programmers is a mistake) and more expensive (not all software houses make millions with each game they make; most can barely stay alive).

Yes, I know the tools that are given to the artists to make their stuff with leave a lot to be desired, but it’s simply not realistic to create the super tools we’d all like to have…

Of course, you could improve your tools with every new game you use your engine with…
But that would make it nearly impossible for you to develop drastically new technology… so you’d be kinda stuck with your old stuff…
And the game would suffer because of that…

So the only solution is to make life a little harder for the artists, while being able to generate an awesome game.

Of course, another solution might be to use third-party tools like 3dsmax etc.
That way you’d have good tools with lots of documentation and flexibility, and you’d probably only need to write a plugin.
Although sometimes the extra functionality actually gets in the way (artists using functionality that isn’t supported).
And most people wouldn’t be able to afford 3dsmax, so you can forget about amateurs making mods etc.

But thankfully they’re going to release a ‘free’ version of 3dsmax soon…
But it’ll cost money to release plugins for it…

If you want super tools, you don’t have to write them from scratch. Use Maya, or maybe something cheaper like 3dsmax, and write a converter or exporter which exports your level format from the model file.

Some games development companies do this, and they often rave about how productive their artists are because of it. It’s all the people thinking that they can get this level of support for free who amaze me.

Originally posted by bgl:
If you want super tools, you don’t have to write them from scratch. Use Maya, or maybe something cheaper like 3dsmax, and write a converter or exporter which exports your level format from the model file.

Didn’t I just say that in the last message?

Originally posted by bgl:
Some games development companies do this, and they often rave about how productive their artists are because of it. It’s all the people thinking that they can get this level of support for free who amaze me.

Well, it’s in the best interest of the game companies as well, since a game that has a lot of mods for it will sell better and live longer…
And a game that has no (decent) tools will be very hard to make mods for…