BSP vs Portal systems

Originally posted by LostInTheWoods:
Dot3 only works on NVidia cards

Who told you that? http://www.delphi3d.net/hardware/extsupport.php?extension=GL_ARB_texture_env_dot3

Maybe I got dot3 and register combiners mixed up. Sorry, I'm new to the bump mapping ideas.

Mickey,

Portals cull out most things that you’re not going to see, except for possibly walls in rooms that you only render partly. Most importantly, they cull out meshes that don’t intersect the restricted viewing frustums, so you don’t have to spend time working on meshes that won’t be displayed. They’re sloppier than BSP trees in many implementations, but not THAT much worse.

Another nice thing with portals is that they naturally render from near to far, which means that modern hierarchic Z acceleration will make the card take only maybe a thousand clock cycles to (not) render an entire mesh that’s behind a wall, even if that mesh would fill large parts of the screen and had a 50 instruction fragment program on it.
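A minimal sketch of the recursive portal walk being described, assuming hypothetical Sector, Portal, Mesh and Frustum types (none of these names come from the thread). Restricting the frustum at each portal is what culls the hidden meshes, and the recursion is also why the traversal naturally proceeds from near to far:

    // Rough sketch only: the member names and Frustum operations are illustrative.
    void RenderSector(const Sector* sector, const Frustum& frustum)
    {
        // Draw only the meshes that intersect the current restricted frustum.
        for (size_t i = 0; i < sector->meshes.size(); ++i)
            if (frustum.Intersects(sector->meshes[i]->bounds))
                sector->meshes[i]->Draw();

        // Recurse through each portal that is still visible, with a frustum
        // shrunk to that portal's polygon. Real engines also guard against
        // revisiting a sector through a cycle of portals.
        for (size_t i = 0; i < sector->portals.size(); ++i)
        {
            const Portal* portal = sector->portals[i];
            Frustum restricted = frustum.ClipToPortal(portal->polygon);
            if (!restricted.IsEmpty())
                RenderSector(portal->neighbour, restricted);
        }
    }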

Lost,

Putting all textures in a single texture sheet is popular on consoles with very limited texture memory and expensive texture paging. However, your artists will hate you, because they're used to smacking down two triangles and then tiling a brick wall texture over them with a repeat of four or five. With texture sheets, you can't use tiling.
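A small sketch of the tiling the artists rely on, in fixed-function GL with brickTex as a placeholder texture handle; the point is that GL_REPEAT wraps the whole texture, so once the brick lives in a sub-rectangle of a sheet there is no coordinate range that repeats just that brick:

    // The artist's two-triangle wall: one brick texture repeated four times
    // across the quad via GL_REPEAT and texture coordinates running 0..4.
    glBindTexture(GL_TEXTURE_2D, brickTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f, 0.0f, 0.0f);
    glTexCoord2f(4.0f, 0.0f); glVertex3f(8.0f, 0.0f, 0.0f);
    glTexCoord2f(4.0f, 4.0f); glVertex3f(8.0f, 8.0f, 0.0f);
    glTexCoord2f(0.0f, 4.0f); glVertex3f(0.0f, 8.0f, 0.0f);
    glEnd();
    // If the brick occupies, say, [0.25, 0.50] of a texture sheet, wrapping
    // still applies to the whole sheet, so coordinates outside that range
    // sample the neighbouring textures instead of tiling the brick.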

[This message has been edited by jwatte (edited 12-04-2002).]

Originally posted by jwatte:
Another nice thing with portals is that they naturally render from near to far, which means that modern hierarchic Z acceleration will make the card take only maybe a thousand clock cycles to (not) render an entire mesh that’s behind a wall, even if that mesh would fill large parts of the screen and had a 50 instruction fragment program on it.

You can have the same result using a BSP, even more accurately. In a portal system, you can have some walls behind others in the same leaf, which means some overdraw depending on where the camera is. In a BSP, you'll always have perfect order.

jwatte,

So your proposal is to send all PVS-determined faces to the rendering API, setting the proper frustum in hardware for each “room”. I don't think that can be as fast as a BSP in general.

Some time ago, when I was implementing and testing BSP + portals working together, I got quite a big speedup (>50%) when I made an individual BSP for each sector (a cluster in Quake). I could spend much less time organizing buffers for faces, copying vertex coordinates, texture coordinates, etc.

However, good BSPs are not easy to build. In order to make recursive culling fast, leaves should contain reasonably many faces, so you are still culling groups of faces rather than each one individually.

BTW, in my upcoming engine I'm planning to use slightly modified “loose” BSPs that don't need to cut faces, resulting in as few polys as possible. Previously, building the BSP left some of my weirder maps with as many as 3-5 times the polys they had before.

[This message has been edited by MickeyMouse (edited 12-05-2002).]

Originally posted by LostInTheWoods:
Ok, yeah, I get that now. I was hoping to make it even faster, but I have found the error in that thought. Doh… I will use the best of both worlds: I am going to use my portal system for the rendering code, and the BSP for the collision detection. BUT, I'm going to build a BSP for each zone (portal) I'm in. This way I can cut even MORE checks out.

Axis-aligned BSP is actually a really good idea. I believe Quake 1 and 2 use some form of it. That could be worded better. There is a flag on each splitting plane that stores which axis the plane is aligned to, as well as whether it is aligned at all. That way, if it is aligned, the check is simple scalar math; if it's misaligned, then bust out the DP instructions. It's a simple check, and for a game like Quake there are enough axis-aligned walls and such to make the simple “bitwise and” worth it.
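A minimal sketch of that flag trick, loosely in the spirit of Quake's plane structure (the field names here are illustrative, not the actual idTech ones):

    // A type/flag on each splitting plane says "aligned to X/Y/Z"; the
    // point-vs-plane test then becomes a single subtract instead of a dot product.
    struct Plane
    {
        float normal[3];
        float dist;
        int   type;      // 0,1,2 = axial X,Y,Z; 3 = arbitrary normal
    };

    float PointPlaneDistance(const float point[3], const Plane& p)
    {
        if (p.type < 3)                        // axis-aligned: simple scalar math
            return point[p.type] - p.dist;
        return point[0] * p.normal[0] +        // general case: full dot product
               point[1] * p.normal[1] +
               point[2] * p.normal[2] - p.dist;
    }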

Devulon

Originally posted by Devulon:
I believe quake 1 and 2 use some form of it.

All Quakes use arbitrary-angle planes for splitting, but axis-aligned ones are preferred. In that case, you were right: an additional bit indicating axis-alignment is set.

I've seen some wireframe models of a couple of Quake 3 maps. They don't seem to be using 2 polys per wall with a repeated texture. They seem to be breaking them down like I would, into many polys per wall. Or am I mistaken?? I think it would work out a lot better that way anyway, since textures might have a hard time lining up with each other at the points where walls meet, or being squished to fit.

EDIT: Also, wouldn't creating 2 LARGE polys for one wall make it near impossible to create light maps? I mean, think of all the lighting changes that could happen on one wall. You would have to create a separate map image for EACH wall. That would start getting insane. Or am I mistaken about this too?

[This message has been edited by LostInTheWoods (edited 12-05-2002).]

DEAR GOD, I'M AS DUMB AS A BOX OF ROCKS. I assumed (makes an ass out of u and me, but mostly me) that light maps are just a multitexture overlay over the original polys in a scene. BUT if I simply create another poly square that holds just the light map and apply it with some blending to the scene, I can use 2 polys for an entire wall, then a couple more polys laid on top to give the effect of lighting.

Sound right???

If your lightmap polygons do not have the exact same vertices as your texture polygons, you will have some tricky depth buffer issues to handle. If you don't use any polygon offset, you will get z-fighting between the lightmap and the texture.
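A hedged sketch of the polygon-offset route for a separate lightmap pass (DrawLightmapPolys is a stand-in for whatever draws your lightmap quads):

    // Second pass: pull the lightmap polys slightly toward the viewer so they
    // don't z-fight with the wall geometry underneath.
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);   // multiply the lightmap into the framebuffer
    glDepthMask(GL_FALSE);                // don't write depth from the overlay
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(-1.0f, -1.0f);        // negative offset = nearer depth values
    DrawLightmapPolys();                  // stand-in for your lightmap draw call
    glDisable(GL_POLYGON_OFFSET_FILL);
    glDepthMask(GL_TRUE);
    glDisable(GL_BLEND);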

Isn't there a way to draw both polys on one plane, but have them blend? Like if I set the light map to a translucent value?? Or does this have to be done another way?

You can draw them on the same plane and use a multiplicative blend. However, they are not guaranteed to have the same depth values at each pixel unless they use the same vertices.
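A hedged sketch of that same-vertices version: draw the base pass normally, then the lightmap pass with an equal depth test and a multiplicative blend (the two Draw* calls are stand-ins):

    // Pass 1: base textures, depth written as usual.
    glDepthFunc(GL_LEQUAL);
    DrawBaseTexturePolys();               // stand-in

    // Pass 2: the exact same vertices, modulated on top.
    glDepthFunc(GL_EQUAL);                // only pixels at the identical depth pass
    glDepthMask(GL_FALSE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_DST_COLOR, GL_ZERO);   // framebuffer = lightmap * base colour
    DrawLightmapPolys();                  // stand-in
    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LEQUAL);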

I realize that BSP will give you “perfect” Z ordering. However, I maintain that modeling appropriate levels of detail in BSPs these days is a waste of CPU resources and space. Portals are fast enough, and have no weird interactions with complexity of geometry. If you have other things to spend your CPU and memory on, I’d suggest just using portals, or possibly portals plus a very coarse BSP that’s NOT tied to your actual geometry.

As far as lightmaps go, you don't have to use the same UV for lightmaps as for regular textures. Thus, you have two sets of texture coordinates: one for textures (which tile) and one for light maps (which don't). They are still drawn as a single poly using the same verts; that's what multitexturing is all about.
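A minimal single-pass sketch with GL_ARB_multitexture (brickTex and lightmapTex are placeholder handles): unit 0 carries the tiling base texture, unit 1 the non-tiling lightmap, and both sets of coordinates ride on the same vertices:

    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, brickTex);    // tiles, so its UVs may run past 1.0

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightmapTex); // unique 0..1 UVs per surface
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glBegin(GL_QUADS);
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
    glVertex3f(0.0f, 0.0f, 0.0f);

    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 4.0f, 0.0f); // base texture repeats 4x
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f); // lightmap does not tile
    glVertex3f(8.0f, 0.0f, 0.0f);

    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 4.0f, 4.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 1.0f);
    glVertex3f(8.0f, 8.0f, 0.0f);

    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 4.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 1.0f);
    glVertex3f(0.0f, 8.0f, 0.0f);
    glEnd();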