best space partitioning in this case...

I have a map structure which divides the geometry into portals and generates an independent BSP for collision purposes only.
I have been thinking about what would be the best way to partition the sectors into smaller pieces. I think the best approach nowadays would be to just make tri-strips out of the sector geometry and render them in big chunks using VAR/CVA.
However, when rendering per-pixel lighting with shadow volumes, I will need to get only the polygons within a certain radius (so I wouldn't have to render the whole sector if a light is so weak that it only lights up a few polygons). I think it might be beneficial to render only the shadow volumes that are within the radius of an emitting light.
Now, the first thing I was considering is to partition each sector into an octree, so that each node would store, let's say, 10 polygons. Then I would still have to group these polygons by the texture they use.
This might be faster for rendering the shadows, but on modern HW wouldn't this be quite slow in terms of actual polygon processing? I couldn't get the advantage of tri-strips with only 10 polys per node.
I would be interested to hear any ideas on how to do the partitioning in the most efficient way.


How about an octree with BIG chunks? That's what I've been thinking about for my little engine. More exactly, arrange your tree so that nodes at a certain level can be considered individual objects which can be efficiently drawn with VAR/VAO. That mostly means that they don't share vertices with other nodes. Let these nodes be quite big, a couple of hundred polygons at least. For drawing purposes you consider this the smallest unit in your tree. You should, however, let the tree go deeper for collision detection and such; those lower nodes just index the faces in the upper draw node.
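A minimal sketch of that two-level idea (all names are mine, and the sphere test stands in for whatever query you run, e.g. a light's attenuation radius): nodes down to a certain depth own a renderable chunk, deeper nodes only index faces inside the enclosing chunk, and rendering stops recursing at the chunk level so whole chunks go to the GPU in one call.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical node layout: chunkId >= 0 marks a "draw unit" node owning a
// few hundred polygons; deeper nodes just index faces in the owning chunk.
struct OctreeNode {
    float centre[3];
    float halfSize;
    int chunkId = -1;                  // >= 0: this node is a draw unit
    std::vector<std::size_t> faces;    // deeper nodes: indices into the
                                       // owning chunk's face list
    std::vector<OctreeNode> children;
};

// Does a sphere (e.g. a light's attenuation radius) touch this node's box?
bool sphereTouchesNode(const OctreeNode& n, const float c[3], float r)
{
    float d2 = 0.0f;
    for (int i = 0; i < 3; ++i) {
        float lo = n.centre[i] - n.halfSize;
        float hi = n.centre[i] + n.halfSize;
        float v  = c[i] < lo ? lo : (c[i] > hi ? hi : c[i]);
        d2 += (c[i] - v) * (c[i] - v);
    }
    return d2 <= r * r;
}

// Collect the chunk ids a light of radius r can touch; recursion stops at
// draw nodes, so each hit is one big VAR/VAO draw call.
void gatherLitChunks(const OctreeNode& n, const float c[3], float r,
                     std::vector<int>& out)
{
    if (!sphereTouchesNode(n, c, r)) return;
    if (n.chunkId >= 0) { out.push_back(n.chunkId); return; }
    for (const OctreeNode& ch : n.children)
        gatherLitChunks(ch, c, r, out);
}
```

For shadow-volume extraction you would keep descending past the chunk level and use the `faces` lists instead of stopping at `chunkId`.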


That's something to consider. Though my maps will not have that many polygons, so 300 polys in a single node (leaf) would be a lot. I could try to render a large node as one big chunk if it's inside the radius of a light, then go deeper into that node to find the exact polygons in range when rendering the shadow volumes. After all, the shadows are the thing that is going to kill my framerate, and distance-attenuated lights are a must.

Still, I would be grateful if anyone could offer any other suggestions, so I could consider different angles.

For each light, generate a bounding rectangle in screen space for how far you think the light should have a noticeable impact, then scissor to that rect. That should save a pretty penny in fill rate.
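A minimal sketch of such a rect, assuming a simple symmetric perspective camera with the light already in view space (the function name and the conservative centre-plus-projected-radius approximation are mine; a mathematically tight sphere projection is more involved):

```cpp
#include <algorithm>
#include <cmath>

struct Rect { int x, y, w, h; };

// Conservative screen-space bounding rect for a point light's sphere of
// influence. lightX/Y/Z are view-space coordinates (Z > 0, in front of the
// camera); focalPx converts view-space units at depth 1 into pixels.
Rect lightScissorRect(float lightX, float lightY, float lightZ,
                      float radius, float focalPx,
                      int screenW, int screenH)
{
    // Project the sphere centre, then inflate by the radius projected at
    // the sphere's nearest depth. This over-estimates a little, which is
    // fine: the rect only has to contain everything the light can touch.
    float cx  = screenW * 0.5f + lightX / lightZ * focalPx;
    float cy  = screenH * 0.5f + lightY / lightZ * focalPx;
    float rPx = radius / std::max(lightZ - radius, 0.001f) * focalPx;

    int x0 = std::max(0, (int)std::floor(cx - rPx));
    int y0 = std::max(0, (int)std::floor(cy - rPx));
    int x1 = std::min(screenW, (int)std::ceil(cx + rPx));
    int y1 = std::min(screenH, (int)std::ceil(cy + rPx));
    return { x0, y0, std::max(0, x1 - x0), std::max(0, y1 - y0) };
}
```

You would then feed the result to `glScissor(r.x, r.y, r.w, r.h)` with `GL_SCISSOR_TEST` enabled while drawing that light's lit pass and its shadow volumes.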

Also, you could use a crude octree within each portal zone, if the portal zone is “large”. That'd give you additional geometry savings in many cases.

If I already render only the polygons that are within the attenuation range of a light, what would the scissor test help with? I mean, wouldn't that already avoid the unneeded fill-rate consumption?

I'm just trying to find a way to always send as big a chunk as possible to the GPU as tri-strips, no matter how small the area is. With a normal octree, I would still have to send those small nodes one by one to the GPU, even when one big node is entirely visible (if the polygons are only in the leaves). To send all the polys in a large node at once, I would have to store that information in every node, and that would take memory. I just wonder whether it would be too much?
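One common trick that keeps the per-node cost tiny (a sketch, with names of my own choosing): write the leaf triangles out in depth-first tree order, so every node, not just the leaves, covers one contiguous range of the shared index list. The "information in every node" then shrinks to an offset and a count.

```cpp
#include <cstddef>
#include <vector>

// A node only stores a range into the sector's shared triangle/index
// array -- two integers, so the memory overhead is negligible.
struct Node {
    std::size_t first = 0, count = 0;  // range into the shared array
    std::vector<Node> children;        // empty for a leaf
};

// Assign ranges depth-first. Leaves have 'count' set when the tree is
// built; 'cursor' walks the global triangle array as leaves are visited,
// and each parent ends up spanning exactly its children's triangles.
void assignRanges(Node& n, std::size_t& cursor)
{
    n.first = cursor;
    if (n.children.empty()) {
        cursor += n.count;             // leaf: owns its own triangles
    } else {
        for (Node& c : n.children)
            assignRanges(c, cursor);
        n.count = cursor - n.first;    // parent spans all children
    }
}
```

With that layout, drawing "one big node at a time" is a single ranged draw call over `[first, first + count)`, at any level of the tree.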

At that point, it's so dependent on your geometry that all you can do is implement and benchmark.

If you err on the side of sending more geometry than strictly necessary (i.e., your leaves are fairly big and chunky), then you can certainly save fill rate with scissoring.

Also, if your geometry contains large, untessellated triangles, the same thing happens; even if only a corner of a 20-meter triangle is touched by the light, the entire triangle needs to be rendered. If you're doing vertex lighting, that triangle will look bad, btw :)

Besides, the scissor test will also cut your shadow volume triangles, which are a big fill hit too. It’s an easy optimization, I suggest you use it no matter what you do with your geometry.


“Besides, the scissor test will also cut your shadow volume triangles, which are a big fill hit too. It’s an easy optimization, I suggest you use it no matter what you do with your geometry.”

I see the point. BTW, I haven't had time to get far into per-pixel lighting yet, but which one causes the bigger performance hit: the fill rate when rendering the lit geometry, or the per-polygon calculations that are required?