# Level of detail idea

It’s probably not an original idea, but I haven’t come across it yet. Just wondering if this makes any sense…

Many engines render objects in full detail. Some adjust the level of detail based on the distance of the object from the camera.

What I want to do is estimate, based on the current window dimensions and FOV, the approximate projected size of an object’s bounding box or sphere, then base the level of detail on that size.

The idea is that if the on-screen space of an object is smaller than a certain amount, I could render it as a single pixel or simply not render it at all.

A slightly larger object might be rendered as a blob of points or a small sphere (or maybe I’d just render the actual bounding box/sphere).

After that I’d just vary the amount of tessellation used for objects. Some complex objects might even maintain multiple geometry lists and use a different one based on the LOD.
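To make the switching concrete, here’s a rough sketch of how an estimated on-screen size could be mapped to a detail level. The pixel thresholds and names are made up for illustration, not from any real engine:

```c
/* Pick a detail level from an object's estimated on-screen size in
   pixels. Threshold values are illustrative guesses, not tuned. */
typedef enum { LOD_CULL, LOD_POINT, LOD_BLOB, LOD_COARSE, LOD_FULL } lod_t;

lod_t pick_lod(double projected_px)
{
    if (projected_px < 1.0)  return LOD_CULL;    /* skip, or draw one pixel */
    if (projected_px < 4.0)  return LOD_POINT;   /* point or tiny blob      */
    if (projected_px < 16.0) return LOD_BLOB;    /* bounding box/sphere     */
    if (projected_px < 64.0) return LOD_COARSE;  /* low-tessellation mesh   */
    return LOD_FULL;                             /* full geometry list      */
}
```

The fly example below then falls out naturally: at a few pixels it hits the `LOD_POINT` case, and only up close does it reach `LOD_FULL`.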

As a silly example, this would allow you to have a fly buzzing around that has a very large mesh. Most of the time it would be rendered as a simple point, but should the viewer happen to get very close and increase the magnification, only then would the full geometry be used.

My point is, in a complex world with many objects in a wide spectrum of sizes, LOD based on distance alone seems insufficient. Doing something like this could make a HUGE difference in the number of triangles sent down the pipe.

Does this concept make any sense? If so, has anyone done this? How would I go about calculating the size of the projected bounding volume?

Toom

Yeah, it’s a good idea, and it’s what many LOD algorithms do already. For one thing, just using distance won’t work if the angle of your view frustum changes: suppose you have a sniper scope or something and you zoom in on distant objects. Your view frustum suddenly gets very narrow. The distance from the viewer to the objects being seen hasn’t changed, but the level of detail at which they should be drawn has.
Another example is subdivision (e.g. ROAM) terrain rendering algorithms. Usually in these cases only the screen-space magnitude of the vertical error in the terrain is considered, so subdivision is coarser when you look at terrain from above (or at least I think it is).
A lot of people talk about LODing as if the switches only happen with distance - and in some cases the implementations reflect this too.
I don’t really know the best way to project your geometry’s bounding box into screen space.

I read an interesting paper on just this issue, but I forget where.

The approach was to calculate, for a mesh, a set of “core” unremovable vertexes (taking them out would distort the shape too much), and then for each other vertex, calculate the maximum visual error that could result from removing it. The vertex with the least impact is chosen and removed, and the mesh re-constituted. Visible error is then recalculated for all remaining vertexes, and so on, until you have an ordering for how to remove vertexes, as well as a measure of how much visual distortion (typically in % of total screen size) removing each vertex will cause.
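In case it helps, that selection loop might be sketched like this. A toy fixed per-vertex cost stands in for the real visual-error metric (which would be recomputed for the neighbors after every removal, and which the post doesn’t specify):

```c
/* Produce a removal ordering for nv vertexes (nv <= 64 in this toy):
   repeatedly remove the non-core vertex with the smallest stand-in
   error. cost[] is a fixed toy error per vertex; the real algorithm
   recomputes visible error after each removal. Returns the number of
   removable (non-core) vertexes, written least-impact-first to order[]. */
int decimation_order(const double cost[], const int core[],
                     int nv, int order[])
{
    int alive[64];
    for (int i = 0; i < nv; i++) alive[i] = 1;

    int count = 0;
    for (;;) {
        int best = -1;
        for (int i = 0; i < nv; i++) {
            if (!alive[i] || core[i]) continue;
            if (best < 0 || cost[i] < cost[best]) best = i;
        }
        if (best < 0) break;          /* only core vertexes remain */
        alive[best] = 0;
        order[count++] = best;        /* least visual impact first */
        /* real version: re-triangulate, update neighbor errors here */
    }
    return count;
}
```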

You then figure out the size of the bounding box/sphere of your model, calculate the appropriate quality level, and draw only as many vertexes as needed for less than a pixel’s worth of visible error (times some LOD preference factor). The nice thing is that you can sort your vertex array from most important to least important vertex, and thus use glDrawRangeElements() or similar to draw only the vertexes you need (they are always contiguous from the start of the buffer).
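The “draw only as many vertexes as needed” step could then be a simple scan of the precomputed error table. The table layout and the one-pixel budget convention are my guesses at what the paper did:

```c
/* err_if_drawn[k] = visual error, as a fraction of screen height,
   when only the first k (most important) vertexes are drawn; it
   shrinks toward 0.0 as k approaches nverts (so the array has
   nverts + 1 entries). Return the smallest k whose error fits in
   one pixel times an LOD preference factor; that count can then
   drive glDrawRangeElements() or similar. */
int vertexes_needed(const double err_if_drawn[], int nverts,
                    double viewport_h, double lod_factor)
{
    double budget = (1.0 / viewport_h) * lod_factor;
    for (int k = 0; k <= nverts; k++)
        if (err_if_drawn[k] <= budget)
            return k;
    return nverts;
}
```

Raising `lod_factor` above 1.0 trades visible error for fewer vertexes; shrinking the window has the same effect automatically, which is exactly the screen-space behaviour Toom asked about.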

The drawback is that you have to change your triangulation data for each vertex you remove. The information needed for each removed vertex is where each of the incoming edges gets moved to, and which of the incoming edges get collapsed (and thus their triangles removed). With some smarts, you can sort your triangle data to allow for easy modification when going up/down in detail level, but it is a major wart on an otherwise very elegant algorithm.
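The per-vertex bookkeeping that paragraph describes might amount to a record like this (the field names are my invention, not from the paper):

```c
/* One record per removed vertex: enough information to patch the
   triangle index data when stepping a detail level up or down.
   Field names and the two-triangle assumption are illustrative;
   a vertex on a boundary may collapse only one triangle. */
typedef struct {
    int removed_vertex;    /* index of the vertex taken out at this step */
    int moved_to;          /* vertex its incoming edges re-attach to     */
    int collapsed_tris[2]; /* triangles that degenerate and drop out     */
} collapse_record;
```

Replaying these records in order coarsens the mesh; undoing them in reverse refines it, which is what makes the sorted-buffer trick above workable.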