It’s probably not an original idea, but I haven’t come across it yet. Just wondering if this makes any sense…
Many engines render objects in full detail. Some adjust the level of detail based on the distance of the object from the camera.
What I want to do is, based on the current window dimensions and FOV, estimate the approximate projected size of an object's bounding box or sphere, then base the level of detail on that size.
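Roughly, I imagine the estimate looking something like this (an untested C++ sketch; the names are placeholders and the math may need checking):

#include <cmath>

// Estimate the on-screen radius, in pixels, of a bounding sphere.
// fovY is the vertical field of view in radians, viewportHeight the
// window height in pixels, distance the eye-to-center distance.
float projectedRadiusPixels(float radius, float distance,
                            float fovY, float viewportHeight)
{
    // Camera inside (or touching) the sphere: treat it as covering
    // the whole screen, i.e. maximum detail.
    if (distance <= radius)
        return viewportHeight;

    // Half the viewport height corresponds to tan(fovY/2) at unit
    // distance, so the "focal length" in pixels is:
    float focal = (0.5f * viewportHeight) / std::tan(0.5f * fovY);

    // The silhouette of a sphere of radius r seen from distance d
    // projects to radius r / sqrt(d^2 - r^2) at unit distance.
    return focal * radius /
           std::sqrt(distance * distance - radius * radius);
}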
The idea is that if the on-screen size of an object falls below a certain threshold, I could render it as a single pixel or simply not render it at all.
Slightly larger objects might be rendered as a blob of points or a small sphere (or maybe I'd just render the actual bounding box/sphere).
After that I'd just vary the amount of tessellation used for objects. Some complex objects might even maintain multiple geometry lists and use a different one based on the LOD.
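Picking the representation could then come down to a few pixel-size thresholds, something like this (the cutoff values here are completely arbitrary):

enum DetailLevel { Cull, Point, Blob, CoarseMesh, FullMesh };

DetailLevel pickDetail(float pixels)
{
    if (pixels < 1.0f)  return Cull;       // too small to bother drawing
    if (pixels < 3.0f)  return Point;      // single pixel/point
    if (pixels < 12.0f) return Blob;       // point blob or bounding sphere
    if (pixels < 60.0f) return CoarseMesh; // low-tessellation geometry list
    return FullMesh;                       // full geometry
}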
As a silly example, this would allow you to have a fly buzzing around that has a very high-polygon mesh. Most of the time it would be rendered as a simple point; only if the viewer happened to get very close or zoom in would the full geometry be used.
My point is, in a complex world with many objects in a wide spectrum of sizes, LOD based on distance alone seems insufficient. Doing something like this could make a HUGE difference in the number of triangles sent down the pipe.
Does this concept make any sense? If so, has anyone done this? And is the projected-size calculation sketched above roughly the right way to go about it?