In my application I’m facing performance issues when trying to render large scenes.
Problems start to arise when I reach around 100,000 separate objects.
Typically the scene is built as a tree, with different leaves referencing the same geometry but with a different transformation and/or properties.
To keep interactive frame rates during zoom/pan/rotate movements, I filter the entities so that only a subset of them gets drawn; when the movement ends, I switch back to my static rendering routine.
A huge problem I’m having is that the static rendering takes so long that it introduces a noticeable latency when the user decides to start a new movement.
So I’m experimenting with splitting the rendering of the final image into chunks (each chunk starts from the color, depth, and stencil buffers left by the previous one), so that after each chunk the application can handle incoming user input and start the dynamic drawing again.
I wanted to ask whether this is a bad solution or an acceptable one (with "bad" meaning that the problems it solves are outweighed by the ones it could introduce; I’ve just started developing this feature), and whether there are other products or projects with a similar design.