I am a newbie in OpenGL.
Recently I've been facing a problem: I want to split a 4K display window into an 8×10 = 80 viewport grid, where each viewport displays the 3D scene from a unique viewing angle.
So it's a large image and a large number of viewports.
Is there an efficient way to implement this?
Thanks a lot for your help!
I usually just start by making the first viewport… then the 2nd… then the next, and by the time you've made probably the 2nd one, you'll have already figured out how to optimally make 80.
I'm not even kidding, this is how I figure stuff out sometimes… and if it doesn't come to me right away, at least approaching the problem and trying to work through it makes it stick in my head until I figure it out.
With OpenGL 4.1 or the ARB_viewport_array extension, it's possible to render to multiple viewports in a single draw call (glViewportIndexedf/glViewportArrayv, gl_ViewportIndex). However, this requires a geometry shader, which usually has a performance cost. It would also require you to determine which viewports each primitive must be rendered to (or render each primitive to every active viewport), and you'd need to transform vertices separately for each viewport in which they appear. You could move transformation into the geometry shader, but that would mean redundant transformations for shared vertices.
So a naive implementation would probably be less efficient than just rendering the scene to each viewport in turn, while an efficient implementation would be substantially more complex. Also, an implementation isn't required to support more than 16 active viewports (GL_MAX_VIEWPORTS need only be 16), so with 80 viewports you'd still need multiple draw calls — at least ⌈80/16⌉ = 5 passes per batch of geometry.
The ARB_shader_viewport_layer_array extension allows gl_ViewportIndex to be written by the vertex shader, but this feature isn’t available in any core version. It also doesn’t help with any of the other issues besides the requirement for a geometry shader. The OVR_multiview and OVR_multiview2 extensions are more useful, but (AFAIK) not widely supported; these allow multiple views of a single set of geometry, with a separate transformation for each view.
In short, approaches other than rendering to each viewport in turn will be significantly more complex, less portable, and not necessarily faster.
I second GClements' comments about techniques which use a geometry shader, and about performance. I have tried this. There's not much perf benefit to a geometry-shader-based broadcast approach vs. just rendering to the viewports separately, and the latter gives you a lot more flexibility.
There are a number of techniques which can help simultaneous multi-viewport rendering perf when the FOVs of your viewports overlap significantly, but that's not the case with your setup, so I won't bore you with them.
If your scene is fairly intense (relative to your GPU's geometry-transform and rasterization capacity), you're likely to need a completely separate CPU (or GPU) frustum-cull pass on the front end, submitting a different set of batches to each viewport, to avoid wasting geometry transforms and primitive setup on geometry that's only going to be thrown out before rasterization anyway.
Then again, if your scene is trivial, you can just re-transform and re-rasterize everything for all frustums, verify performance is good enough, and call it a day.