I am implementing a 3D fluid simulation using OpenGL. At each step of the simulation I need to render to several different 3D textures, but I’m unsure what the best approach is. Right now I’m iterating over each slice of the 3D texture every frame; everything works, but as you can imagine the application’s performance has tanked. Does anybody know a more efficient way to render to a 3D texture?
What are you actually “rendering”? Are you just rendering a quad and using a fragment shader as a compute shader?
If so, it’s entirely possible that you’re up against the inherent computational complexity of the model and any optimisations would only provide modest gains. It’s impossible to be sure without knowing more details.
Basically yes, I use the 3D texture to store the fluid data, such as velocity, pressure, etc.
During each frame of the simulation I iterate over all the slices of the 3D texture, bind the current slice to a framebuffer, and finally use a fragment shader to compute the data and write it into the texture.
I managed to simulate a fluid in two dimensions using textures as large as 800x600 without any major performance issues, but when I moved to three dimensions, even a small 16x16x16 3D texture is enough to cause noticeable stutters.
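For reference, the per-slice loop looks roughly like this (a sketch, not my exact code; the texture, program, and VAO names are illustrative, and it assumes an FBO is already created):

```c
/* One fullscreen-quad pass per slice of the 3D texture. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glUseProgram(fluidProgram);
glBindVertexArray(quadVao);
for (int z = 0; z < depth; ++z) {
    /* Attach slice z of the 3D texture as the colour target. */
    glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              velocityTex, 0, z);
    glUniform1i(sliceUniform, z);          /* tell the shader which slice it is */
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); /* one fullscreen quad per slice */
}
```

So a 16x16x16 texture already means 16 attachment changes and 16 draw calls per pass, per frame.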
If you can assume an OpenGL version which supports compute shaders (4.3 or ARB_compute_shader), use them.
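With a compute shader the whole texture can be written in a single dispatch, with no framebuffer involved at all. A minimal sketch (GL 4.3 GLSL; the image format, bindings, and the actual fluid update are placeholders):

```glsl
#version 430
layout(local_size_x = 8, local_size_y = 8, local_size_z = 8) in;
layout(rgba16f, binding = 0) uniform image3D velocityOut;
uniform sampler3D velocityIn;

void main() {
    // One invocation per texel of the 3D texture.
    ivec3 p = ivec3(gl_GlobalInvocationID.xyz);
    vec4 v = texelFetch(velocityIn, p, 0);
    // ... fluid update for this cell goes here ...
    imageStore(velocityOut, p, v);
}
```

On the CPU side this would be launched with something like `glDispatchCompute(width / 8, height / 8, depth / 8)`, followed by `glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT)` before sampling the result in a later pass.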
Otherwise, try using a layered framebuffer, i.e. binding the entire 3D texture to the framebuffer with glFramebufferTexture, drawing all of the quads in one draw call, and using a geometry shader to set gl_Layer for each quad so that each one is routed to its slice.
If you can’t do that (geometry shaders require 3.2 or ARB_geometry_shader4), ensure that you aren’t doing anything which could cause synchronisation between slices, because the overhead of that could easily exceed the computation time, particularly for small textures.
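If layered rendering is available, the fan-out might look roughly like this (GL 3.2-style GLSL, one instance per slice; names are illustrative):

```glsl
// --- Vertex shader: forward the instance index (one instance per slice). ---
#version 150
in vec2 position;
flat out int vInstance;
void main() {
    vInstance = gl_InstanceID;
    gl_Position = vec4(position, 0.0, 1.0);
}

// --- Geometry shader: route each primitive to its slice via gl_Layer. ---
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
flat in int vInstance[];
void main() {
    for (int i = 0; i < 3; ++i) {
        gl_Layer = vInstance[0]; // selects the 3D-texture layer
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
```

The 3D texture is then attached whole with `glFramebufferTexture(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, tex, 0)` and all slices are filled with a single `glDrawArraysInstanced(GL_TRIANGLES, 0, 6, depth)` call.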