I’m looking for the fastest way to implement a dynamic mesh and would appreciate any advice.
My mesh contains a large number of triangles (~500k), and on each frame the location/color of every vertex can change. I need a stereo image, so each frame gets rendered twice with only the camera location changing. There is no dynamic lighting, texture mapping or anything else.
The two main possibilities I've considered are:
A) Vertex buffer objects: altering the vertex locations/colors on the CPU each frame, then rendering once for each eye.
B) Having a static mesh preloaded and deforming it in a vertex shader on the GPU. The deformations come from a texture that is uploaded each frame. Again, it's then rendered once for each eye.
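For concreteness, the vertex shader I have in mind for option B would look roughly like this (just a sketch; the attribute/uniform names are placeholders I made up, and the texture layout is one texel per vertex):

```glsl
#version 330 core

in vec3 basePosition;   // static, preloaded mesh vertex
in vec2 vertexTexel;    // texel coordinate addressing this vertex's data

uniform sampler2D deformTex; // RGB = xyz offset, uploaded every frame
uniform sampler2D colorTex;  // per-vertex color, uploaded every frame
uniform mat4 mvp;            // per-eye model-view-projection matrix

out vec4 vColor;

void main() {
    // Fetch this frame's displacement and color for the vertex.
    vec3 offset = texture(deformTex, vertexTexel).rgb;
    vColor = texture(colorTex, vertexTexel);
    gl_Position = mvp * vec4(basePosition + offset, 1.0);
}
```

The only per-eye change would be the `mvp` uniform; everything else stays bound across both draws.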
With option B, given that the deformed vertices are identical for both eyes, would the vertex shader outputs be cached and reused for the second eye, or would they be recomputed from scratch? I'm guessing that if they are recomputed, option B may end up slower than option A.