I have been so busy working on projects that I am way behind the curve on many issues directly and indirectly related to the future of fast/realtime 3D OpenGL applications. I should also admit (because I forgot to mention it previously) that I switched to OpenGL almost exactly when VBOs and FBOs became available, so I am totally oblivious to (read "ignorant of") the problems OpenGL developers had with pre-VBO/FBO engines/architectures/infrastructure.
Anyway, I also had not read anything about CUDA for a long, long time, since the early information was vague handwaving. Well, I'm almost halfway through a quick skim/read of the CUDA Programming Guide, and what do I begin to notice? It appears CUDA has an extremely clean, convenient, and efficient connection to OpenGL buffer objects. In fact, at first glance the interface to OpenGL looks cleaner/better than the one to D3D (? surprise, surprise ?). But that's not the point. The point is, for the several/many people driven nuts by the lack of geometry shader support, CUDA seems to provide an excellent (faster, cleaner, more flexible/general/capable) alternative way to generate geometry: spewing procedurally generated geometry straight into VBOs, PBOs, and (via PBO transfers) textures!
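To make the idea concrete, here is a minimal sketch of what the interop chapter of the guide seems to describe: register a GL vertex buffer with CUDA, map it to get a device pointer, and let a kernel write vertices directly into it, with no readback to the CPU. The function and kernel names (`genVertices`, `fillVBOWithCuda`) are my own hypothetical choices, and I have not benchmarked any of this; it is just my reading of the API.

```cuda
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>
#include <math.h>

// Hypothetical kernel: each thread writes one float4 vertex of a
// procedurally generated sine-wave strip into the mapped VBO.
__global__ void genVertices(float4* verts, int n, float t)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float u = (float)i / (float)n;
        verts[i] = make_float4(u, sinf(6.2831853f * (u + t)), 0.0f, 1.0f);
    }
}

// vbo is an existing GL buffer object already sized with glBufferData().
void fillVBOWithCuda(unsigned int vbo, int n, float t)
{
    // Registration could be hoisted out of the per-frame path; shown
    // inline here to keep the sketch self-contained.
    cudaGLRegisterBufferObject(vbo);

    float4* dptr = 0;
    cudaGLMapBufferObject((void**)&dptr, vbo);  // GL must not touch the buffer while mapped

    int block = 256;
    genVertices<<<(n + block - 1) / block, block>>>(dptr, n, t);

    cudaGLUnmapBufferObject(vbo);               // hand the buffer back to GL
    cudaGLUnregisterBufferObject(vbo);
    // ...after which glDrawArrays() can source the freshly generated
    // vertices without the data ever leaving the GPU.
}
```

If that map/launch/unmap round trip really is as cheap as the guide implies, it covers a lot of what people wanted geometry shaders for.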
Since I am behind the curve, I may be missing some gotchas, so please set me straight (Michael Gold or anyone else who is "ahead of the curve"). For example, perhaps CUDA setup/teardown and/or CUDA/OpenGL interoperability has too much overhead. From my brief read/skim, however, that doesn't appear to be the case.
This experience has made me begin to seriously grapple with a set of potentially important questions *** for those of us "stuck" with OpenGL *** by inertia, preference, stubbornness, Linux support, or platform independence.
The question is something like this: since most (actually ALL) of my applications are realtime, will never run fast enough, AND contain physics and other compute-intensive subsystems that will eventually "need" GPU support, is it actually BETTER to shift everything except the explicitly graphical aspects of my applications out of OpenGL and into CUDA (or OpenCL, assuming OpenCL ~= CUDA)?
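What I have in mind for the "compute-intensive subsystems" part is something like the following sketch: particle state lives entirely in device memory, a CUDA kernel does the physics step, and OpenGL only ever sees the position buffer through the interop path described above. The kernel name, the forward-Euler integration, and the ground-plane bounce are all my own illustrative choices, not anything from the guide.

```cuda
#include <cuda_runtime.h>

// Hypothetical physics step: forward-Euler integration of n particles,
// with gravity and a crude bounce off the y=0 ground plane. pos would be
// the mapped GL vertex buffer; vel stays in ordinary device memory.
__global__ void integrate(float4* pos, float4* vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 p = pos[i];
    float4 v = vel[i];

    v.y -= 9.81f * dt;                 // gravity
    p.x += v.x * dt;
    p.y += v.y * dt;
    p.z += v.z * dt;

    if (p.y < 0.0f) {                  // inelastic bounce off the ground
        p.y = 0.0f;
        v.y = -0.5f * v.y;
    }

    pos[i] = p;
    vel[i] = v;
}
```

The appeal is that the simulation results never cross the bus: the same buffer the kernel writes is the one glDrawArrays() renders, with OpenGL reduced to the "explicitly graphical" role.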
Off the top of my head, my reaction is "that makes sense".
This does not answer everything people have been complaining about, but I see more and more aspects it might apply to. And I almost wonder whether somebody else has had similar thoughts, and whether that led to plans to shift everything non-explicitly-graphical OUT of OpenGL. If nothing else, this makes me less worried that "so many people/projects/organizations will abandon OpenGL that soon no quality fast/realtime Linux/multiplatform support will exist".
Anyway, I'm curious what less oblivious OpenGL gurus think of this crazy speculation. Am I just making lemonade outta lemons?