Hi, I’m quite new to OpenGL, but I’m working with two fully developed OpenGL apps.
Both are real-time simulations, one of ice and one of snow. I want to visually combine the results of each, so that ice and snow are on the screen at once. This is purely visual; I don’t care about them interacting for now.
I would like to do this with minimal modifications to each app. My idea is to have each app render its entire frame as usual, but to an off-screen buffer; I would then pull the pixels I want from each frame and combine them into the image that is actually displayed (e.g. wherever the ice frame isn’t shown, the snow frame is). Ideally, I could choose which parts of each frame to display almost arbitrarily: not just the top half ice and the bottom half snow, but practically any combination.
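To make the “combine” step concrete, here is a rough sketch of what I have in mind, assuming each app’s frame has been read back into an RGBA byte buffer (e.g. via glReadPixels) and a per-pixel mask says which app each pixel comes from. The function and buffer names are just mine for illustration, not part of any OpenGL API:

```c
#include <stddef.h>
#include <stdint.h>

/* Composite two RGBA frames using a per-pixel mask: where mask[i] is
 * nonzero, take the pixel from frame_a (ice); otherwise take it from
 * frame_b (snow). Frames are tightly packed, 4 bytes per pixel. */
void composite_frames(const uint8_t *frame_a, const uint8_t *frame_b,
                      const uint8_t *mask, uint8_t *out,
                      size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; ++i) {
        const uint8_t *src = mask[i] ? frame_a + 4 * i : frame_b + 4 * i;
        out[4 * i + 0] = src[0];
        out[4 * i + 1] = src[1];
        out[4 * i + 2] = src[2];
        out[4 * i + 3] = src[3];
    }
}
```

Since the mask is arbitrary per pixel, this would let me pick any combination of regions, though I realize reading full frames back to the CPU every frame may be slow.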
Does this sound plausible? Would I be able to render to two different framebuffers and then take any region I wish from each and combine them? Is there an alternative or better approach for doing something like this?
Thanks a lot!