Recently I implemented real-time reflection and GI in my CPU renderer, and I want to port them to OpenGL. These require writing fragments to arbitrary off-screen frame positions, but I have little experience with GLSL, so I don't know how to do this. Any help will be appreciated.
Well, shader image load/store lets you read from and write to arbitrary positions of an image. However, you'll need not only GL4-capable hardware, you'll also have to catch up with OpenGL 4.2 and GLSL 4.20 (or the ARB_shader_image_load_store extension). Otherwise it should be pretty straightforward, since random buffer access lets you pull in and write out arbitrary information.
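To make that concrete, here's a minimal fragment-shader sketch of the idea, assuming a GL 4.2 context; `giBuffer` and the destination calculation are placeholders for whatever your renderer actually computes:

```glsl
#version 420 core
// Hypothetical illustration: write a computed sample to an arbitrary
// target texel of an off-screen image, not just the fragment's own pixel.
layout(binding = 0, rgba32f) coherent uniform image2D giBuffer;

in vec2 vUV;
out vec4 fragColor;

void main()
{
    // Compute some arbitrary destination texel (placeholder logic).
    ivec2 dst = ivec2(vUV * vec2(imageSize(giBuffer)));
    vec4 value = vec4(1.0, 0.5, 0.25, 1.0);  // stand-in for a computed sample
    imageStore(giBuffer, dst, value);        // scattered write, independent of gl_FragCoord
    fragColor = value;
}
```

On the application side you'd bind the texture with `glBindImageTexture` and, because these writes are unordered, fence them with `glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT)` before reading the results.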
With pre-GL4 implementations, you're bound to writing via fragment shader outputs, which assign values for the fragment in question only, albeit possibly to multiple render targets. You can do random-access reads on textures and buffers, but no random writes. Still, if we're talking about a ray tracer, this is technically fine, since one or more rays generally intersect a single pixel at a time, so for each fragment you can access a buffer storing the current ray information for that fragment. I assume, of course, that we're rendering a full-screen quad so each fragment maps 1:1 to a pixel.
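A sketch of that pre-GL4 pattern, assuming a full-screen pass; the `rayOrigins`/`rayDirections` textures are hypothetical per-pixel ray state you'd maintain between passes:

```glsl
#version 330 core
// Pre-GL4 sketch: each fragment writes only to its own pixel (possibly
// on multiple render targets), but may READ from arbitrary positions.
uniform sampler2D rayOrigins;     // hypothetical per-pixel ray state
uniform sampler2D rayDirections;

layout(location = 0) out vec4 outColor;   // render target 0
layout(location = 1) out vec4 outRayHit;  // render target 1 (MRT)

void main()
{
    ivec2 pix = ivec2(gl_FragCoord.xy);
    vec3 ro = texelFetch(rayOrigins, pix, 0).xyz;     // random read: allowed
    vec3 rd = texelFetch(rayDirections, pix, 0).xyz;
    // ... intersect the scene here ...
    outColor  = vec4(abs(rd), 1.0);   // placeholder shading
    outRayHit = vec4(ro, 0.0);        // updated ray state for the next pass
}
```

Ping-ponging two FBOs with this kind of pass is the usual way to iterate bounces without random writes.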
The alternative is to first port your stuff to OpenCL and let OpenGL only render the result. For my taste, and from my experience with a prototypical two-bounce raytracer, doing this in OpenCL C is much more convenient, and the necessary porting effort is much more tractable. And if you want to do stuff with OpenGL, you can always use CL-GL interop to process intermediate or final results computed with OpenCL, and vice versa. The downside is that you might not have actual GPU support depending on the hardware configuration, and would thus be forced to have CL do all the processing on the CPU anyway. But that's a price you pay when using multi-vendor, multi-platform APIs - just something to think about beforehand.
For both approaches, however, there is a major concern: how do you represent your scene in GPU memory (or more generally in device memory with OpenCL) and how do you leverage acceleration structures directly on the GPU? Do you determine the set of visible objects on the CPU and update some buffer storing the scene information? Do you send the scene once because it fits completely and do any visibility calculations on the GPU directly? Can you employ acceleration structures directly on the GPU or do you need CPU roundtrips?
Another problem: when doing GPGPU and/or graphics programming, you need to know about the underlying hardware to be as efficient as possible - and as if knowing all the stuff involved in graphics programming and OpenGL/CL in general wasn't enough, hardware details differ from vendor to vendor. General rules apply to all platforms, but the nitty-gritty details are to be looked up in the respective CL developer guides from AMD, NVIDIA, and Intel. sigh
What are we talking about here, a raytracer, a radiosity solution, a hybrid? What kind of datasets do you want to process?
Thanks for your quick and detailed reply. I will take a look at the shader image load/store and OpenCL approaches you mentioned.
My major concerns are portability and stability, so I first write a software API and then "translate" it to OpenGL. SW/HW function names and their goals are the same; internal procedures might differ.
It's a hybrid solution. However, I'm using custom longitude/latitude datasets for reflection and GI. With my CPU renderer I can do anything in my fragment-shader callback function, since it's a C function, so memory access is not a problem: I first calculate the pixel color and its longitude/latitude coordinate, then write the pixel to memory using that coordinate. But I can't achieve this in GLSL; there are restrictions. If things get too complicated, maybe I'll use cubemaps/SSAO.
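For what it's worth, that CPU-side pattern maps fairly directly onto image load/store, assuming a GL 4.2 context; `envMap` and `vWorldDir` are hypothetical names here, and the color is a stand-in for whatever you compute:

```glsl
#version 420 core
// Sketch: compute a longitude/latitude coordinate from a world-space
// direction and scatter the shaded color to that texel of a lat/long map.
layout(binding = 0, rgba16f) uniform image2D envMap;  // hypothetical lat/long target

in vec3 vWorldDir;

void main()
{
    vec3 d = normalize(vWorldDir);
    float lon = atan(d.z, d.x);               // [-pi, pi]
    float lat = asin(clamp(d.y, -1.0, 1.0));  // [-pi/2, pi/2]
    ivec2 size  = imageSize(envMap);
    ivec2 texel = ivec2((lon / 6.2831853 + 0.5) * float(size.x),
                        (lat / 3.1415926 + 0.5) * float(size.y));
    texel = clamp(texel, ivec2(0), size - 1);
    vec4 color = vec4(1.0);                   // stand-in for the computed color
    imageStore(envMap, texel, color);         // write to the lat/long position
}
```

The one caveat versus the C version: concurrent fragments may write the same texel in undefined order, so if you need accumulation rather than overwrite, you'd have to go through `imageAtomicAdd` on an integer format.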
maybe I'll use cubemaps
Have fun with interreflections.
This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.