I'm still new to OpenGL programming. I'm trying to recreate basic real-time painting, in 2D for now. All the samples I found online are based on the old fixed-function pipeline. I wrote a basic fragment shader with a uniform into which I feed the mouse coordinates; using gl_FragCoord I can recolor part of the screen (the texture is placed on a fullscreen quad, onto which a 3D scene was previously rendered via an FBO → the next step after 2D painting would be to bake the color back onto the 3D object, probably using UV positions encoded into RG). Based on that, I modify the output color of the shader. Now, how can I "remember" the positions of the previously colored fragments so that the underlying texture is modified, with everything done on the GPU if possible?
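A simplified sketch of the kind of fragment shader I mean (the identifiers are made up, it's not my exact code):

```glsl
#version 330 core

uniform sampler2D uScene;    // colour texture from the FBO, shown on the fullscreen quad
uniform vec2  uMouse;        // mouse position in window pixels
uniform float uBrushRadius;  // brush radius in pixels

in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    vec4 base = texture(uScene, vTexCoord);
    // paint red wherever this fragment lies within the brush radius of the mouse
    float d = distance(gl_FragCoord.xy, uMouse);
    fragColor = (d < uBrushRadius)
        ? mix(base, vec4(1.0, 0.0, 0.0, 1.0), 0.5)
        : base;
}
```

This recolors the screen only on the frames where the pointer is over a fragment; the paint doesn't persist.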
Thanks in advance
“Baking” a rendered image into a texture is done by rendering “in reverse”. Having rendered the scene normally, you then render each mesh using texture coordinates as spatial coordinates (position) and spatial coordinates as texture coordinates.
Spatial coordinates are transformed by the model-view-projection matrix to obtain screen coordinates and depth. Screen coordinates are used to obtain colour and depth values from the rendered scene. The calculated and retrieved depth values are compared and the fragment is discarded (or blended with zero alpha) if there is a mismatch (so that texels which don’t appear in the scene aren’t updated). Otherwise, the colour retrieved from the scene is rendered to the texture.
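A rough GLSL sketch of those two steps (names are illustrative; it assumes the scene's colour and depth buffers were rendered to textures beforehand):

```glsl
// --- Vertex shader: rasterise over the texture by emitting UVs as position,
// --- while the projected spatial position is passed along for the lookup.
#version 330 core
uniform mat4 uMVP;           // the scene camera's model-view-projection matrix
in vec3 aPosition;
in vec2 aTexCoord;
out vec4 vProjPos;           // clip-space position as seen by the scene camera
void main()
{
    // texture coordinates [0,1] -> NDC [-1,1]; the mesh is drawn "unwrapped"
    gl_Position = vec4(aTexCoord * 2.0 - 1.0, 0.0, 1.0);
    vProjPos = uMVP * vec4(aPosition, 1.0);
}

// --- Fragment shader: fetch the rendered scene at the projected position,
// --- discarding texels whose depth doesn't match (i.e. occluded texels).
#version 330 core
uniform sampler2D uSceneColour;
uniform sampler2D uSceneDepth;
in vec4 vProjPos;
out vec4 fragColor;
void main()
{
    vec3 ndc = vProjPos.xyz / vProjPos.w;   // perspective divide
    vec3 uvz = ndc * 0.5 + 0.5;             // NDC -> [0,1] for texture fetch
    float sceneDepth = texture(uSceneDepth, uvz.xy).r;
    if (abs(uvz.z - sceneDepth) > 0.001)    // mismatch: texel not visible
        discard;
    fragColor = texture(uSceneColour, uvz.xy);
}
```

The depth tolerance (0.001 here) is an assumption; in practice it has to be tuned to the depth buffer's precision.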
The problem with trying to update a texture while rendering a projected scene is the irregular scaling. A single pixel on screen may map to multiple texels (minification), or multiple pixels on screen may map to a single texel (magnification). Both cases can occur within the same triangle. In the minification case, you'd need to inverse-project the boundaries of the pixel into the texture and fill that region with the colour. For magnification, you'd need to average the values from all pixels covering a single texel, which isn't straightforward given that each fragment shader invocation only gets to see its own value, not those of its neighbours.
Thank you @GClements! I understand the process of baking the rendered texture back onto the models by reversal. What I'm having some trouble comprehending is how to get what I change via the fragment shader (gl_FragCoord + mouse coordinates) back into the texture (I'm thinking of the simple 2D painting scenario for now). Currently my shader changes the color underneath the frag coord only while the pointer is at the designated screen position, and only there. How do I accumulate it over frame ticks?
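What I'm imagining (not sure if this is the right approach) is keeping the paint in its own texture attached to an FBO, never clearing it, and reading last frame's result back in each pass, swapping two textures on the CPU side ("ping-pong"). Something like this (names made up):

```glsl
// Brush pass, rendered into paint texture B while A holds last frame's
// result; the two textures are swapped each frame on the host side.
#version 330 core
uniform sampler2D uPrevPaint;  // last frame's accumulated paint
uniform vec2  uMouse;          // mouse position in window pixels
uniform float uBrushRadius;    // brush radius in pixels
in vec2 vTexCoord;
out vec4 fragColor;
void main()
{
    vec4 prev = texture(uPrevPaint, vTexCoord);
    float d = distance(gl_FragCoord.xy, uMouse);
    // keep the old paint everywhere, blend new paint in under the brush
    fragColor = (d < uBrushRadius)
        ? mix(prev, vec4(1.0, 0.0, 0.0, 1.0), 0.5)
        : prev;
}
```

Is this the idiomatic way to do it, or is there a way to write into a single persistent texture without the ping-pong?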