I have made my engine support both OpenGL and Direct3D 11. In my experience, most problems can be fixed or worked around with some effort, but there is ONE major problem where an engine cannot be made entirely API-agnostic.
And that is the fact that OpenGL places its texture-coordinate origin (0,0) in the lower-left corner, while D3D places it in the upper-left corner.
The problem is that this mismatch invalidates every texture coordinate in your data.
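For reference, the conversion between the two conventions is simply v → 1 − v (u is unaffected). A minimal sketch, with the function name being my own:

```cpp
// Convert the V texture coordinate between the OpenGL convention
// (origin at the lower-left corner) and the D3D convention (origin
// at the upper-left corner). U is unaffected, and the mapping is
// its own inverse, so the same function converts in both directions.
float flipV(float v) {
    return 1.0f - v;
}
```

Rewriting every mesh's V coordinates this way at load time is the obvious alternative to the texture-flipping approach described below, but it touches all vertex data instead of all image data.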
My workaround so far is to load all textures upside down in D3D, which fixes all texture-coordinate problems. However, all render-to-texture operations now produce upside-down results, because I just violated D3D's conventions.
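The load-time flip itself is just a row swap over the pixel buffer. A minimal sketch, assuming a tightly packed image with no row padding (function name and layout are my own choices):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Flip a tightly packed image buffer vertically in place by swapping
// whole rows. `pixels` holds `height` rows of `width * bytesPerPixel`
// bytes each. This is the load-time trick: feed D3D the rows in
// reverse order so that GL-style texture coordinates sample correctly.
void flipImageVertically(std::vector<std::uint8_t>& pixels,
                         int width, int height, int bytesPerPixel) {
    const int rowSize = width * bytesPerPixel;
    for (int y = 0; y < height / 2; ++y) {
        std::swap_ranges(pixels.begin() + y * rowSize,
                         pixels.begin() + (y + 1) * rowSize,
                         pixels.begin() + (height - 1 - y) * rowSize);
    }
}
```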
So I added a mechanism to my engine that silently “patches” the modelview matrix whenever one renders to a texture, so that the geometry is rendered upside down into the texture, which fixes nearly all problems.
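The “patch” can be as small as pre-multiplying the matrix with diag(1, −1, 1, 1), which mirrors all geometry vertically. A sketch assuming column-major (OpenGL-style) storage, with names of my own choosing:

```cpp
#include <array>

using Mat4 = std::array<float, 16>; // column-major, OpenGL-style storage

// Compute diag(1, -1, 1, 1) * m, i.e. negate row 1 of m. Applied to
// the modelview (or projection) matrix, this mirrors all geometry
// vertically -- the silent render-to-texture patch described above.
Mat4 premultiplyYFlip(const Mat4& m) {
    Mat4 r = m;
    for (int c = 0; c < 4; ++c)
        r[c * 4 + 1] = -r[c * 4 + 1]; // row 1 of column c
    return r;
}
```

The engine would return the patched matrix from its matrix accessor whenever a render-to-texture target is bound, which is exactly why shaders that bypass that accessor break.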
The remaining issue arises whenever people do unexpected things in shaders: if they don’t use the engine-provided modelview matrix, the image doesn’t get flipped.
And finally, the OUTPUT, meaning the image rendered to the backbuffer, must NOT be flipped, or the result ends up wrong. People have to be absolutely aware of that; I can’t magically fix this entirely.
Since the graphics hardware is obviously able to do both, and MS has no intention of making D3D more portable to OpenGL, it would be great if at least OpenGL allowed setting a state that moves the texture-coordinate origin to the upper-left corner.
That would remove the one big obstacle that currently exists if one wants to port an engine between OpenGL and D3D.
The switch MIGHT, but would not really need to, also influence the window origin used for the viewport, the scissor rect and possibly some other functions. Those are actually very easy to fix in an engine, so they are not important. The really important thing is only that a given texture coordinate samples the same location in a texture under both APIs.