There exists a third way: render to texture.
Outside of WGL_ARB_render_texture (which, since the texture itself never actually retains the rendered data, can’t really be called rendering to a texture), there is no render to texture in GL.
Plus, with the current pbuffer extension, it is not possible to use glReadPixels after you bind the buffer to a texture.
Really? God, WGL_ARB_render_texture sucks even more than I thought…
I was going to mention that glReadPixels can be used with a Pixel Buffer Object for asynchronous reads (and therefore better performance than glGetTex*). However, since you can’t actually use glReadPixels, that’s kind of moot, which is annoying.
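For reference, here is roughly what that asynchronous readback looks like when glReadPixels *is* usable. This is a minimal sketch, not code from this thread: the names `pbo`, `WIDTH`, and `HEIGHT` are illustrative, and I’m using the core-promoted token names — the period-correct ARB_pixel_buffer_object extension spells them with an `_ARB` suffix (e.g. `GL_PIXEL_PACK_BUFFER_ARB`).

```c
/* Sketch: asynchronous framebuffer readback via a Pixel Buffer Object.
 * Assumes a current GL context supporting pixel buffer objects.
 * pbo, WIDTH, and HEIGHT are illustrative names, not from the thread. */
GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, WIDTH * HEIGHT * 4, NULL, GL_STREAM_READ);

/* With a pack PBO bound, the last argument is an offset into the buffer,
 * not a client pointer, so this call can return without stalling the GPU. */
glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);

/* ...do other CPU work while the transfer completes... */

void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* use pixels here */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
```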
There is absolutely nothing that would prevent IHVs and the ARB from doing exactly the same thing as Microsoft does with Direct3D/DirectX.
Well, except for money and their own sanity. The ARB, as an organization, doesn’t actually have much money, so they can’t really hire developers to write code for them. The ARB produces a specification; it is up to each IHV as to how to implement it.
Also, the DirectX mechanism has downsides (downsides that Microsoft itself is moving to correct in DX10). Specifically, because the Microsoft code is responsible for marshalling and so forth, it cannot be easily optimized for specific hardware needs. As such, while D3D applications run well enough, OpenGL applications can often run faster. Batching primitives, for example, is far more important for good performance under D3D than under OpenGL.
I do know that OpenGL currently sucks regarding modern GPU features (GLSL with buggy drivers, improper uniform virtualisation, no FX format, render to texture).
First, buggy drivers aren’t GL’s fault (well, technically not: it is the fault of the glslang spec, which is very complex, being a high-level language that has to be built into drivers). And yes, they’re annoying. But they’re slowly improving.
Second, FX isn’t a function of a low-level graphics API. Even in D3D, FX is done via D3DX, which is an extension library built on top of D3D.
As for RTT, I agree fully.
And, just as shown in this thread, there is no real will in the OpenGL community to provide a good or competitive API, because there always exist some dirty hacks.
That, I disagree with. There is plenty of will in the GL community to improve OpenGL. You can see my (among others) ranting and raving about the ineptitude of the ARB on the Advanced forum. However, your glGetTexSubImage is far below the realm of a true “need”; it lies in the realm of a “like to have”. Render to texture, instanced mesh rendering, and the like are actual “needs”, and I’ll not have the ARB working on an admittedly trivial extension when they should be working on getting ARB_fbo out the door.
I prefer to pick my battles with the ARB. Rather than bothering them about little issues in the API, I would rather they hear about the big stuff first.
Though perhaps you do have a point: the GL community does somewhat readily accept substituting one function for another.
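For what it’s worth, once framebuffer objects ship, render to texture should look roughly like the following. This is a sketch only, using the EXT_framebuffer_object names; the identifiers `fbo` and `tex` and the 256×256 size are made up for illustration.

```c
/* Sketch: render to texture via a framebuffer object (EXT_framebuffer_object
 * names). fbo, tex, and the texture size are illustrative assumptions. */
GLuint fbo, tex;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) ==
    GL_FRAMEBUFFER_COMPLETE_EXT) {
    /* Render here: results land directly in tex. No pbuffer context
     * switch, no copy — which is exactly why this is a real "need". */
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```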
It should be optimized. Hey, it is the inverse operation of glTexSubImage. Is that accelerated? Then glGetTexSubImage should be too. If not every IHV can provide an optimized implementation, it should be specified as optional, with a query method to test for it.
If you have some query as to the speed of a function, you’re becoming D3D, and GL doesn’t do that.
You have no guarantee that glGetTexImage itself is “optimized” (given what the function has to do, I’m not sure what that would even mean). In fact, as a matter of general optimization advice, everyone says to avoid all glGet* calls because they are expected to be slow. As such, if glGetTexSubImage would be no faster than glGetTexImage, what’s the problem?
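And in the meantime, a glGetTexSubImage can be emulated with glGetTexImage plus a sub-rectangle copy on the CPU. A minimal sketch of the copy half, assuming a tightly packed RGBA8 image in client memory (the helper name `copy_subrect` is my own, not a GL function; the glGetTexImage call is shown only as a comment):

```c
#include <string.h>

/* Copy a w-by-h sub-rectangle starting at (x, y) out of a tightly packed
 * RGBA8 image that is full_w pixels wide. `out` must hold w * h * 4 bytes.
 * In the emulation, `full` would be filled first with something like:
 *   glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, full);
 */
void copy_subrect(const unsigned char *full, int full_w,
                  int x, int y, int w, int h,
                  unsigned char *out)
{
    const int bpp = 4; /* RGBA8: 4 bytes per pixel */
    int row;
    for (row = 0; row < h; ++row) {
        memcpy(out + (size_t)row * w * bpp,
               full + ((size_t)(y + row) * full_w + x) * bpp,
               (size_t)w * bpp);
    }
}
```

Wasteful, yes — you read back the whole level to keep a piece of it — but it works today, which is rather the point about picking battles.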