glGetTexSubImage

I’ve just found out that there exists no trivial way to read back subregions of textures, which I find astonishing, if not ridiculous. Are there any reasons not to provide such a function other than laziness or forgetfulness? If only for symmetry, glGetTexSubImage should exist. If you can write to subregions, it is imo mandatory to provide the inverse operation, unless the memory is write-only, which textures aren’t, as glGetTexImage shows :confused: :confused: :confused:

Was glGetTexSubImage forgotten? And why wasn’t it introduced with any of the later OpenGL versions?

That’s very interesting. I think that it wasn’t added because there wasn’t a perceived need. Certainly, I’ve never heard anyone ask for it before now. :wink:

Once EXT_framebuffer_object is available, you’ll be able to get this functionality by binding the texture to the framebuffer and using ReadPixels.
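For what it’s worth, a hedged sketch of how that would look (assuming EXT_framebuffer_object is present and a context is current; `tex`, `x`, `y`, `width`, `height` and `pixels` are placeholders, not part of the original post):

```c
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
/* Attach the texture's base level as the color attachment... */
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
/* ...so the texture becomes the read source: fetch only the subregion. */
glReadPixels(x, y, width, height, GL_RGBA, GL_FLOAT, pixels);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glDeleteFramebuffersEXT(1, &fbo);
```

This is effectively glGetTexSubImage spelled with three extra calls, which is the poster’s point either way.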

Then why does glTexSubImage exist? For symmetry reasons alone there should be a glGetTexSubImage!

Using the forum search I have found at least two posts asking for glGetTexSubImage. And even if there were no use for it at present, that wouldn’t be a reason to leave it out, especially for such a trivial thing.
I could currently use it for debugging some preprocessing on the GPU. Of course you could do it with some dirty hacks, using glReadPixels before binding the pbuffer to the texture, but that’s really ugly.

What makes me seriously concerned about the future of OpenGL (not only regarding this question) is the common tendency among some OpenGL users, but especially ARB members and driver writers, to be extremely restrictive (lazy?) about further development of the OpenGL API :frowning:

Provide primitives, not solutions.

That means not asking “do we need it?”, “how many need it?” and “what for?”, but providing generic, widely usable features. Regarding this feature: obviously, access to subregions of a texture was considered a reasonable feature. So why restrict it artificially by only allowing write access? Only because nobody yelled I WANT READ ACCESS?

And if your argument about EXT_framebuffer_object and glReadPixels was meant seriously: OK, then please remove glTexSubImage from OpenGL, because the same could be achieved with render-to-texture and glDrawPixels :rolleyes:

The reason TexSubImage was introduced is that TexImage implicitly performs memory allocation. This makes it very, very heavy-weight. I think you’ll find that there’s a big performance difference between the speed of TexImage and TexSubImage when replacing the entire texture image. The same would not be true of GetTexImage vs. GetTexSubImage.
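The difference shows up in the canonical streaming pattern; a minimal sketch (the 256×256 RGBA size and the `pixels` buffer are illustrative assumptions):

```c
/* Once, at load time: glTexImage2D allocates storage (heavy-weight).
 * Passing NULL for the data pointer means "allocate but don't fill". */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

/* Every update: glTexSubImage2D only uploads into the existing
 * storage (light-weight, no allocation). */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 256, 256,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```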

I found the two threads you mentioned. They’re both two years old. The first one didn’t even have anything to do with GetTexSubImage. It turned out to be a VisualBasic problem. The other problem would be better solved by render-to-texture (using ColorMask) than GetTexSubImage anyway.

Implementors are reluctant to add features that they don’t think will be used, because it adds code that has to be maintained to already huge code bases.

Dunno…

Originally posted by idr:
The reason TexSubImage was introduced is that TexImage implicitly performs memory allocation. This makes it very, very heavy-weight. I think you’ll find that there’s a big performance difference between the speed of TexImage and TexSubImage when replacing the entire texture image. The same would not be true of GetTexImage vs. GetTexSubImage.

I would argue the other way round: if you introduce a feature (independent of the reasons), do it the right way, in this case with the inverse operation.

And regarding your argument: even if there were only a minor speed difference, with big textures (floating point, 1024×1024 or bigger) where you only need to read back some small subregions, it is still unnecessarily complicated to read back the complete texture (16 MB) only to throw most of it away afterwards…

Yes, as always, you can live without this feature; there always exist some ugly workarounds :rolleyes: but with such an attitude you will only get as far as there being no serious competitor who supplies solutions with fewer workarounds, hacks and special cases…

[b]
Implementors are reluctant to add features that they don’t think will be used, because it adds code that has to be maintained to already huge code bases.

Dunno…[/b]
I worry this will be exactly the reason why OpenGL will lose the competition with DirectX/Direct3D in the long run. And it is imo also the wrong argument for designing libraries/APIs that others should use. The main points for an API are elegance and usability for the application programmer, paired with performance. So the main question shouldn’t be how hard it is to implement; if things get introduced, do them the right way and don’t introduce crippled features.

If IHVs really had an interest in a strong OpenGL API, they should imo focus more on improving it directly instead of creating lots of fancy middleware tools like FX Composer, RenderMonkey and the others, which every other company or open-source project could create without problems, while everybody has to suffer from buggy or missing driver and low-level features…

GetTexSubImage is imo just another example of things going wrong with OpenGL.

I would argue the other way round: if you introduce a feature (independent of the reasons), do it the right way, in this case with the inverse operation.
The point he’s making is this. glTexSubImage has one fundamental difference from glTexImage: no memory allocation. As such, there is a very specific need for glTexSubImage.

I find the need for glGetTexSubImage (and, technically, glGetTexImage) to be kinda dubious. First, there are really only 2 ways to get data into a texture: use glTex(Sub)Image or glCopyTex(Sub)Image. If you did a glTex(Sub)Image call, then you had the texture data in main memory, and you should have kept it around yourself if you needed it (especially for performance reasons). If you did a glCopyTex(Sub)Image, then you can do a glReadPixels to read from the framebuffer where you copied the texture image from (in theory, these should perform the same as having glGetTexSubImage).

[quote]Yes, as always, you can live without this feature; there always exist some ugly workarounds [Roll Eyes] but with such an attitude you will only get as far as there being no serious competitor who supplies solutions with fewer workarounds, hacks and special cases…[/quote]

I would not call this an ugly workaround. If you find the need to do a “glGetTexSubImage”, then it sounds like you should be keeping a copy of the texture around yourself in memory. This is a performance win, so there’s even more reason to use it. And, if you copied it from the framebuffer, then you could have done a glReadPixels then to get the new data.

[quote]And it is imo also the wrong argument for designing libraries/APIs that others should use.[/quote]
No, actually, it’s a reasonable one, and it’s one of the reasons why D3D drivers are typically more stable than GL ones. You see, what IHVs have to implement to make D3D drivers is not the D3D API that you see; they only have to implement a specific subset of functionality. It is the D3D runtime (which Microsoft is responsible for writing, testing, and debugging) that converts this driver API into the D3D API. This division of labor means that D3D driver developers don’t have to write a huge amount of code. D3D drivers are purely hardware interface stuff.

By contrast, OpenGL drivers must implement the OpenGL API in its entirety. This means that it is a pretty huge undertaking to develop a GL driver. Why do you think that Intel’s GL drivers suck, while their D3D ones are pretty decent? It’s because writing GL drivers is hard.

Since OpenGL IHVs are responsible for implementing the entire API, we must make sure that the API is not littered with useless functionality (even though it already is), so as to make their jobs easier and to give us more stable GL implementations.

As to this specific functionality, it’d be dead easy to implement as a wrapper around glGetTexImage (and I mean for driver developers to implement it as such). Since there is no groundswell of need for this functionality as of yet, you will find that implementations of glGetTexSubImage will do exactly what you could do on your own. So you haven’t really won anything.

Just because something is a function doesn’t mean it is optimized.

Originally posted by Korval:
The point he’s making is this. glTexSubImage has one fundamental difference from glTexImage: no memory allocation. As such, there is a very specific need for glTexSubImage.

Well, then why aren’t they called glInitTexImage and glTexImage? Because there is an additional fundamental difference: access to the whole texture memory versus access to subregions. But this feature is incomplete, because there exists no inverse operation for the subregion part.

I find the need for glGetTexSubImage (and, technically, glGetTexImage) to be kinda dubious. First, there are really only 2 ways to get data into a texture: use glTex(Sub)Image or glCopyTex(Sub)Image. If you did a glTex(Sub)Image call, then you had the texture data in main memory, and you should have kept it around yourself if you needed it (especially for performance reasons). If you did a glCopyTex(Sub)Image, then you can do a glReadPixels to read from the framebuffer where you copied the texture image from (in theory, these should perform the same as having glGetTexSubImage).

There exists a third way: render to texture. And glReadPixels is just a dirty hack. In your application you don’t want to know how and where your texture was created; you just want to get the data of a subregion.
Plus, this is again the kinda dubious solution-oriented argumentation. Provide primitives, not solutions.

I would not call this an ugly workaround. If you find the need to do a “glGetTexSubImage”, then it sounds like you should be keeping a copy of the texture around yourself in memory. This is a performance win, so there’s even more reason to use it. And, if you copied it from the framebuffer, then you could have done a glReadPixels then to get the new data.

It is an ugly hack. When using render-to-texture with several passes into one big texture, debugging this with anything other than glGetTexSubImage is a dirty hack. You don’t want to know where exactly your texture came from during your algorithm; plus, with the current pbuffer extension it is not possible to use glReadPixels after you bind the buffer to a texture. But apart from the pbuffer issue, it would still be an ugly hack, because when accessing subregions of your texture you would have to know how your texture was created/used.

No, actually, it’s a reasonable one, and it’s one of the reasons why D3D drivers are typically more stable than GL ones. You see, what IHVs have to implement to make D3D drivers is not the D3D API that you see; they only have to implement a specific subset of functionality. It is the D3D runtime (which Microsoft is responsible for writing, testing, and debugging) that converts this driver API into the D3D API. This division of labor means that D3D driver developers don’t have to write a huge amount of code. D3D drivers are purely hardware interface stuff.

By contrast, OpenGL drivers must implement the OpenGL API in its entirety. This means that it is a pretty huge undertaking to develop a GL driver. Why do you think that Intel’s GL drivers suck, while their D3D ones are pretty decent? It’s because writing GL drivers is hard.

Since OpenGL IHVs are responsible for implementing the entire API, we must make sure that the API is not littered with useless functionality (even though it already is), so as to make their jobs easier and to give us more stable GL implementations.

There is absolutely nothing that would prevent the IHVs and the ARB from doing exactly the same thing as Microsoft does with Direct3D/DirectX. If they wanted to, they could create a working group for the medium/high-level stuff and provide it to the IHVs. I mean, they wouldn’t need to provide high-level stuff like mesh classes or spherical-harmonics preprocessors, but at least a decent high-level GPU API.
I haven’t used DirectX until now (only browsed the docs), but for my next project I will definitely go with DirectX to see how it compares to OpenGL. I don’t know if and by how much Direct3D is better, but I do know that OpenGL currently sucks regarding modern GPU features (glsl with buggy drivers, improper uniform virtualisation, no fx-format, render to texture). And just as shown in this thread, there is no real will in the OpenGL community to provide a good, competitive API, because there always exist some dirty hacks, it’s too complicated to do a layered approach like Microsoft’s, or whatever… Nothing gets done until OpenGL is seriously behind other APIs.

Just because something is a function doesn’t mean it is optimized.
It should be optimized. Hey, it is the inverse operation to glTexSubImage. Is that accelerated? Then glGetTexSubImage should be too. If not every IHV could provide an optimized implementation, it should be specified as optional, with a query method to test for it.

There exists a third way: render to texture.
Outside of WGL_ARB_render_texture (which, since the texture can’t retain this information, can’t really be called rendering to a texture), there is no render to texture in GL.

plus with the current pbuffer extension it is not possible to use glReadPixels after you bind the buffer to texture
Really? God, WGL_ARB_render_texture sucks even more than I thought…

I was going to mention that glReadPixels can be used with a Pixel Buffer Object for async reads (and therefore better performance than glGetTex*). However, since you can’t actually use glReadPixels, it’s kind of annoying.
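For reference, the async-read pattern being alluded to, sketched under the assumption of ARB_pixel_buffer_object support (`w` and `h` are placeholders):

```c
GLuint pbo;
glGenBuffersARB(1, &pbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB, w * h * 4, NULL,
                GL_STREAM_READ_ARB);

/* With a pack PBO bound, the last argument is a buffer offset rather than
 * a client pointer, and glReadPixels may return before the copy finishes. */
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, (void *)0);

/* ... do unrelated work here to hide the transfer latency ... */

void *data = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
/* ... use data ... */
glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
```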

There is absolutely nothing which would prevent IHVs and the ARB to do exactly the same thing like Microsoft with Direct3D/DirectX.
Well, except for money and their own sanity. The ARB, as an organization, doesn’t actually have much money, so they can’t really hire developers to write code for them. The ARB produces a specification; it is up to each IHV as to how to implement it.

Also, the DirectX mechanism has downsides (downsides that Microsoft themselves are moving to correct in DX10). Specifically, because the Microsoft code is responsible for marshalling and so forth, it cannot be easily optimized for specific hardware needs. As such, while D3D applications run well enough, OpenGL applications can often run faster. Batching primitives, for example, is far more important for good performance under D3D than under OpenGL.

I do know that OpenGL currently sucks regarding modern GPU features (glsl with buggy drivers, improper uniform virtualisation, no fx-format, render to texture)
First, buggy drivers aren’t GL’s fault (well, technically not: it is the fault of the glslang spec, which is very complex, being a high-level language that has to be built into drivers). And yes, they’re annoying. But they’re slowly improving.

Second, FX isn’t a function of a low-level graphics API. Even in D3D, FX is done via D3DX, which is an extension library built on top of D3D.

As for RTT, I agree fully.

And just as shown in this thread, there is no real will in the OpenGL community to provide a good, competitive API, because there always exist some dirty hacks
That, I disagree with. There is plenty of will in the GL community to improve OpenGL. You can see my (among others) ranting and raving about the ineptitude of the ARB on the Advanced forum. However, your glGetTexSubImage is far below the realm of a true “need”; it lies in the realm of a “like to have”. Render to texture, instanced mesh rendering, and the like are actual “needs”, and I’ll not have the ARB working on an admittedly trivial extension when they should be working on getting ARB_fbo out the door.

I prefer to pick my battles with the ARB. Rather than bothering them about little issues in the API, I would rather they hear about the big stuff first.

Though perhaps you do have a point that the GL community accepts substituting one function for another somewhat readily.

It should be optimized. Hey, it is the inverse operation to glTexSubImage. Is that accelerated? Then glGetTexSubImage should be too. If not every IHV could provide an optimized implementation, it should be specified as optional, with a query method to test for it.
If you have some query as to the speed of a function, you’re becoming D3D, and GL doesn’t do that.

You have no guarantee that glGetTexImage itself is “optimized” (given what the function has to do, I’m not sure what that would even mean). In fact, as a matter of general optimization, everyone says to avoid all glGet* calls because they are expected to be slow. As such, if glGetTexSubImage is no faster than glGetTexImage, what’s the problem?

You can also create texture data via SGIS_generate_mipmap, which is part of the core as of 1.4. With pbuffers and WGL_ARB_render_texture you can either use the pbuffer as a framebuffer or as a texture, but not both at the same time. The reason for that was so that implementations didn’t have to worry about the case of rendering to a pbuffer while using it as a texture.
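As a side note, the SGIS_generate_mipmap path is just a texture parameter; a one-line sketch (assuming the extension or a new-enough core context):

```c
/* With this set, every glTexImage2D / glTexSubImage2D upload to the base
 * level automatically regenerates the rest of the mip chain. */
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP_SGIS, GL_TRUE);
```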

Textures are created and initialized with one call because, quite frankly, doing it with two calls is just stupid. It would be an extra call in the 99.9% common case.

Originally posted by valoh:
[b]Provide primitives, not solutions.

That means not asking “do we need it?”, “how many need it?” and “what for?”, but providing generic, widely usable features. Regarding this feature: obviously, access to subregions of a texture was considered a reasonable feature. So why restrict it artificially by only allowing write access? Only because nobody yelled I WANT READ ACCESS?[/b]
Exactly. That’s the secret power of OpenGL: (i) do only what’s possible regarding hardware (current or near-future), and (ii) do only what’s needed.

And if you really want your own new feature, you’re free to write an extension and submit it to the OpenGL Extension Registry. You’ll see whether it gets implemented in drivers or not. And if the extension becomes popular enough, it could even end up in the next version of the GL spec.