Texture scaling in OpenGL

Hello
This is my first time posting here, so please bear with me. Now, I have a question about texture scaling in OpenGL. I’d like to know which version of OpenGL introduced the texture/display or framebuffer scaling feature. I hope that I’m asking in the right section and that you will be able to help me.

Cheers -Adam

All versions of OpenGL (and IrisGL before that) support texture mapping. It’s a fundamental feature of the hardware for which those APIs were originally designed. Actually, it’s the fundamental feature.

If you want to display a texture without scaling, you have to make an effort to select the vertex positions and texture coordinates accordingly.

So scaling up an image of a game or something and texture mapping are the same thing? Maybe you didn’t quite get my question; I know that I probably made it seem confusing.

Scaling is a subset of texture mapping. The texture mapping supported by all versions of OpenGL allows for arbitrary projective transformations, i.e.
u = (c11·x + c12·y + c13) / (c31·x + c32·y + c33)
v = (c21·x + c22·y + c23) / (c31·x + c32·y + c33)
A “scaled blit” (without rotation or shear) corresponds to the specific case [c12=0, c21=0, c31=0, c32=0, c33=1], i.e.
u = c11·x + c13
v = c22·y + c23
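
To make that concrete, a scaled blit done with texture mapping might look roughly like the following fixed-function sketch. The names srcTex, dstW and dstH are placeholders, and it assumes a current GL context with an orthographic projection set up in window pixels (e.g. glOrtho(0, winW, 0, winH, -1, 1)):

/* Sketch: draw texture srcTex scaled to a dstW x dstH rectangle. */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, srcTex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,        0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f((float)dstW, 0.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f((float)dstW, (float)dstH);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,        (float)dstH);
glEnd();
glDisable(GL_TEXTURE_2D);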

If you’re referring to the ability to render at a lower resolution then upscale, the way that would be done now (using framebuffer objects) requires OpenGL 3.0 or either the EXT_framebuffer_object or ARB_geometry_shader4 extension. Previously, that would require rendering to some platform-specific off-screen surface (pbuffer, GLXPixmap, etc) then copying the framebuffer contents into a texture which could then be scaled via texture mapping.
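
As a rough sketch of the FBO route (assuming GL 3.0 or ARB_framebuffer_object; lowW and lowH are placeholder dimensions for the low-resolution render target):

/* Create a low-resolution render target backed by a texture.
   A depth renderbuffer would also be attached for a 3D scene. */
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, lowW, lowH, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle the error */
}
/* render the scene at lowW x lowH here, then rebind the default framebuffer */
glBindFramebuffer(GL_FRAMEBUFFER, 0);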

Having rendered to a texture or renderbuffer, the result can then be copied to the window with scaling using glBlitFramebuffer (which was added in 3.0 via the EXT_framebuffer_blit extension). Or if rendered to a texture, that can then be rendered to the window using texture mapping.
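
For example (placeholder names from the FBO sketch above; needs GL 3.0 or EXT_framebuffer_blit):

/* Upscale the FBO contents to the window with a filtered blit. */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);          /* default framebuffer */
glBlitFramebuffer(0, 0, lowW, lowH,                  /* source rectangle    */
                  0, 0, winW, winH,                  /* destination rect    */
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);   /* filtered upscale    */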

So, if I have the extensions I can upscale something rendered at a lower resolution even with an OpenGL version older than 3.0 (maybe something like 2.0?)

You can do it with any version, it just gets messy (and probably rather inefficient) if you don’t have framebuffer objects. And video hardware which lacks FBOs is positively ancient at this point (EXT_framebuffer_object dates to 2004, OpenGL 3.0 to 2008).

A “fallback” solution which works on any version is to:
1. render the scene into a portion of the back buffer,
2. read the rendered scene into memory with glReadPixels,
3. upload that data to a texture (or multiple textures if the rendered area is larger than the maximum texture size),
4. use texture mapping to render an upscaled version to the window.
But transfers from GPU memory to CPU memory can be slow, so you’d use whatever extensions or platform-specific features are available to keep everything in GPU memory.
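
A rough sketch of steps 2 and 3 (tex is an already-allocated texture of at least sceneW × sceneH, and pixels is a suitably sized client-side buffer; all names are placeholders, and no MSAA is assumed):

/* Step 2: read the rendered area back to client memory. */
glReadBuffer(GL_BACK);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, sceneW, sceneH, GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Step 3: upload it into the existing texture (glTexSubImage2D needs GL 1.1;
   on bare 1.0 you would respecify the image with glTexImage2D instead). */
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, sceneW, sceneH, GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Step 4: draw a texture-mapped quad covering the window, as in the earlier sketch. */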

So I can do this with any OpenGL version, even older than OpenGL 3.0, and it will just be a little inefficient. Is that what you’re saying?

Not to beat a dead horse, but a GL API which will accomplish both #2 and #3 is glCopyTexSubImage2D. This eliminates the need to read the rendered content back to client memory or to a buffer object as a stopping-off point for populating the texture.

Like the glReadPixels() route, this presumes no MSAA.
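
A rough sketch (tex, sceneW and sceneH are placeholders; the texture must already be allocated at least sceneW × sceneH):

/* Copy straight from the framebuffer into the texture, skipping client memory. */
glBindTexture(GL_TEXTURE_2D, tex);
glReadBuffer(GL_BACK);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0,  /* target, mip level          */
                    0, 0,              /* offset within the texture  */
                    0, 0,              /* lower-left of the region   */
                    sceneW, sceneH);   /* size of the region to copy */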

Yes.

And possibly messy (in terms of the code required). Prior to 3.0, the maximum texture size wasn’t guaranteed to be any larger than 64x64, in which case you’d have to split the image into tiles. And if you’re using linear filtering, you need to take care to avoid artifacts at the tile boundaries.
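
One common partial mitigation is to give adjacent tiles a one-texel overlap and clamp each tile's sampling at its edges, e.g. (tileTex is a placeholder; GL_CLAMP_TO_EDGE needs GL 1.2):

/* Clamp so GL_LINEAR doesn't pull in wrapped texels from the opposite edge;
   adjacent tiles should also share a one-texel overlap so the filtered
   result matches across the seam. */
glBindTexture(GL_TEXTURE_2D, tileTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);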

Right. I forgot that has existed since 1.0.

So, you first said that I’d need OpenGL 3.0 for this. But that was a mistake; this could be done with OpenGL 1.0, right? I just want to make sure, since this got me quite confused. @GClements

No, he didn’t. The “or either” part is there in the text for a reason (though I have no idea how geometry shaders are supposed to help). And as later explained in the thread, you can copy the data in the framebuffer directly to a texture, so you don’t strictly “need” FBOs.

You don’t need 3.0, but the cleanest and most efficient solution is to use FBOs, which require 3.0 or one of the extensions which add FBOs (EXT_framebuffer_object or ARB_framebuffer_object). Failing that, if EXT_framebuffer_blit is supported, rendering to a platform-specific off-screen surface and using glBlitFramebuffer avoids a copy operation (and also avoids issues if the maximum texture size isn’t large enough).

The glFramebufferTexture wiki page lists ARB_geometry_shader4 as the originating extension (and the glFramebufferTexture*D variants redirect there). I thought that was a bit odd so I checked it and it does define glFramebufferTextureARB. But a more thorough check reveals that the function is defined only if FBOs are already provided by another extension or the core version (it’s in that extension due to layered framebuffers).

Oh, and one more thing. I can use framebuffer objects to upscale something rendered at a lower resolution even with OpenGL versions older than 3.0 (in my case I’d use OpenGL 2.0) if I have the extensions you were referring to, is that right? @GClements

If you have FBOs (via 3.0 or extensions), you can render to them as you would the default framebuffer, then upscale that to the default framebuffer either via glBlitFramebuffer (3.0 or EXT_framebuffer_blit) or by rendering a pair of texture-mapped triangles.
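
Roughly, the per-frame flow (placeholder names from the earlier FBO sketch):

/* Render at low resolution into the FBO, then upscale to the window. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, lowW, lowH);
/* ... draw the scene here ... */

glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the default framebuffer */
glViewport(0, 0, winW, winH);
/* ... then glBlitFramebuffer or a texture-mapped quad/triangle pair, as above ... */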

Prior to 3.0, the value of GL_MAX_TEXTURE_SIZE wasn’t required to be any higher than 64 and ARB_framebuffer_object has the same requirement for GL_MAX_RENDERBUFFER_SIZE, but it’s highly unlikely that you’ll encounter hardware with such a small limit. I’ve never encountered hardware with a value less than 256. But if you need to support pre-3.0 hardware, a maximum texture size of 256 is quite possible.
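
If in doubt, the relevant limits can be queried at startup, along these lines:

/* Query the limits that decide whether tiling is needed. */
GLint maxTexSize = 0, maxRenderbufferSize = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);
glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE, &maxRenderbufferSize);   /* needs FBO support */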
