Is glTexSubImage fast enough for real-time procedural textures, à la classic Unreal?

I’m a big fan of the procedural texture effects in the old Unreal engine. Is glTexSubImage2D fast enough to poke the colors of a 256x256 texture in real time, before each render? This is for an FPS engine, so unless it is very fast, it will slow down when combined with everything else the program loop is doing.

The one to use is glCopyTexSubImage2D.

Cool. We’ll see how it goes.

You know, why hasn’t anyone written a DLL to create and update procedural textures automatically in OpenGL? It would be a nice little self-contained routine…or maybe someone has.

glCopyTexSubImage is for copying the framebuffer contents into a texture, and it is hardware accelerated on common hardware. So the process is to render the image to the framebuffer and then copy it… I think no additional algorithm is necessary for this.


No, I meant an algorithm to make an animated procedural texture on-the-fly, for fire, smoke, and water.

Originally posted by halo:
No, I meant an algorithm to make an animated procedural texture on-the-fly, for fire, smoke, and water.

Yeah, you can use register combiners, vertex/fragment/pixel programs/shaders, or code it in the GL shading language (almost C), and it is all done on the card. See the NVSDK with many ‘procedural’ things. Not for pre-GeForce3 hardware though, and better with Radeon 9600 and up.

But there is not THE ONE algorithm for creating procedural textures, as it highly depends on what you want to do.

For “natural” things like fire, smoke, and water, Perlin noise is a good choice (see ). But you still have to render it to the framebuffer and do glCopyTexSubImage2D. On Windows, there’s something like WGL_RENDER_TEXTURE, but I do not know anything about it as I do not use Windows (does anyone?).
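For the fire case specifically, here is a rough CPU-side sketch of the well-known palette-fire propagation (this is not Unreal’s actual algorithm, just the classic “demo fire” effect): the bottom row is reseeded with random heat every frame, and heat rises while cooling off. The resulting heat values would then be mapped through a black-red-yellow-white palette into the RGBA buffer you upload.

```c
#include <stdlib.h>

#define FIRE_W 256
#define FIRE_H 256

/* heat[0] is the top of the texture, heat[FIRE_H-1] the bottom */
static unsigned char heat[FIRE_H][FIRE_W];

void fire_step(void)
{
    /* reseed the bottom row with random heat */
    for (int x = 0; x < FIRE_W; x++)
        heat[FIRE_H - 1][x] = (unsigned char)(rand() % 256);

    /* each cell becomes the average of the three cells below it,
     * minus a little cooling, so flames rise and fade out */
    for (int y = 0; y < FIRE_H - 1; y++) {
        for (int x = 0; x < FIRE_W; x++) {
            int l = (x + FIRE_W - 1) % FIRE_W;   /* wrap horizontally */
            int r = (x + 1) % FIRE_W;
            int v = (heat[y + 1][l] + heat[y + 1][x] + heat[y + 1][r]) / 3 - 1;
            heat[y][x] = (unsigned char)(v < 0 ? 0 : v);
        }
    }
}
```

Calling `fire_step()` once per frame and uploading the palettized result is exactly the kind of per-frame 256x256 update being asked about.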

Wouldn’t it be faster to just calculate the pixels and poke those directly to a memory buffer?

Store your CPU results in a standard array, and use glTexSubImage to upload them to the card. It is pretty fast. And you can even tell the card to do the mipmaps itself.

Well sorry, it is glTexSubImage2D (for 2d textures).

And for hardware automatic mipmaps, use:

Of course, you will have to first fully define a texture with glTexImage2D.

(Well, forget it, all the glTexSub stuff was already said in the first post, sorry again.)

For faster response, you may update only one procedural texture per frame. So if only one is visible, fine, each frame has its updated texture. If two procedural textures are visible, update #1, render frame, update #2, render frame.

Hope it helps a bit.
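That one-update-per-frame idea can be sketched as a little round-robin scheduler. The struct and names here are made up for illustration; `update()` would be whatever regenerates the pixels and calls glTexSubImage2D:

```c
/* Round-robin updater: each frame, refresh only the next visible
 * procedural texture, spreading the CPU cost across frames. */
typedef struct {
    int visible;             /* is this texture on screen this frame? */
    void (*update)(void);    /* regenerate pixels + re-upload */
} ProcTex;

/* Updates at most one visible texture, starting the scan at `cursor`.
 * Returns the cursor to resume from next frame. */
int update_one(ProcTex *tex, int count, int cursor)
{
    for (int i = 0; i < count; i++) {
        int j = (cursor + i) % count;
        if (tex[j].visible) {
            tex[j].update();
            return (j + 1) % count;
        }
    }
    return cursor;           /* nothing visible, nothing updated */
}
```

Called once per frame, this gives every visible procedural texture a fresh frame in turn, and skips the hidden ones entirely.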

But glCopyTexSubImage2D is the one which is hardware accelerated, and so it is going to be the fastest way to update textures… however, your content has to be rendered first. I guess it’s a matter of which case you have: if the things you want to display in the texture can be rendered with OpenGL primitives, it will be faster to do so, rather than calculating pixels yourself. But if you use a Perlin noise function, for example, the other way might be faster.


If you’re rendering particles to a texture and then displaying the texture, you may as well just render the particles. I like the old Unreal procedural textures better than particles, because drawn textures have better blending and “heat” effects, whereas particles tend to just look like a bunch of circles.