Texture-language

What about a texture-language to save bandwidth and become resolution-independent? Maybe for OpenGL 3.0…

(At least the Stanford people should think about it :stuck_out_tongue: )

If you’re talking about procedural texturing, why does this need to be separate from the existing fragment shader?
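For illustration, here's the sort of thing I mean: a trivial procedural pattern evaluated directly in the existing fragment shader stage. This is only a sketch in GLSL 1.10-style syntax; the varying/uniform names are made up.

```glsl
// Minimal sketch: a procedural checkerboard evaluated in the ordinary
// fragment shader, with no separate "texture shader" stage involved.
// (GLSL 1.10-era syntax; the names below are invented for illustration.)
varying vec2 texCoord;   // interpolated UV from the vertex shader
uniform float tiles;     // number of checker tiles across the surface

void main()
{
    vec2 t = floor(texCoord * tiles);
    float checker = mod(t.x + t.y, 2.0);   // 0.0 or 1.0
    vec3 colorA = vec3(0.9, 0.85, 0.7);
    vec3 colorB = vec3(0.3, 0.2, 0.1);
    gl_FragColor = vec4(mix(colorA, colorB, checker), 1.0);
}
```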

why does this need to be separate from the existing fragment shader?
Not that I’m terribly in favor of this, but there is one good thing about having it be separate: it can run in parallel. If the fragment program doesn’t stall when it issues a texture access (as long as it doesn’t use the result immediately), then the texture processor could run in parallel with it.

Originally posted by Korval:
there is one good thing about having it be separate: it can run in parallel
I get kind of hazy when it comes to hardware implementation, but wasn’t the fragment program API designed to encourage parallel hardware anyway? Given a choice between

a) 1 unit running a fragment shader and 1 unit running a texture shader, and

b) 2 units both running combined fragment/texture shaders, for two fragments, at half the speed per fragment

wouldn’t it be pretty much a wash performance-wise? In which case we might as well keep things simple and generic.

wouldn’t it be pretty much a wash performance-wise?
If you assume any hardware with “texture shaders” would run at half speed. However, there is nothing that says that it must run at half the speed of other hardware.

Originally posted by Korval:
If you assume any hardware with “texture shaders” would run at half speed.
Not quite my point (if I can remember what my point was). As usual the OP didn’t give any indication of what he wanted his texture shader to do, but I can’t imagine it’d be so different from fragment shaders as to justify separate, dedicated hardware. So assumptions about relative speed of fragment/texture are largely irrelevant - what I meant to contrast, and apparently failed miserably, was “2 units each doing N amount of work on the same fragment” vs “2 units each doing 2N amount of work on different fragments”. If every unit takes the same time to do N, it’s a wash. (This might conceivably change if average triangle sizes shrink to or below the 1-pixel mark, so that you might not have two fragments using the same shader to parallelize over, but in that scenario the state change overhead would be so pathological anyway that I doubt you’d notice.)

For significantly complex static procedural textures you’d probably cache them via render-to-texture anyway, of course, at which point the whole parallelism question becomes moot.

Would this be the pack/unpack processors originally proposed for GL2 (by 3DLabs, years ago)?

I quite liked the idea.

Originally posted by zeckensack:
Would this be the pack/unpack processors originally proposed for GL2 (by 3DLabs, years ago)?

That I could maybe see a use for, but since it doesn’t have anything obvious to do with resolution-independence I doubt it’s what the OP had in mind.

I am not a HUGE fan of procedural textures myself because:

  1. they’re only good for a FEW surfaces like wood, water and maybe terrain.
  2. LOD/mipmapping issues (see the sketch below).

Although there are some “pros”, too; one of the big ones is water rendering.
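As a rough idea of how the LOD/mip problem can be attacked without mipmaps, here’s a sketch that fades a procedural checker toward its average based on the screen-space footprint. GLSL 1.10-style syntax with invented names, and only a cheap approximation of proper filtering:

```glsl
// Sketch: handle minification of a procedural pattern analytically with
// fwidth() instead of a mipmap chain. (Names are invented; this is only
// an approximation, not a correct box filter.)
varying vec2 texCoord;
uniform float tiles;

void main()
{
    vec2 uv = texCoord * tiles;
    float checker = mod(floor(uv.x) + floor(uv.y), 2.0);
    // Screen-space footprint of the pattern; near 1.0 means roughly one
    // whole tile per pixel, i.e. heavy minification.
    float footprint = max(fwidth(uv.x), fwidth(uv.y));
    // Fade toward the 0.5 average as the footprint grows, which is what
    // a mipmap chain of the baked pattern would converge to anyway.
    float fade = clamp(footprint, 0.0, 1.0);
    float filtered = mix(checker, 0.5, fade);
    gl_FragColor = vec4(vec3(filtered), 1.0);
}
```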

Let’s dream a bit: it could also make it possible to implement ‘custom’ texture decompression, like a JPEG/MPEG decoder on the GPU? Sounds interesting…
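Something in that direction is already possible for simple formats with the current fragment shader, e.g. expanding a paletted texture through a dependent lookup. This is only a sketch with invented names; a real JPEG/MPEG decoder would also need entropy decoding and an IDCT, which is another story entirely:

```glsl
// Sketch of "custom decompression" in today's fragment shader: a paletted
// texture expanded via a dependent lookup. (GLSL 1.10-era syntax; sampler
// and uniform names are invented for illustration.)
varying vec2 texCoord;
uniform sampler2D indexTex;    // 8-bit indices stored in a single-channel texture
uniform sampler2D paletteTex;  // 256x1 RGBA palette

void main()
{
    float index = texture2D(indexTex, texCoord).r;   // normalized 0..1
    // Remap the index so it lands on the centre of the matching palette texel.
    vec2 palCoord = vec2(index * (255.0 / 256.0) + (0.5 / 256.0), 0.5);
    gl_FragColor = texture2D(paletteTex, palCoord);
}
```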