How to check a texture unit?

Hi!

Is there a way to check whether a texture unit is active and whether a texture has been bound to it?

I want to be able to conditionally sample the texture, or set the color to a constant, depending on whether the texture unit is active or not.

.rex

I am not aware of a way to do that. That doesn’t mean there isn’t one.

My understanding is that any sampler uniform in a shader enables its texture unit, regardless of what is bound to it. I’ve been able to get some fun infinity effects from the framebuffer by reading from unbound texture units in shaders!

I think the simplest solution is to pass a uniform (or attribute) of some sort carrying the configuration you want the shader to run with. If you are trying to conserve variables, you could even encode it in the colour attribute as a value you know you would never pass as a real colour.
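For what it’s worth, here’s a minimal sketch of that uniform-flag idea in old-style GLSL, wrapped as a C string (the names u_tex, u_constColor and u_useTexture are just placeholders I made up):

```c
/* Fragment shader for the uniform-flag approach, as a C string.
 * u_useTexture is set from the app with glUniform1i(). */
static const char *flag_frag_src =
    "uniform sampler2D u_tex;\n"
    "uniform vec4      u_constColor;\n"
    "uniform bool      u_useTexture;\n"
    "void main() {\n"
    "    gl_FragColor = u_useTexture\n"
    "        ? texture2D(u_tex, gl_TexCoord[0].st)\n"
    "        : u_constColor;\n"
    "}\n";
```

CPU-side you then just flip u_useTexture with glUniform1i() whenever the material changes.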

Personally I’d have two shaders if it’s a simple 1 or 2 texture unit option.

Yes, but keep in mind that doing “gets” from the driver/GPU is a great way to kill performance, so it’s better to cache this in your app. Nevertheless (quick sketch after the list):

  • glGetIntegerv(GL_ACTIVE_TEXTURE)
  • glGetIntegerv(GL_TEXTURE_BINDING_2D) # or 1D or 3D or…
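In C that boils down to something like this (untested sketch; note that GL_TEXTURE_BINDING_2D reports the binding of whichever unit is currently active):

```c
#include <GL/gl.h>

/* Returns nonzero if the currently active texture unit has a 2D
 * texture bound.  Per the advice above, call this rarely and cache
 * the answer app-side; don't query the driver every frame. */
int active_unit_has_2d_texture(void)
{
    GLint bound = 0;
    glGetIntegerv(GL_TEXTURE_BINDING_2D, &bound);  /* 0 == nothing bound */
    return bound != 0;
}
```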

I presumed the OP was asking about checking within a shader… Perhaps I was wrong.
But, yes, of course you are correct that it can be done CPU side that way.

Thanks Dark Photon and scratt for responding.

Sorry I didn’t make myself clear: this is checking from within a GLSL shader.

Currently I’m using another uniform to tell the fragment shader whether or not to sample the texture. I just thought there might be a better way to check the sampler directly, which would remove the need for a separate uniform.

.rex

It’s still an ‘if’ instruction either way, though.
I’d suggest the following scheme (a sketch follows the list):

- Have a main shader object containing calls to some undefined but pre-declared function, e.g. ‘get_texture(vec2 tc)’.

- Have two shader objects that implement get_texture(vec2 tc): the first samples a texture, while the second returns some (uniform) color.

- Attach the main shader object plus one of the two helper objects before linking, depending on the material you want to render.
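Roughly like this, in sketch form (compile_shader() is a trivial helper of mine, all error checking is omitted, and a GL 2.0+ context with a loader is assumed):

```c
#include <GL/glew.h>  /* or whatever loader you use for GL 2.0 entry points */

/* Helper: compile one shader object (error checking omitted). */
static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint sh = glCreateShader(type);
    glShaderSource(sh, 1, &src, NULL);
    glCompileShader(sh);
    return sh;
}

/* Main shader object: declares get_texture() but does not define it. */
static const char *main_src =
    "vec4 get_texture(vec2 tc);\n"
    "void main() { gl_FragColor = get_texture(gl_TexCoord[0].st); }\n";

/* Implementation A: samples a texture. */
static const char *textured_src =
    "uniform sampler2D tex;\n"
    "vec4 get_texture(vec2 tc) { return texture2D(tex, tc); }\n";

/* Implementation B: returns a constant (uniform) color. */
static const char *flat_src =
    "uniform vec4 color;\n"
    "vec4 get_texture(vec2 tc) { return color; }\n";

/* Build a program from the main object plus one implementation,
 * chosen per material. */
GLuint build_program(int use_texture)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile_shader(GL_FRAGMENT_SHADER, main_src));
    glAttachShader(prog, compile_shader(GL_FRAGMENT_SHADER,
                       use_texture ? textured_src : flat_src));
    glLinkProgram(prog);
    return prog;
}
```

The trick is that the main object only declares get_texture(); the linker resolves the call against whichever implementation object you attached.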

Thanks, DmitryM.

That’s a nice scheme. Do you know whether linking shaders multiple times per frame takes a long time? I’m used to minimising binds (which we already know to reduce via state sorting), but I haven’t profiled shader link times; I suspect the driver has to do quite a bit of work there. I’m trying to understand the trade-offs from people who’ve tried this.

.rex

DmitryM may know something I don’t, but given the small memory footprint a shader takes, why not just have one or two shaders that implement the different modes you need and enable them selectively CPU-side? They could still be stored as “shader modules” (in separate text files) and put together procedurally when you set things up…
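i.e. something along these lines at init time (rough sketch; read_file() is a stand-in for whatever text loading you already have, and I’ve ignored buffer sizing niceties):

```c
#include <string.h>

/* Hypothetical module assembly at init time: concatenate text
 * "modules" into one complete source, then compile it once.
 * read_file() is assumed to return a NUL-terminated string. */
extern const char *read_file(const char *path);

void build_source(char *out, size_t cap, int use_texture)
{
    out[0] = '\0';
    strncat(out, read_file("common.glsl"), cap - 1);
    strncat(out, read_file(use_texture ? "get_texture_sampled.glsl"
                                       : "get_texture_flat.glsl"),
            cap - strlen(out) - 1);
    strncat(out, read_file("main.glsl"), cap - strlen(out) - 1);
    /* ...then the usual glShaderSource / glCompileShader /
       glLinkProgram, all at initialisation, never per frame. */
}
```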

Personally, I try to do things in this order (pseudo-logic off the top of my head, for illustration purposes only):

  1. Limit any “construction work” to initialisation. (Compile, Link, Create etc.)
  2. Strip as much logic as possible from any shader operation. (ifs, loops etc.)
  3. Reduce bindings as much as possible. (re: Textures, Shaders etc. etc.)

If you are linking shaders, or doing logic in them, that falls under steps 1 and 2.
With my suggestion above you are at step 3 straight away. :slight_smile:

That just seems more efficient to me for a very small trade-off in storage space. I am not saying DmitryM’s solution isn’t appealing, or even aesthetically more pleasing and perhaps more elegant… but I think it has penalties.

I didn’t say that shaders should be linked each frame. In my engine a shader is linked only once, the first time something is rendered with the current technique:
http://code.google.com/p/kri/wiki/RenderPipeline
Multiple shaders are stored per material, one for each technique. When a shader is constructed, it attaches the objects implementing the material’s behaviour (get_texture, get_bump, get_parallax_offset, etc.) according to what the material has (depth texture/color, normal map, etc.).
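For illustration, the link-on-first-use caching might look roughly like this (Material, MAX_TECHNIQUES and build_program_for() are placeholders of mine, not the actual kri code):

```c
#include <GL/gl.h>

#define MAX_TECHNIQUES 8

/* Each material caches one linked program per technique;
 * 0 means "not linked yet". */
typedef struct Material {
    GLuint programs[MAX_TECHNIQUES];
} Material;

extern GLuint build_program_for(Material *m, int technique); /* placeholder */

GLuint get_program(Material *m, int technique)
{
    if (m->programs[technique] == 0)           /* first use only */
        m->programs[technique] = build_program_for(m, technique);
    return m->programs[technique];
}
```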

This system of small shader objects implementing ‘virtual’ calls for the big ones gives you more freedom and flexibility than a fixed set of complete shader programs.

It’s a possible solution, but I don’t see any advantages. That way you’ll have all the shader code recompiled each time a shader is constructed, which will result in much longer compile times.
GLSL has had support for linking multiple objects from the start, and it can work effectively if used properly.

The advantage is basically speed. Compile things up front or take the perf hit on the fly. You have precious few options in GLSL currently (though rumour has it that this may change at some point in the future w.r.t. offline compilation…).

I don’t get you. Where’s the performance hit? On the first material usage? That’s not per frame, it’s just once.

don’t listen to me, DmitryM. I’m just a crazy old fool. I don’t understand myself half the time.

[quote=“DmitryM”]
I don’t get you. Where’s the performance hit? On the first material usage? That’s not per frame, it’s just once.
[/QUOTE]

Well, “just once” might be one time too many… Some applications absolutely cannot afford the performance glitch (up to 100s of ms) caused by linking a new shader. Even when the application domain allows it, I’m confident the user would rather not have the application glitch each time he uses a new shader combination that has to be compiled.

Applications of that type can simulate rendering up front, forcing the creation of the new shaders before a new material first appears in the scene.
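Something like this warm-up pass at load time, say (all the names here are placeholders for your own engine code, not a real API):

```c
extern void bind_material(int i);        /* placeholder: your engine */
extern void draw_dummy_triangle(void);   /* placeholder: your engine */

/* Warm-up sketch: draw one throwaway primitive with each material so
 * the driver compiles and links every shader combination up front. */
void warm_up_shaders(int material_count)
{
    for (int i = 0; i < material_count; ++i) {
        bind_material(i);        /* first use forces compile + link */
        draw_dummy_triangle();   /* e.g. scissored away or off-screen */
    }
}
```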

Moreover, since I suggest storing shader objects that implement particular routines (and are already compiled), a new shader only needs to compile a little root shader object and link the result; that’s cheaper than compiling one big shader anyway.

[quote=“DmitryM”]
Moreover, since I suggest storing shader objects that implement particular routines (and are already compiled), a new shader only needs to compile a little root shader object and link the result; that’s cheaper than compiling one big shader anyway.
[/QUOTE]

No implementation of OpenGL works that way. Most of them recompile the shaders during the link process. Or at the very least, the shader compile step just creates a parse tree; it doesn’t do any of the serious optimisation work.

This seems to be the main motivation behind the input layout object in DX10: to tie up the single remaining “variable” in the program, its input. As long as each shader can be fully compiled in isolation, based on known inputs and outputs, the bulk of the heavy lifting can be done in advance and the pieces fit together like Lincoln Logs. But without advance knowledge of the input at the VS end, it’s all for bupkis (or so it seems).

To Alfonse Reinheart:
I’ve made a small test, and your point is confirmed:
linking several pieces actually takes the same time as compiling & linking one big piece of the same code (25 ms on a Radeon 2400, Catalyst 9.5).
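For anyone who wants to repeat the test, the measurement can be as simple as this sketch (reading GL_LINK_STATUS afterwards makes sure the driver has actually finished the link before the clock stops):

```c
#include <stdio.h>
#include <time.h>
#include <GL/glew.h>  /* or your GL 2.0 loader */

/* Time one link; querying the link status forces the work to finish. */
void time_link(GLuint prog)
{
    GLint ok = 0;
    clock_t t0 = clock();
    glLinkProgram(prog);
    glGetProgramiv(prog, GL_LINK_STATUS, &ok);
    clock_t t1 = clock();
    printf("link %s: %.1f ms\n", ok ? "ok" : "FAILED",
           1000.0 * (double)(t1 - t0) / CLOCKS_PER_SEC);
}
```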
