Separate sampler state from texture state

Copy texture data?? Why would that ever be required? You can change the texture’s filtering mode/params after creation, you know…
Just create the mipchain, then set filtering the way you want it for the current material. It takes a few nanoseconds. In rare cases you can even upload only one mip and set the max_level param - that way the texture stays valid even for GL_LINEAR_MIPMAP_LINEAR.

Of course, you would also multiply texcoords via textureSize() there.
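For illustration, here is a minimal sketch of that shader-side approach (uniform and variable names are illustrative): scaling the normalized coordinates by textureSize() and fetching texels directly, which bypasses the texture object's filter state entirely:

```glsl
#version 130
// Sketch: emulate GL_NEAREST in the shader regardless of the texture's
// filter params, by converting normalized coords to integer texel coords.
uniform sampler2D tex;
in vec2 uv;
out vec4 color;

void main()
{
    ivec2 size  = textureSize(tex, 0);    // dimensions of mip level 0
    ivec2 texel = ivec2(uv * vec2(size)); // normalized -> texel coords
    color = texelFetch(tex, texel, 0);    // ignores the bound filter state
}
```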

You can change the texture’s filtering mode/params after creation, you know…

Yes. But that changes it for that texture. Thus it is not possible to bind the same texture with different filtering modes.

Also, changing the filtering params involves state changes, which are not the best use of one’s performance. Especially if that texture is also rendered with elsewhere in the same frame.

>> Thus it is not possible to bind the same texture with different filtering modes.
Completely agreed. Though, I generally can’t fathom common use patterns that would break when done the shader way. Aniso + bilinear + nearest can easily coexist inside the shader. The unsupportable patterns are:

  • Per-axis different filtering // I don’t see artists doing it
  • aniso+trilinear // why ever

>> changing the filtering params involves state changes, which are not the best use of one’s performance.
IME, you’d be amazed how fast state changes are on modern cards and drivers.

P.S. I measured how fast: when I change filtering modes on every texture bind (300+ textures, a non-synthetic benchmark), the two calls to glTexParameteri(…, GL_TEXTURE_MIN/MAG_FILTER) take less than 57 cycles in total. That’s 15 nanoseconds on my PC.

By the way, I’m only “opposing” it so that we as a community discuss this down to the finest detail and give everyone a chance to mention which patterns-of-use are really important to them, while filtering out the patterns that can easily be made possible and performant with the already existing specs.
I really like the current performance and caps, so imho risking destabilizing them should be done only for a valid purpose. The reason should never be the premise of false progress, vague ideas and misinformation.

First, GL objects are merely a way to organize and encapsulate state; they have nothing to do with how the hardware works…

Direct state access is a nice-to-have feature, but it will further complicate GL implementations if they have to support both direct and indirect paths, thanks to deprecation mode. :slight_smile:

Don’t care about OpenGL implementations. It’s the IHVs’ responsibility to hire experienced enough people to implement drivers efficiently and robustly. Most issues with OpenGL implementations are due to poor testing. FWIW, the ARB would not incorporate something into OpenGL that cannot be done easily. So far, even with a plethora of extensions, all implementations can more or less keep pace with the standard. Even the open-source Mesa driver architecture is getting some GL3.x features, and others (e.g. geometry shaders) are being worked on - all done by just a handful of people scattered around the world.

Why would someone need to change sampler state in a shader? What DX10 offers is great flexibility but I don’t need it.

I agree with what the OP posted, because once someone needed to sample the same texture (it was a depth texture) twice in the same shader: once with COMPARE_MODE on, and once with COMPARE_MODE set to GL_NONE because he wanted the actual depth values.
I don’t remember what he intended to do with it.
Seemed like a legit need.
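Assuming that scenario, the shader side would want something like the following, which today forces two separate texture objects (and hence a copy of the depth data), one per COMPARE_MODE setting; the uniform names are illustrative:

```glsl
#version 130
// Two views of the *same* depth data, currently requiring two texture objects:
uniform sampler2DShadow shadowMap; // COMPARE_MODE = GL_COMPARE_R_TO_TEXTURE
uniform sampler2D       depthMap;  // COMPARE_MODE = GL_NONE (raw depth values)
in vec3 shadowCoord;
out vec4 color;

void main()
{
    float lit   = texture(shadowMap, shadowCoord);     // comparison result
    float depth = texture(depthMap, shadowCoord.xy).r; // actual stored depth
    color = vec4(lit, depth, 0.0, 1.0);
}
```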

Even the Mesa open source driver architecture is getting some of GL3.x features and others are being worked on (e.g. geometry shaders), all done just by a handful of people scattered around the world.

The features we ask for are simple, but since codebases like NVIDIA’s or ATI’s drivers are so heavy, small changes become a big job. Yes, I know - some NVIDIA guy came here and said he doesn’t care, and that NVIDIA can handle just about anything we can throw at them.

However, we won’t need a separate sampler state object if it’s possible to do the same thing in a shader. In that case, deprecating the glTexParameter* functions is sufficient.

The idea is to allow per-fragment sampler changes; there is a real benefit in being able to select the aniso level or filtering method based on depth, importance, visibility, speed, and where on the screen the fragment is.

Like, if you were to add a post-process blur to a frame, would you need anything other than GL_NEAREST for the objects that are blurred the most?

FWIW the ARB would not incorporate something into OpenGL that cannot be done easily.

GLSL. Nothing about this is easy.

Need-wise

Where I expect such a feature to be really great is for depth of field and other blurry effects!

Sharp area: ANISO 16X
Blurred area: maybe down to nearest!

In a single pass, according to how sharp a fragment is supposed to be, select the right sampling in the fragment shader.

With transparency / fog, it could be great as well…
A scenario, for example: a terrain with parts underwater. The water is transparent, but when it’s deep enough we don’t see the terrain anymore:
For terrain fragments above the water plane: trilinear, aniso 16x.
For terrain fragments below the water plane: trilinear / bilinear.

For fog, the anisotropic filtering level could be progressive.
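If per-fragment sampler selection existed, the underwater scenario might be sketched roughly like this (the filter2D type and the textureFilter function are hypothetical, and all names are illustrative):

```glsl
// Hypothetical: pick the expensive filter only above the water plane.
uniform sampler2D terrainTex;
uniform filter2D  anisoFilter;  // trilinear + aniso 16x
uniform filter2D  cheapFilter;  // bilinear
uniform float     waterLevel;
in vec3 worldPos;
in vec2 uv;
out vec4 color;

void main()
{
    color = (worldPos.y > waterLevel)
          ? textureFilter(terrainTex, anisoFilter, uv)
          : textureFilter(terrainTex, cheapFilter, uv);
}
```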

“Programmable filtering” could be the future: texelFetch and textureQueryLOD would make it possible, though still not as efficiently as fixed-pipeline filtering. Plus it requires D3D10.1-level hardware, whereas I guess a sampler object would work down to D3D9-level hardware (GeForce 5 / Radeon 9***).
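As a rough idea of what “programmable filtering” implies, a manual bilinear lookup over a single mip level could look like this (illustrative sketch only; a real replacement would also need the LOD selection and wrapping that the fixed pipeline provides):

```glsl
// Manual bilinear filtering with texelFetch: single mip level, no wrapping.
vec4 bilinear(sampler2D tex, vec2 uv)
{
    vec2  size = vec2(textureSize(tex, 0));
    vec2  st   = uv * size - 0.5;           // texel-space position
    ivec2 i    = ivec2(floor(st));          // top-left texel of the footprint
    vec2  f    = fract(st);                 // interpolation weights

    vec4 t00 = texelFetch(tex, i,               0);
    vec4 t10 = texelFetch(tex, i + ivec2(1, 0), 0);
    vec4 t01 = texelFetch(tex, i + ivec2(0, 1), 0);
    vec4 t11 = texelFetch(tex, i + ivec2(1, 1), 0);

    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}
```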

For reflection / refraction / mirrors / environment maps, it would be interesting to study the idea of having a single image and being able to switch between samplers.

One other big topic with sampler objects is texture atlases. Being able to use different filtering for different parts of the texture would be nice, even if I don’t really know how this should be done wisely. But typically: a character texture with the hair, the skin, the cloth (and the various materials on the cloth!).

Well, and so on and so on!

API-wise

Do you guys think that glUniform1i(Uniform, 0) is really required for some reason? And why? When I see the binding slots of the uniform block API, I think that maybe it is, after all.

Instead of “sampler” I will use “filter”, because of the GLSL “sampler” types.

Hop, just for fun: the API of a GL_COMMUNITY_filter_object extension: :smiley:

Following the uniform block API, it gives us:


glBindFilter(0, FilterName)
glUniformFilter(ProgramName, Uniform, 0); 

The filter object will include:
GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER, GL_TEXTURE_MIN_LOD, GL_TEXTURE_MAX_LOD, GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_TEXTURE_WRAP_R, GL_TEXTURE_BORDER_COLOR

And the following could stay exclusively in glTexParameter.
GL_TEXTURE_BASE_LEVEL, GL_TEXTURE_MAX_LEVEL, GL_TEXTURE_COMPARE_MODE, GL_TEXTURE_COMPARE_FUNC, GL_DEPTH_TEXTURE_MODE, and GL_GENERATE_MIPMAP.

This filter object may just be an “alternative”: we could still use the glTexParameter way by default, but also use custom samplers on top of it. That would ease the feature’s integration and its interaction with other features - no change to the framebuffer object API or to any feature using textures.

On OpenGL API side it could look like this:


void GenFilters(sizei n, uint *filters)
void DeleteFilters(sizei n, const uint *filters)
void BindFilter(enum target, uint index, uint filter)
void UniformFilter(uint program, uint filterIndex, uint uniformFilterBinding) 
void FilterParameterf(enum target, enum pname, float param)
void FilterParameteri(enum target, enum pname, int param)

with pname being:


GL_TEXTURE_MIN_FILTER, GL_TEXTURE_MAG_FILTER, GL_TEXTURE_MIN_LOD, GL_TEXTURE_MAX_LOD, GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, GL_TEXTURE_WRAP_R, GL_TEXTURE_BORDER_COLOR.

On the GLSL side it’s more complicated, but still:

I’m actually not sure whether or not the filter*Shadow types really make sense… Actually, now I believe they don’t.

New types:


filter1D, filter2D, filter3D, 
filter2DRect, filterCube
filter1DShadow, filter2DShadow, filter2DRectShadow, filterCubeShadow 
filter1DArray, filter2DArray, filter1DArrayShadow, filter2DArrayShadow, 

New functions:


gvec4 textureFilter (gsampler1D sampler, filter1D filter, float P [, float bias] )
gvec4 textureFilter (gsampler2D sampler, filter2D filter, vec2 P [, float bias] )
gvec4 textureFilter (gsampler3D sampler, filter3D filter, vec3 P [, float bias] )
gvec4 textureFilter (gsamplerCube sampler, filterCube filter, vec3 P [, float bias] )
float textureFilter (sampler1DShadow sampler, filter1DShadow filter, vec3 P [, float bias] )
float textureFilter (sampler2DShadow sampler, filter2DShadow filter, vec3 P [, float bias] )
float textureFilter (samplerCubeShadow sampler, filterCubeShadow filter, vec4 P [, float bias] )
gvec4 textureFilter (gsampler1DArray sampler, filter1DArray filter, vec2 P [, float bias] )
gvec4 textureFilter (gsampler2DArray sampler, filter2DArray filter, vec3 P [, float bias] )
float textureFilter (sampler1DArrayShadow sampler, filter1DArrayShadow filter, vec3 P [, float bias] )
float textureFilter (sampler2DArrayShadow sampler, filter2DArrayShadow filter, vec4 P)
gvec4 textureFilter (gsampler2DRect sampler, filter2DRect filter, vec2 P)
float textureFilter (sampler2DRectShadow sampler, filter2DRectShadow filter, vec3 P)
gvec4 textureFilterProj (gsampler1D sampler, filter1D filter, vec2 P [, float bias] )
gvec4 textureFilterProj (gsampler1D sampler, filter1D filter, vec4 P [, float bias] )
gvec4 textureFilterProj (gsampler2D sampler, filter2D filter, vec3 P [, float bias] )
gvec4 textureFilterProj (gsampler2D sampler, filter2D filter, vec4 P [, float bias] )
gvec4 textureFilterProj (gsampler3D sampler, filter3D filter, vec4 P [, float bias] )
float textureFilterProj (sampler1DShadow sampler, filter1DShadow filter,  vec4 P [, float bias] )
float textureFilterProj (sampler2DShadow sampler, filter2DShadow filter, vec4 P [, float bias] )
gvec4 textureFilterProj (gsampler2DRect sampler, filter2DRect filter, vec3 P)
gvec4 textureFilterProj (gsampler2DRect sampler, filter2DRect filter, vec4 P)
float textureFilterProj (sampler2DRectShadow sampler, filter2DRectShadow filter, vec4 P)

gvec4 textureFilterLod (gsampler1D sampler, filter1D filter, float P, float lod)
gvec4 textureFilterLod (gsampler2D sampler, filter2D filter, vec2 P, float lod)
gvec4 textureFilterLod (gsampler3D sampler, filter3D filter, vec3 P, float lod)
gvec4 textureFilterLod (gsamplerCube sampler, filterCube filter, vec3 P, float lod)
float textureFilterLod (sampler1DShadow sampler, filter1DShadow filter,  vec3 P, float lod)
float textureFilterLod (sampler2DShadow sampler, filter2DShadow filter, vec3 P, float lod)
gvec4 textureFilterLod (gsampler1DArray sampler, filter1DArray filter, vec2 P, float lod)
gvec4 textureFilterLod (gsampler2DArray sampler, filter2DArray filter, vec3 P, float lod)
float textureFilterLod (sampler1DArrayShadow sampler, filter1DArrayShadow filter,  vec3 P,
float lod)

gvec4 textureFilterOffset (gsampler1D sampler, filter1D filter, float P, int offset [, float bias] )
gvec4 textureFilterOffset (gsampler2D sampler, filter2D filter, vec2 P, ivec2 offset [, float bias] )
gvec4 textureFilterOffset (gsampler3D sampler, filter3D filter, vec3 P, ivec3 offset [, float bias] )
gvec4 textureFilterOffset (gsampler2DRect sampler, filter2DRect filter, vec2 P, ivec2 offset )
float textureFilterOffset (sampler2DRectShadow sampler, filter2DRectShadow filter, vec3 P, ivec2 offset )
float textureFilterOffset (sampler1DShadow sampler, filter1DShadow filter, vec3 P, int offset [, float bias] )
float textureFilterOffset (sampler2DShadow sampler, filter2DShadow filter, vec3 P, ivec2 offset [, float bias] )
gvec4 textureFilterOffset (gsampler1DArray sampler, filter1DArray filter, vec2 P, int offset [, float bias] )
gvec4 textureFilterOffset (gsampler2DArray sampler, filter2DArray filter, vec3 P, ivec2 offset [, float bias] )
float textureFilterOffset (sampler1DArrayShadow sampler, filter1DArrayShadow filter, vec3 P, int offset [, float bias] )

gvec4 textureFilterProjOffset (gsampler1D sampler, filter1D filter, vec2 P, int offset [, float bias] )
gvec4 textureFilterProjOffset (gsampler1D sampler, filter1D filter, vec4 P, int offset [, float bias] )
gvec4 textureFilterProjOffset (gsampler2D sampler, filter2D filter, vec3 P, ivec2 offset [, float bias] )
gvec4 textureFilterProjOffset (gsampler2D sampler, filter2D filter, vec4 P, ivec2 offset [, float bias] )
gvec4 textureFilterProjOffset (gsampler3D sampler, filter3D filter, vec4 P, ivec3 offset [, float bias] )
gvec4 textureFilterProjOffset (gsampler2DRect sampler, filter2DRect filter, vec3 P, ivec2 offset )
gvec4 textureFilterProjOffset (gsampler2DRect sampler, filter2DRect filter, vec4 P, ivec2 offset )
float textureFilterProjOffset (sampler2DRectShadow sampler, filter2DRectShadow filter, vec4 P, ivec2 offset )
float textureFilterProjOffset (sampler1DShadow sampler, filter1DShadow filter, vec4 P, int offset [, float bias] )
float textureFilterProjOffset (sampler2DShadow sampler, filter2DShadow filter, vec4 P, ivec2 offset [, float bias] )

gvec4 textureFilterLodOffset (gsampler1D sampler, filter1D filter, float P, float lod, int offset)
gvec4 textureFilterLodOffset (gsampler2D sampler, filter2D filter, vec2 P, float lod, ivec2 offset)
gvec4 textureFilterLodOffset (gsampler3D sampler, filter3D filter, vec3 P, float lod, ivec3 offset)
float textureFilterLodOffset (sampler1DShadow sampler, filter1DShadow filter,  vec3 P, float lod, int offset)
float textureFilterLodOffset (sampler2DShadow sampler, filter2DShadow filter, vec3 P, float lod, ivec2 offset)
gvec4 textureFilterLodOffset (gsampler1DArray sampler, filter1DArray filter, vec2 P, float lod, int offset)
gvec4 textureFilterLodOffset (gsampler2DArray sampler, filter2DArray filter, vec3 P, float lod, ivec2 offset)
float textureFilterLodOffset (sampler1DArrayShadow sampler, filter1DArrayShadow filter, vec3 P, float lod, int offset)

gvec4 textureFilterProjLod (gsampler1D sampler, filter1D filter, vec2 P, float lod)
gvec4 textureFilterProjLod (gsampler1D sampler, filter1D filter, vec4 P, float lod)
gvec4 textureFilterProjLod (gsampler2D sampler, filter2D filter, vec3 P, float lod)
gvec4 textureFilterProjLod (gsampler2D sampler, filter2D filter, vec4 P, float lod)
gvec4 textureFilterProjLod (gsampler3D sampler, filter3D filter, vec4 P, float lod)
float textureFilterProjLod (sampler1DShadow sampler, filter1DShadow filter, vec4 P, float lod)
float textureFilterProjLod (sampler2DShadow sampler, filter2DShadow filter, vec4 P, float lod)

gvec4 textureFilterProjLodOffset (gsampler1D sampler, filter1D filter, vec2 P, float lod, int offset)
gvec4 textureFilterProjLodOffset (gsampler1D sampler, filter1D filter, vec4 P, float lod, int offset)
gvec4 textureFilterProjLodOffset (gsampler2D sampler, filter2D filter, vec3 P, float lod, ivec2 offset)
gvec4 textureFilterProjLodOffset (gsampler2D sampler, filter2D filter, vec4 P, float lod, ivec2 offset)
gvec4 textureFilterProjLodOffset (gsampler3D sampler, filter3D filter, vec4 P, float lod, ivec3 offset)
float textureFilterProjLodOffset (sampler1DShadow sampler, filter1DShadow filter, vec4 P, float lod, int offset)
float textureFilterProjLodOffset (sampler2DShadow sampler, filter2DShadow filter, vec4 P, float lod, int offset)

gvec4 textureFilterGrad (gsampler1D sampler, filter1D filter,  float P, float dPdx, float dPdy)
gvec4 textureFilterGrad (gsampler2D sampler, filter2D filter,  vec2 P, vec2 dPdx, vec2 dPdy)
gvec4 textureFilterGrad (gsampler3D sampler, filter3D filter, vec3 P, vec3 dPdx, vec3 dPdy)
gvec4 textureFilterGrad (gsamplerCube sampler, filterCube filter, vec3 P, vec3 dPdx, vec3 dPdy)
gvec4 textureFilterGrad (gsampler2DRect sampler, filter2DRect filter, vec2 P, vec2 dPdx, vec2 dPdy)
float textureFilterGrad (sampler2DRectShadow sampler, filter2DRectShadow filter, vec3 P, vec2 dPdx, vec2 dPdy)
float textureFilterGrad (sampler1DShadow sampler, filter1DShadow filter, vec3 P, float dPdx, float dPdy)
float textureFilterGrad (sampler2DShadow sampler, filter2DShadow filter, vec3 P, vec2 dPdx, vec2 dPdy)
float textureFilterGrad (samplerCubeShadow sampler, filterCubeShadow filter, vec4 P, vec3 dPdx, vec3 dPdy)
gvec4 textureFilterGrad (gsampler1DArray sampler, filter1DArray filter, vec2 P, float dPdx, float dPdy)
gvec4 textureFilterGrad (gsampler2DArray sampler, filter2DArray filter,  vec3 P, vec2 dPdx, vec2 dPdy)
float textureFilterGrad (sampler1DArrayShadow sampler, filter1DArrayShadow filter, vec3 P, float dPdx, float dPdy)
float textureFilterGrad (sampler2DArrayShadow sampler, filter2DArrayShadow filter, vec4 P, vec2 dPdx, vec2 dPdy)

gvec4 textureFilterGradOffset (gsampler1D sampler, filter1D filter, float P, float dPdx, float dPdy, int offset)
gvec4 textureFilterGradOffset (gsampler2D sampler, filter2D filter, vec2 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
gvec4 textureFilterGradOffset (gsampler3D sampler, filter3D filter, vec3 P, vec3 dPdx, vec3 dPdy, ivec3 offset)
gvec4 textureFilterGradOffset (gsampler2DRect sampler, filter2DRect filter, vec2 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
float textureFilterGradOffset (sampler2DRectShadow sampler, filter2DRectShadow filter, vec3 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
float textureFilterGradOffset (sampler1DShadow sampler, filter1DShadow filter, vec3 P, float dPdx, float dPdy, int offset )
float textureFilterGradOffset (sampler2DShadow sampler, filter2DShadow filter, vec3 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
float textureFilterGradOffset (samplerCubeShadow sampler, filterCubeShadow filter, vec4 P, vec3 dPdx, vec3 dPdy, ivec2 offset)
gvec4 textureFilterGradOffset (gsampler1DArray sampler, filter1DArray filter, vec2 P, float dPdx, float dPdy, int offset)
gvec4 textureFilterGradOffset (gsampler2DArray sampler, filter2DArray filter, vec3 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
float textureFilterGradOffset (sampler1DArrayShadow sampler, filter1DArrayShadow filter, vec3 P, float dPdx, float dPdy, int offset)
float textureFilterGradOffset (sampler2DArrayShadow sampler, filter2DArrayShadow filter, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset)

gvec4 textureFilterProjGrad (gsampler1D sampler, filter1D filter, vec2 P, float dPdx, float dPdy)
gvec4 textureFilterProjGrad (gsampler1D sampler, filter1D filter, vec4 P, float dPdx, float dPdy)
gvec4 textureFilterProjGrad (gsampler2D sampler, filter2D filter,  vec3 P, vec2 dPdx, vec2 dPdy)
gvec4 textureFilterProjGrad (gsampler2D sampler, filter2D filter,  vec4 P, vec2 dPdx, vec2 dPdy)
gvec4 textureFilterProjGrad (gsampler3D sampler, filter3D filter,  vec4 P, vec3 dPdx, vec3 dPdy)
gvec4 textureFilterProjGrad (gsampler2DRect sampler, filter2DRect filter, vec3 P, vec2 dPdx, vec2 dPdy)
gvec4 textureFilterProjGrad (gsampler2DRect sampler, filter2DRect filter, vec4 P, vec2 dPdx, vec2 dPdy)
float textureFilterProjGrad (sampler2DRectShadow sampler, filter2DRectShadow filter, vec4 P, vec2 dPdx, vec2 dPdy)
float textureFilterProjGrad (sampler1DShadow sampler, filter1DShadow filter, vec4 P, float dPdx, float dPdy)
float textureFilterProjGrad (sampler2DShadow sampler, filter2DShadow filter, vec4 P, vec2 dPdx, vec2 dPdy)

gvec4 textureFilterProjGradOffset (gsampler1D sampler, filter1D filter, vec2 P, float dPdx, float dPdy, int offset)
gvec4 textureFilterProjGradOffset (gsampler1D sampler, filter1D filter, vec4 P, float dPdx, float dPdy, int offset)
gvec4 textureFilterProjGradOffset (gsampler2D sampler, filter2D filter, vec3 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
gvec4 textureFilterProjGradOffset (gsampler2D sampler, filter2D filter, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
gvec4 textureFilterProjGradOffset (gsampler2DRect sampler, filter2DRect filter, vec3 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
gvec4 textureFilterProjGradOffset (gsampler2DRect sampler, filter2DRect filter, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
float textureFilterProjGradOffset (sampler2DRectShadow sampler, filter2DRectShadow filter, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset)
gvec4 textureFilterProjGradOffset (gsampler3D sampler, filter3D filter, vec4 P, vec3 dPdx, vec3 dPdy, ivec3 offset)
float textureFilterProjGradOffset (sampler1DShadow sampler, filter1DShadow filter, vec4 P, float dPdx, float dPdy, int offset)
float textureFilterProjGradOffset (sampler2DShadow sampler, filter2DShadow filter, vec4 P, vec2 dPdx, vec2 dPdy, ivec2 offset)

@Ilian Dinev

By the way, I’m only “opposing” it, so that we as a community discuss this to finest detail

Is this reply up to your expectations? :smiley:
More comments are obviously welcome!

Yes, keep it up, guys! I’m happy to see a lot of constructive ideas and new important patterns-of-use.

New functions:

This goes so far beyond what’s necessary. Filters should be attached to samplers. You use a function to set a filter on a sampler and you’re done, or the sampler constructor simply takes filter parameters. That’s it.

The multiplicative explosion of texturing functions in GLSL is already huge. We don’t need another dimension of texture access when we can just set it on the sampler itself.

On the OpenGL API side there is no such thing as a sampler… Do you mean a texture object? You attach the filter object to the texture object? How do you expect to attach multiple filters to a single texture object then? And how do you expect to expose these multiple filters on the GLSL side?

If the idea goes back to just creating a sampler/filter object and being allowed to attach a single filter per texture, I think such a feature has no point.

I completely agree that GLSL texture function count explosion is an issue.

I guess I finally see what you mean…


glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, TextureName); 
glBindSampler(GL_TEXTURE_2D, SamplerName0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, TextureName); 
glBindSampler(GL_TEXTURE_2D, SamplerName1);

Would actually do the trick!

However, it doesn’t fix the 32-texture limitation, which would really become an issue with such a feature!

On the OpenGL API side there is no such thing as a sampler…

I’m talking about the GLSL functions. You can simply do this:


sampler2D myTexture1(/* filter params */);
sampler2D myTexture2(/* filter params */);

You don’t need to pass a filter object in to texture functions. All that does is needlessly create more texture functions.

However, it doesn’t fix the 32-texture limitation, which would really become an issue with such a feature!

That’s a different issue. That’s the issue of how you attach a texture object to a program object.

“Params” to the GLSL sampler… nice, I like that idea!
I actually wonder if it would be possible, but it sounds great!

One question that hasn’t been dealt with in this discussion is this: what state is appropriate for the sampler objects and what state is not?

It clearly makes sense for the Mag/Min filter state to be in the sampler. Just as clearly, it does not make sense for the mipmap Base Level/Max Level settings to be there; those are a fundamental part of the texture object itself.

However, what about these:

1: Wrap S/T/R

2: Max/Min LOD (different from the base/max level)

3: Lod Bias

I’m for all of them (filter, wrap, lod) as sampler state.

Samplers are kind of a role (from the view of a shader); the textures are actors that ‘play’ this role. The same texture should be usable differently with each sampler.

LOD settings seem to be a corner case, though. I have not used min/max lod at all in recent years, and lod bias only very rarely. Former extension specs offered a lod bias per texture unit and per texture object (they were added together). Maybe it is useful to keep this(?). On the other hand, we can already ‘emulate’ per-sampler lod bias by passing a bias to the various textureXYZ() functions, so another bias in the sampler itself is not really needed.
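That emulation is just the optional bias argument the existing lookup functions already take; a one-line sketch (names illustrative):

```glsl
// Per-lookup lod bias via the existing bias parameter - no sampler state needed.
vec4 c = texture(tex, uv, 1.5); // bias this lookup's lod by +1.5 (blurrier)
```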

I like the idea of having the possibility to use different samplers on the same texture unit :slight_smile:

I know this can easily be handled with multiple textures, but it’s true that limiting the number of used textures/texture units can be very important.
=> I already encountered the problem when I wanted to blend two YCbCr textures in 4:2:0 format on hardware that could only handle two texture units…
==> now I use only two texture units, and the 4:2:0 and planar formats are handled directly and easily in the shader :slight_smile:

But I have a problem handling something close to the DXT format that uses YUV data (which can and must be interpolated) and index data (which must not be interpolated) in the same (tiled) texture.
=> I have to use 4 texture units for this (only two if I handle one texture, but four if I want to blend two textures together)
==> but I want to use only two texture units for blending two textures :slight_smile:
(cf. using two “logical textures” per “physical texture” seems logical for blending two textures together, but not four “logical textures”… :slight_smile: )

And about wrapping: couldn’t this be easily handled by something like (x % width, y % height) in shaders?
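For normalized coordinates the shader-side equivalent of that modulo is fract(); a minimal sketch (names illustrative), with one caveat worth noting:

```glsl
// Emulating GL_REPEAT in the shader: fract() is the normalized-coordinate
// equivalent of (x % width, y % height) on texel coordinates.
vec4 c = texture(tex, fract(uv));
// Caveat: with bilinear filtering the hardware still reads neighbouring
// texels of the *wrapped* coordinate, so seams can appear at the texture
// edge; the real wrap mode handles this in the addressing unit.
```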

@+
Yannoo