we are currently running into the GLSL sampler/texture limit per stage which is defined by MAX_TEXTURE_IMAGE_UNITS. This is currently 32 for me, but I am wondering if there is any way to sample more textures.
I already checked into the sampler objects (http://www.opengl.org/registry/specs/ARB/sampler_objects.txt), which allow you to nicely separate the sampler state from the texture objects, but there isn’t any counterpart for this on the GLSL side, so I don’t think this will help us increase the limit.
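For reference, the C-side usage looks roughly like this (a sketch; it assumes a valid GL 3.3+ context and a texture already bound to unit 0):

```c
// Sketch: sampler objects separate sampling state from texture data.
// Assumes a valid OpenGL 3.3+ context and a texture bound to unit 0.
GLuint sampler;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_REPEAT);

// The sampler overrides the bound texture's own sampling state on
// unit 0, but GLSL still sees a single combined sampler2D uniform,
// so this does not raise the texture unit limit.
glBindSampler(0, sampler);
```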
I know on D3D11 there is a split between samplers and textures in HLSL too, where you can have up to 16 samplers per stage and up to 128 textures. Textures are sampled by specifying which sampler to use. I guess nothing like this is available in OpenGL?
I also checked into the NV extension NV_bindless_texture (http://developer.download.nvidia.com/opengl/specs/GL_NV_bindless_texture.txt), but sadly this only works on NVIDIA Kepler hardware.
I already checked into the sampler objects
To my knowledge, samplers can be bound to texture units, which effectively changes the state used in texture look-ups on the texture bound to that unit and target, and a shader does not discriminate between sampler and texture objects.
This is currently 32 for me, but I am wondering if there is any way to sample more textures.
There are plenty of ways to circumvent the texture unit limit. First, the tried and true texture atlas, which combines multiple smaller textures into a single large one; accessing an individual texture is then done using offsets in s and t. Second, you can use texture arrays, which increase the number of textures per unit to MAX_ARRAY_TEXTURE_LAYERS. This is more convenient than a texture atlas since you only need an index and can sample the returned texture like any other. Third, do multiple passes, where each pass uses the necessary number of units as long as unit < MAX_COMBINED_TEXTURE_IMAGE_UNITS.
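For the texture array option, the GLSL side stays almost identical to a plain sampler2D; only the sampler type and the lookup coordinate change. A minimal sketch (the uniform names and layer index are hypothetical):

```glsl
#version 330 core

// One texture unit now exposes many layers instead of one texture.
uniform sampler2DArray materials;   // hypothetical name
uniform int layerIndex;             // which layer to sample

in vec2 uv;
out vec4 fragColor;

void main()
{
    // The third coordinate selects the array layer; wrapping and
    // mipmapping work per layer, unlike with a texture atlas.
    fragColor = texture(materials, vec3(uv, float(layerIndex)));
}
```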
Thank you for the reply. I know about the ways to work around the limitation, but all of them have issues:
- Texture atlas: hard to do proper texture wrapping without making the shader expensive, and mipmaps often look bad. The data also has to be modified for this.
- Texture arrays: this looks like the best possibility, but it has an annoying memory layout which is not very efficient for streaming data.
- Multi-pass rendering: not easily possible for every technique because of the limited blending options. How would you properly blend tessellated or POM materials, for example? It also adds extra overhead on the code side.
Just having more texture samplers in a shader is easier, but if that’s not possible then it will have to be texture arrays, I guess.
Just having more texture samplers in a shader is easier
There is no way to increase GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS yourself. It’s implementation-dependent.
we are currently running into the GLSL sampler/texture limit per stage which is defined by MAX_TEXTURE_IMAGE_UNITS
Wrong. GL_MAX_TEXTURE_IMAGE_UNITS gives the limit for the fragment shader stage only. Each shader stage has a separate query. GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS is for the vertex shader. And so forth.
Sure, on modern hardware, these are almost always the same. But OpenGL does not require this to be so.
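Each limit can be queried separately, along these lines (a sketch assuming a current GL context):

```c
// Each shader stage has its own query; the combined limit caps the
// total number of units usable across all stages at once.
GLint fragUnits, vertUnits, combinedUnits;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragUnits);         // fragment stage
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertUnits);  // vertex stage
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combinedUnits);
```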
What do you mean here? Texture arrays don’t have any “annoying memory layout”. They are simply arrays of simple textures. The only restriction is that all layers must have the same size, internal format and mipmap level count.
What prevents you from streaming data efficiently to texture arrays?
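Updating a single layer of a texture array streams just like a 2D texture. A sketch (the texture name, sizes and data pointer are placeholders):

```c
// Allocate immutable storage once (GL 4.2+; use glTexImage3D on older GL):
// 256x256 layers, 32 layers, full mip chain count in mipLevels.
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipLevels, GL_RGBA8, 256, 256, 32);

// ...then stream one layer at a time: zoffset selects the layer,
// depth = 1 restricts the upload to that single layer.
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0 /* mip */, 0, 0, layer,
                256, 256, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```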