Can only bind to GL_TEXTURE0 on Windows (Linux works fine)

Elo, I have a problem with my game engine's GPU backend. On Windows, only the GL_TEXTURE0 texture unit is bound; if I bind to any other texture unit, the sampler returns black. It works fine on Linux, so is this a bug in the Windows OpenGL driver? I have tried using glBindTextureUnit instead of glBindTexture, but there's no difference.

Demonstration of the bug: xng/main.cpp at master · vetux/xng · GitHub
Where i bind the textures: xng/oglrenderpass.hpp at 951b85e598f04090a7dff84574befe2247c0974f · vetux/xng · GitHub

No, this is almost certainly a bug in your code.

Also, there is no one “Windows OpenGL driver”. There are multiple drivers, typically one (or more) per GPU vendor.

Which GPU and GPU driver vendor/version are you running on?
Dump the GL_RENDERER and GL_VERSION strings you’re getting after creating and binding your GL context.
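For reference, a minimal sketch of dumping those strings right after context creation and function loading (this must run with a current GL context; the surrounding setup is assumed):

```cpp
// After glfwMakeContextCurrent() and loading GL function pointers:
printf("GL_VENDOR:   %s\n", (const char *)glGetString(GL_VENDOR));
printf("GL_RENDERER: %s\n", (const char *)glGetString(GL_RENDERER));
printf("GL_VERSION:  %s\n", (const char *)glGetString(GL_VERSION));
```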

Browsing your code, my suggestion would be to dig down inside all of this abstraction code you’re using during setup and look at what underlying WGL and GL calls you’re actually making to create and set up your window, GL context, and GL context state when running on Windows. You’re probably missing something.

Also, because it’s a very common mistake for new GL devs, and it’d explain the behavior you’re seeing, show the GL code you’re using to set which texture unit is referenced by each shader sampler uniform (glUniform1i() should sound familiar here). My guess is, you’re missing this. So you’re only getting the default of 0.
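For context, that classic pattern looks something like this (a sketch only; the uniform name and variables are illustrative, not from your code):

```cpp
// Tell the sampler uniform which texture unit to read from.
// Without this (or a layout(binding=N) qualifier in the GLSL),
// every sampler defaults to texture unit 0.
GLint loc = glGetUniformLocation(program, "uDiffuseTex"); // hypothetical name
glUseProgram(program);
glUniform1i(loc, 1);                  // sampler reads from texture unit 1

glActiveTexture(GL_TEXTURE0 + 1);     // then bind the texture to that unit
glBindTexture(GL_TEXTURE_2D, tex);
```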

I am using layout(binding = X) to assign the texture units to the samplers and the shader buffer bindings. I don’t use glUniform at all because it is very slow. I am using GLFW to set up the OpenGL context and window, which you can see here. The glGetString values are:
GL_VENDOR: ATI Technologies Inc.
GL_RENDERER: Radeon RX 580 Series
GL_VERSION: 4.6.14761 Core Profile/Debug Context 30.0.13023.4001

And as I have said, on Linux with Mesa and amdgpu all texture units work without any issues, so I suspect there is a problem with this OpenGL driver. The layout(binding = X) seems to be working, because I am getting different outputs from the samplers: black for anything not bound to 0, and the texture if it’s bound to texture unit 0. So it must be a problem with glBindTexture not connecting the textures with units other than GL_TEXTURE0.
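For reference, the binding-qualifier approach described above looks roughly like this in GLSL (requires GL 4.2+ or ARB_shading_language_420pack; the uniform names are illustrative):

```glsl
#version 460 core
// Explicit texture-unit bindings; no glUniform1i call needed.
layout(binding = 0) uniform sampler2D albedo;        // texture unit 0
layout(binding = 1) uniform sampler2DArray layers;   // texture unit 1
```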

I assume you mean shader “sampler” bindings not buffer bindings. If so, that should work as well.

You might double-check that you’re declaring the correct sampler types (e.g. sampler2DArray, sampler2DArrayShadow, etc.) in those sampler bindings for texunits >=1, but you probably are.

You also might run this through an OpenGL debugger that can examine the GL API calls made and the GL context state at each draw call … just to make sure you really have the textures you think are bound, bound to the correct texture units with the correct types, and that your GLSL shader code properly references them.

Yeah, not that old (5 years). Hard to envision this being an AMD driver bug.

From your source link:

I haven’t traced all your code, but doubtless you have and can do so faster than I. So…

One thing you might try is setting your min filter to GL_NEAREST, thinking about the case where you don’t have MIPs. You could also set BASE_LEVEL and MAX_LEVEL to force-constrain texture sampling to only MIP 0. Or (even better), just pull texels from this 2D texture array using texelFetch(). Then you can be sure there’s nothing funny going on with texture filtering.

texelFetch() also has the advantage that you can verify your 2D texture array doesn’t just contain black texels in slice 0. If it wasn’t populated correctly, that could be the cause of your “black on Windows” problem.
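A sketch of both suggestions, assuming the texture is a GL_TEXTURE_2D_ARRAY (variable names are illustrative):

```cpp
// Rule out filtering/MIP issues: sample only MIP level 0, no filtering.
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, 0);
```

And on the shader side:

```glsl
// Fetch an exact texel from a given slice: no filtering, no MIPs involved.
vec4 c = texelFetch(layerTex, ivec3(x, y, layer), 0);
```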

Also a useful test: Swap these 2 texture bindings: Put the 2D texarray on texunit 0, and then see if it works or if you still get black. If you get black, then it would suggest the problem isn’t the texture unit, but the texture itself or the method used to sample from the texture.
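That swap test would look something like this (a sketch; texture names are illustrative):

```cpp
// Put the 2D texture array on unit 0 and the other texture on unit 1,
// then check whether the array still samples black.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, otherTex);
```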

Also, just so that you are aware: texture2D is a reserved keyword for the Vulkan flavor of GLSL, and it’s also the name of a deprecated function for accessing textures. It may be best to change those names, even if your compiler doesn’t complain.

That’s a good point.

And on the “if i bind to any other texture unit [besides GL_TEXTURE0] the sampler returns black” problem, in combination with the above GLSL snippet…

texture2DArray is the name of a GLSL function defined in the old EXT_texture_array extension. So, same suggestion there: rename.
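i.e., something like this (assuming those were used as the uniform names, as the quoted snippet suggests):

```glsl
// Before: collides with legacy/reserved GLSL names
// uniform sampler2D texture2D;
// uniform sampler2DArray texture2DArray;

// After: unambiguous names
uniform sampler2D colorTex;
uniform sampler2DArray layerTex;
```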

Who knows what the AMD driver’s GLSL compiler is doing with those old GLSL symbols. If you can dump the generated ASM, you might see what it’s doing with them.

AMD released a new OpenGL driver for Windows last year. There are multiple bugs reported on Reddit, etc. (and fixed by AMD) to this day, so it may be the driver’s fault.
Best for you is to compare the 22.6.1 driver (the last one with the old binaries) with the newest one (23.4.3).
If it works on 22.6.1, then it is a bug in the new driver.

FYI: the new OpenGL driver was introduced in 22.7.1.

Oh, I just noticed you have drivers from 2021… :confused:

This appears to be a bug with texture bindings in the AMD driver (which is the driver Microsoft provides by default; what a coincidence that it has a broken OpenGL implementation). I have now updated my graphics drivers, and the texture bindings work as expected.

In the screenshot below you can see that RenderDoc complains about binding conflicts with the old driver, but with the new driver the texture bindings are there as expected.