I don’t know what the Wikipedia concept is; it seems to be talking about specific pieces of hardware. I can only explain what OpenGL says.
OpenGL does not define any such thing as a “texture mapping unit”.
The definition of every word is ultimately circular.
What makes you think you have 4 of them?
So what if it violates some arbitrary rule someone invented? Hardware has limitations, and OpenGL is a hardware abstraction. Therefore, it exposes those limitations to the user, so that the user can work around them.
OK, here’s how things stand.
You can either learn it as is, or you can whine and complain on a forum. But you are not changing it. OpenGL has been around for almost twenty-five years now, and for most of those years, the first texture unit has been accessed by the enumerator named “GL_TEXTURE0”. Some guy on a forum is not suddenly going to cause everyone to go, “Hey, he’s right; let’s all rewrite millions of lines of code and change this.”
If the particular names of things truly bother you that much, it’s just a number; you can call it whatever you want. You can even #define sampler2D as something else in GLSL.
But don’t expect the rest of the world to indulge you or change 20+ years of things on your say-so.
“Texture image units” are numbered locations in the OpenGL context to which textures can be bound. In any rendering command, whatever textures are bound to texture image units can be accessed by any shader stage in that rendering operation. Any textures not bound at the time of the rendering command cannot be accessed by the shader.
A texture image unit doesn’t “do” anything. It’s just an element in an array of bound textures. But those array elements are referenced in shaders. The shader says “fetch me this sample from the 2D texture currently in texture image unit 2”. And the 2D texture bound to texture image unit 2 will have a sample fetched from it.
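As a sketch of that (identifier names like `myTexture` and `diffuseTex` are illustrative, and this assumes a live GL context), binding a 2D texture to texture image unit 2 looks like:

```c
/* C side: put a texture object into texture image unit 2.
   `myTexture` is a hypothetical, previously created texture object. */
glActiveTexture(GL_TEXTURE0 + 2);        /* select texture image unit 2 */
glBindTexture(GL_TEXTURE_2D, myTexture); /* bind into that unit */
```

And the shader side that fetches from it:

```glsl
#version 330 core
in vec2 uv;
out vec4 fragColor;

// Reads from whichever texture image unit this sampler's value names.
uniform sampler2D diffuseTex;

void main() {
    fragColor = texture(diffuseTex, uv); // fetch from the bound texture
}
```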
The GLSL sampler type is simply a placeholder. [It’s an opaque type](https://www.opengl.org/wiki/Opaque_Type) that represents a resource that exists outside of the shader. It does not have a “value” in the traditional sense. However, it is a uniform and its “value” must be set, either in the shader (via the layout binding syntax) or in OpenGL code via glUniform1i.
The “value” of a sampler is the texture image unit that it represents. So in the above case, you would use glUniform1i to set the sampler uniform’s value to 2 (or use layout(binding = 2) in the shader).
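Concretely (the names `program` and `diffuseTex` here are illustrative, and this assumes a compiled and linked program object), the two ways of telling a sampler which unit to read from:

```c
/* Option 1: set the sampler uniform's value from OpenGL code. */
GLint loc = glGetUniformLocation(program, "diffuseTex");
glUseProgram(program);
glUniform1i(loc, 2); /* diffuseTex now reads from texture image unit 2 */
```

```glsl
// Option 2: bake the unit into the shader itself.
// Requires GLSL 4.20 or ARB_shading_language_420pack.
layout(binding = 2) uniform sampler2D diffuseTex;
```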
I see it as a GLSL compiler issue, since the programmer does not care which texture image unit [choose your name of choice] is employed; they are identical. He/she merely cares that a reference made in OpenGL code can be identified in the GLSL code. A compiler’s job is to figure out things such as which registers to use, not to bother the programmer. GLSL is not assembly language.
It’s not that simple, because GLSL does not exist in a vacuum.
If GLSL automatically assigned texture image unit indices to samplers… how would the OpenGL code that issues the rendering command know which TIU had been assigned to a particular sampler?
Oh sure, you could query the value from GLSL. But that’s stupid, because it means that every time you change shaders, you must rebind all your textures. If you’re rendering with a shadow map, almost every object in the scene will use the same shadow map. So why not have the shader for every object in the scene get its shadow map from the same TIU index?
But if GLSL arbitrarily did TIU assignment, you wouldn’t be able to ensure that the shadow map was a particular index. The way it is now, you can ensure that a particular index is used. You can even establish TIU conventions: unit 0 is the diffuse texture, unit 1 is the normal map, unit 2 is whatever, unit 10 is the shadow map, and so forth.
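A convention like that might be set up with something like this (a sketch; the texture handles are hypothetical and a GL context is assumed):

```c
/* Bound once per frame, by convention. Every shader's samplers are set
   (via glUniform1i or layout(binding)) to match these slot numbers. */
glActiveTexture(GL_TEXTURE0 + 10);      /* unit 10: the shared shadow map */
glBindTexture(GL_TEXTURE_2D, shadowMapTex);

/* Per-object bindings only touch the material slots: */
glActiveTexture(GL_TEXTURE0 + 0);       /* unit 0: diffuse texture */
glBindTexture(GL_TEXTURE_2D, objDiffuseTex);
glActiveTexture(GL_TEXTURE0 + 1);       /* unit 1: normal map */
glBindTexture(GL_TEXTURE_2D, objNormalTex);

/* The shadow map binding on unit 10 never changes between objects,
   so switching shaders costs no texture rebinds for it. */
```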
With your way, you couldn’t do that, since every shader would have its own TIU assignments.
So no, the programmer very much does care which TIUs are assigned to which samplers.