I was wondering whether the textureLod() function in GLSL is supposed to perform linear filtering? I noticed that on AMD hardware it does not appear to, but I did not see this behavior on Intel or Nvidia hardware. The spec is not clear on the matter:
Section 8.9 of the GLSL spec (my emphasis):
> Texture properties such as size, pixel format, number of dimensions, **filtering method**, number of mipmap levels, depth comparison, and so on are also defined by OpenGL API calls. Such properties are taken into account as the texture is accessed via the built-in functions defined below.
(Obvious exceptions being fetch and gather, of course).
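For context, a minimal sketch of the kind of shader I am testing with (the sampler name, coordinates, and LOD value are placeholders, and filtering is configured on the API side):

```glsl
#version 330 core

uniform sampler2D tex; // GL_TEXTURE_MIN_FILTER / GL_TEXTURE_MAG_FILTER set via glTexParameteri

in vec2 uv;
out vec4 color;

void main() {
    // Explicit LOD of 2.0 — the question is whether this sample
    // is still filtered according to the sampler's filter state.
    color = textureLod(tex, uv, 2.0);
}
```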
It’s much more likely that there is a bug in your code which different vendors happen to handle differently.
It is (assuming that linear filtering is enabled for the texture). The only difference between textureLod() and texture() or textureGrad() should be that textureLod() takes the level-of-detail as an explicit parameter, while texture() computes it from the implicit screen-space derivatives and textureGrad() computes it from explicitly supplied gradients.
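To illustrate, a sketch of the three variants side by side (tex, uv, and lod are placeholder names); given the same effective LOD, all three should go through the same filtering path:

```glsl
// All three sample with the sampler's configured filtering.
// They differ only in where the level-of-detail comes from:
vec4 a = texture(tex, uv);                          // LOD from implicit derivatives
vec4 b = textureLod(tex, uv, lod);                  // LOD passed explicitly
vec4 c = textureGrad(tex, uv, dFdx(uv), dFdy(uv));  // LOD from explicit gradients
```

Note that for textureLod() to blend between mipmap levels, the texture's minification filter must be one of the *_MIPMAP_LINEAR modes (e.g. GL_LINEAR_MIPMAP_LINEAR); with GL_LINEAR_MIPMAP_NEAREST or a non-mipmap filter, only a single level is sampled.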