seem to lead to the conclusion that I can use texture*lod in a fragment shader. Although, Tom Nuydens is using nVidia hardware (with a less strict glsl compiler) and I am running on ATI; maybe I should stick closer to the specs and "switch on" the extension as Mr Bill advised? I didn't really understand how to do it though, so if anyone could clarify this for me, please do.
Thanks in advance for any hint,
Wizzo, are you able to use the LOD bias? It's available in the fragment shader as the optional third texture parameter. If you're on ATI hardware, chances are you'll want to stick to the specification, as ATI does, so I'd follow Mr Bill's advice there.
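A minimal sketch of the bias approach described above, assuming a sampler uniform named colorMap and a varying texCoord (both names are illustrative):

```glsl
uniform sampler2D colorMap;
varying vec2 texCoord;

void main()
{
    // The optional third argument is a LOD bias, added to the
    // hardware-computed level of detail before the texture fetch.
    // Note the float literal: a strict (ATI) compiler will reject
    // an integer literal here.
    gl_FragColor = texture2D(colorMap, texCoord, 2.0);
}
```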
If you're developing on ATI hardware, I don't believe texture*lod is implemented in their compiler on the fragment side either (correct me if I'm wrong), so you'll need an NV card to do this. Furthermore, this extension doesn't seem to be clearly defined, so I'm not sure what you'd use for extension_name in your #extension directive.
Concerning the bias parameter, the spec says:
For a fragment shader, if bias is present, it is added to the calculated level of
detail prior to performing the texture access operation.
The problem is that I don't know how to modify the LOD at will using the bias parameter.
If I write
texture2D(colorMap, texCoord, 2.0);
then, assuming that lambda is the LOD determined by the fixed pipeline, I'll get the image determined by lambda + 2, is that correct?
About the extensions, I didn't find anything in the extension registry, so I don't know what to do with them.
I think I'll wait for the extension to come out, because the LOD I want to apply has nothing to do with the camera-surface distance, so I don't really know how to bias it.
(Maybe compute my own LOD and pass a bias of theLodIwant - Mylod, hoping I computed the same LOD as the fixed pipeline :rolleyes: )
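That workaround could be sketched like this, assuming a texSize uniform holding the texture dimensions (an illustrative name); the derivative-based formula below approximates the fixed-function lambda but isn't guaranteed to match it exactly on all hardware:

```glsl
uniform sampler2D colorMap;
uniform vec2 texSize;      // texture dimensions, e.g. vec2(256.0, 256.0)
varying vec2 texCoord;

void main()
{
    // Approximate the fixed-function LOD (lambda) from the
    // screen-space derivatives of the texel coordinates.
    vec2 dx = dFdx(texCoord * texSize);
    vec2 dy = dFdy(texCoord * texSize);
    float myLod = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));

    float theLodIWant = 3.0;   // illustrative target level

    // Bias by (theLodIWant - myLod) so that lambda + bias lands on
    // the desired level, assuming myLod matches the pipeline's lambda.
    gl_FragColor = texture2D(colorMap, texCoord, theLodIWant - myLod);
}
```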
Thanks for your answers, kingjosh.