Hi everyone, I’m developing an application that renders an image of the route a car has passed, by processing images captured by a camera at the front of the car.
In GLSL, the “texture” function is used to sample the camera images. In the rendered route image, straight lines do stay straight, but objects become blurrier and blurrier. I think this is because GL_LINEAR is used by default, while GL_NEAREST cannot keep straight lines straight. I found that the “textureFetch” function does not produce a blurred image, but it cannot keep straight lines straight either.
Do you have any clue how to keep straight lines straight when using the “textureFetch” function? Thanks in advance!
The function name should be “texelFetch”; I misspelled it.
I’m not sure what you mean about straight lines. What kind of geometry are you using these textures on? Could you post an image?
The differences between the texture functions are approximately:
- texture() automatically calculates the derivatives used for texture sampling, based on the texture coordinates you give it. That’s where linear comes into play; it will perform linear filtering because of the sampler state. Those derivatives aren’t exactly from the camera, but rather describe how each fragment’s values vary relative to its neighbors. It’s somewhat complicated…
- textureGrad() is effectively the same as texture(), and will perform interpolation if you have that enabled, but you have to provide it the derivatives manually instead of it calculating them for you. This is mostly to get around non-uniform flow, which breaks the derivative calculations inherent in texture(). I suggest looking at dFdx and dFdy: https://registry.khronos.org/OpenGL-Refpages/gl4/html/dFdx.xhtml
- I’m going to ignore textureGather() because it’s very specific and not relevant here, I don’t think.
- texelFetch() simply looks up a single pixel/texel of the texture attached to the sampler. It does no interpolation, filtering, or anything else. It’s more like reading the data in a texture as if it were a 2D (or other-dimensional) array. Additionally, the coordinates are integers, not floating point, so if you are switching between texture() and texelFetch(), you will have to adjust for that difference. Be aware it’s not necessarily as straightforward as multiplying by the size, at least not in all cases.
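To make the coordinate difference concrete, here’s a minimal fragment-shader sketch; `uCamera` and `vUV` are placeholder names I’ve made up, not anything from your code:

```glsl
#version 330 core
uniform sampler2D uCamera;   // hypothetical sampler for the camera image
in vec2 vUV;                 // normalized [0,1] texture coordinates
out vec4 fragColor;

void main() {
    // texture(): normalized coordinates, filtered per the sampler state
    vec4 filtered = texture(uCamera, vUV);

    // texelFetch(): integer texel coordinates, no filtering at all.
    // Multiplying by the size and truncating is the simple conversion,
    // but note the half-texel offset: texel centers sit at (i + 0.5) / size,
    // which is one place the "multiply by the size" shortcut can bite you.
    ivec2 texel = ivec2(vUV * vec2(textureSize(uCamera, 0)));
    vec4 raw = texelFetch(uCamera, texel, 0);

    fragColor = raw; // or `filtered`, depending on which look you want
}
```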
I think you potentially have several issues. Are you resizing the image when you render it? If you’re trying to get 1:1 rendering, like an overlay/picture-in-picture, then texelFetch() is definitely the better approach, but both should work if you have the coordinates right. If you’re trying to scale the image, texture() is the preferable way, because texelFetch() will result in a very chunky look.
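For the 1:1 overlay case specifically, a common pattern (again just a sketch, with an assumed sampler name) is to index the texture directly from gl_FragCoord, so no normalized coordinates are involved at all:

```glsl
#version 330 core
uniform sampler2D uCamera; // hypothetical sampler name
out vec4 fragColor;

void main() {
    // gl_FragCoord.xy lands at pixel centers (x.5, y.5), so truncating
    // to ivec2 gives the matching texel when the texture is the same
    // size as the viewport.
    fragColor = texelFetch(uCamera, ivec2(gl_FragCoord.xy), 0);
}
```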
Note that derivatives are only used for mipmap level selection. If the texture lacks mipmaps or the minification filter doesn’t use them, then derivatives aren’t relevant and ideally wouldn’t be calculated.
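Right, and if you want linear filtering without any mipmap involvement, textureLod() with an explicit LOD of 0 sidesteps the derivative machinery entirely (sampler and coordinate names assumed here):

```glsl
// Samples only the base level; derivatives are never needed,
// so this also works safely under non-uniform control flow.
vec4 c = textureLod(uCamera, vUV, 0.0);
```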
Yes, that’s a good point. Based on the description of the texture coming from hardware, presumably in real time, there’s a good chance it doesn’t have mipmaps. That could degrade the quality depending on how it is being used.