Hi Everyone,
I’m having a hard time figuring out the best way to render textures containing text images.
I’m talking about textured quads or textured cubes with a different text-image mapped on each face.
In particular, I don’t know how to handle heavy minification: I’ve tried trilinear filtering, anisotropic filtering, and various LOD bias and [min level, max level] configurations, with little success.
With point sampling the texture is sharper and somewhat more readable than with the other methods, but there’s a lot of flickering whenever the textured objects move.
With bilinear/trilinear or anisotropic sampling the result is more pleasing to the eye and the flickering is reduced, but the text becomes too blurry to read.
This is an issue when the on-screen height of the text is roughly 5 to 15 px.
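For reference, this is roughly the kind of sampler setup I’ve been experimenting with (shown with OpenGL sampler objects only because I had to pick an API for the example; the exact bias and LOD clamp values are just samples of the ranges I tried):

```
// Roughly the sampler configurations I've been testing.
// (OpenGL sampler objects used here for illustration; exact values varied.)
GLuint sampler = 0;
glGenSamplers(1, &sampler);

// Trilinear: stable under motion, but the text gets too blurry to read.
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Anisotropic on top of trilinear (EXT_texture_filter_anisotropic).
glSamplerParameterf(sampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, 16.0f);

// Knobs I've tweaked without much luck:
glSamplerParameterf(sampler, GL_TEXTURE_LOD_BIAS, -0.5f); // sharper, but the flicker comes back
glSamplerParameterf(sampler, GL_TEXTURE_MIN_LOD, 0.0f);
glSamplerParameterf(sampler, GL_TEXTURE_MAX_LOD, 4.0f);   // clamp away the smallest mips

glBindSampler(0, sampler);
```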
This is an example texture and its auto-generated mipmaps.
In my application I’m using the alpha channel (shown here as transparency) to interpolate between a background color and a text color.
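That interpolation is essentially just a single mix driven by the alpha channel; a minimal sketch of the fragment shader (GLSL kept as a string on the C++ side, with placeholder uniform names) looks like this:

```
// Fragment shader (GLSL) stored as a string in the C++ code.
// Uniform and variable names are just placeholders for illustration.
const char* textFragmentShader = R"(
    #version 330 core
    uniform sampler2D uTextTexture;   // the text texture shown above
    uniform vec4 uBackgroundColor;
    uniform vec4 uTextColor;
    in vec2 vUV;
    out vec4 fragColor;
    void main()
    {
        // The alpha channel drives the blend between background and text color.
        float a = texture(uTextTexture, vUV).a;
        fragColor = mix(uBackgroundColor, uTextColor, a);
    }
)";
```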
What options do I have to make the text more readable while keeping this same approach (textured quads)?