Architecture choice for drawing text


I can think of three ways to draw text in OpenGL using TTF fonts (in my case I have a TTF decoder, so I have access to the raw glyph data):

  1. Have a single texture with all the font glyphs drawn, create quads per letter and draw the quad on the screen
  2. Have a single texture with all the font glyphs drawn, render each batch of text into a second texture, and draw a single quad on the screen per batch of text.
  3. Draw the vector graphics directly on the screen and have a shader fill in the letters

Which one is the most efficient way of doing things, in your experience?

Depends on what you want to do with it.
If the text is changing all the time, as in dynamically updated UIs, then your option 2) would probably not be the best idea and you should favor option 1). If the text stays mostly the same, then option 2) would make things pretty efficient. Your option 3) would be cheaper on memory but more expensive on draw calls; if you already have plenty of high-res textures and you are afraid that memory is an issue, then it might perhaps be a good idea. Overall I guess most people would use option 1). It is what libraries such as stb_truetype do.
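To make option 1) concrete, here is a minimal CPU-side sketch of the quad-per-character approach. The `Glyph` struct and `buildTextQuads` function are hypothetical names, and the metrics layout is an assumption about what a TTF decoder would hand you after rasterizing into the atlas; the resulting vertex array would then be uploaded to a VBO and drawn with the atlas texture bound.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical per-glyph metrics as a TTF decoder might provide after
// rasterizing into the atlas: UV rectangle plus placement/advance in pixels.
struct Glyph {
    float u0, v0, u1, v1;   // atlas texture coordinates
    float width, height;    // quad size in pixels
    float bearingX, bearingY;
    float advance;          // pen advance to the next glyph
};

struct Vertex { float x, y, u, v; };

// Option 1: emit one textured quad (two triangles, six vertices) per character.
std::vector<Vertex> buildTextQuads(const std::string& text, float penX, float penY,
                                   const Glyph (&glyphs)[128]) {
    std::vector<Vertex> out;
    for (unsigned char c : text) {
        const Glyph& g = glyphs[c];
        float x0 = penX + g.bearingX, y0 = penY - g.bearingY;
        float x1 = x0 + g.width,      y1 = y0 + g.height;
        out.insert(out.end(), {
            {x0, y0, g.u0, g.v0}, {x1, y0, g.u1, g.v0}, {x1, y1, g.u1, g.v1},
            {x0, y0, g.u0, g.v0}, {x1, y1, g.u1, g.v1}, {x0, y1, g.u0, g.v1},
        });
        penX += g.advance;  // move the pen for the next character
    }
    return out;
}
```

Because the geometry is rebuilt from the pen position every frame, text can change freely; only the atlas texture is static.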

I would suggest creating a layer of abstraction and having a simplified interface for drawing text which is independent of its implementation. You can then start with whichever implementation is easiest (probably your 1) or 2)) and change it later on if you determine that the performance is not good enough for your needs.
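A sketch of that abstraction layer, assuming a hypothetical `TextRenderer` interface: callers only ever see `drawText`, so the backing strategy (quad-per-glyph, cached string textures, or vector rendering) can be swapped without touching call sites.

```cpp
#include <string>

// Hypothetical interface: the rest of the program draws text through this,
// so the implementation strategy can change without touching callers.
class TextRenderer {
public:
    virtual ~TextRenderer() = default;
    virtual void drawText(const std::string& text, float x, float y) = 0;
};

// One possible backend: option 1), a quad per character from a glyph atlas.
class AtlasTextRenderer : public TextRenderer {
public:
    void drawText(const std::string& text, float x, float y) override {
        (void)text; (void)x; (void)y;  // real version would build quads and draw
        drawCalls++;                   // track issued draws for illustration
    }
    int drawCalls = 0;
};
```

Switching to a string-texture or vector backend later is then just another subclass.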

One quad per string will be quicker to draw than one quad per character, but will typically require more texture space. Note that there’s no need to have a texture containing all of the glyphs (for scripts with many glyphs, this may be impractical). For case 1, the texture only needs to contain the glyphs which are actually used. For case 2, you can render the strings directly (this may result in better quality than rendering individual glyphs then constructing the strings from the rendered glyphs, as the spacing isn’t forced to a multiple of the pixel size).
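The "only the glyphs actually used" idea can be sketched in a few lines: scan the strings the application will draw and collect the unique characters, so the atlas only needs slots for those rather than the whole font. `collectUsedGlyphs` is an illustrative name; a real version would work on decoded codepoints rather than raw `char`s.

```cpp
#include <set>
#include <string>
#include <vector>

// Collect the set of characters actually used by the given strings, so the
// glyph atlas can be built for just those instead of the entire font.
std::set<char> collectUsedGlyphs(const std::vector<std::string>& strings) {
    std::set<char> used;
    for (const std::string& s : strings)
        for (char c : s) used.insert(c);
    return used;
}
```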

Rendering the geometry directly is the least efficient method. Glyph outlines aren’t generally convex, so you either need to tessellate them or use stencilling.
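The reason stencilling is needed is that a non-convex outline covers each pixel an odd number of times when inside and an even number when outside; the stencil buffer counts those coverages per pixel. The same parity test on the CPU is the classic even-odd crossing rule, sketched here (illustrative code, not anyone's actual renderer):

```cpp
#include <utility>
#include <vector>

using Point = std::pair<float, float>;

// Even-odd rule: a point is inside a (possibly non-convex) closed outline if
// a horizontal ray from it crosses the outline's edges an odd number of
// times. The GL stencil trick computes the same parity per pixel.
bool insideEvenOdd(const std::vector<Point>& poly, float px, float py) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        auto [xi, yi] = poly[i];
        auto [xj, yj] = poly[j];
        // Does the edge (j -> i) cross the horizontal line at py, to the
        // right of px? Each crossing toggles the parity bit.
        bool crosses = (yi > py) != (yj > py) &&
                       px < (xj - xi) * (py - yi) / (yj - yi) + xi;
        if (crosses) inside = !inside;
    }
    return inside;
}
```

Tessellating the outline into triangles up front avoids the per-pixel counting but costs CPU time whenever the outline changes.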

Rendering the geometry directly is the least efficient method.

And the least aesthetically pleasing method. Glyph rasterizers typically use anti-aliasing of a form that rendered geometry under OpenGL can’t do. The same goes for hinting, which geometry-based rendering can’t handle.

Thanks everyone for your comments, much appreciated. I currently render only the ASCII characters to the texture to keep it small but useful (although this is fully configurable). At the moment I use method 2); I think I will add the ability to go with method 1) as well, so I can choose whichever suits the situation.