Rendering Text on OpenGL Canvas

Hello everyone,

I am currently looking into rendering some text on my OpenGL canvas. For reference, my canvas has the ability to zoom (through glScale) and pan.

Skimming through the material a little bit, I think that I found a decent tutorial that will get me off the ground: In-Practice/Text-Rendering

That said, I am more than open to any recommendations on tutorials that offer guidance on text rendering.

Ultimately, I would like to render the text at a fixed size/font that is completely independent of the scale. For example, when I have a rectangle drawn using glVertex2d (and GL_POINTS), the rectangle keeps the same size no matter how much I zoom in or out.

In a program that I like to use a lot, they are drawing text on a canvas (as a side note, this program is not using OpenGL; it uses DirectX). Again, the text stays the same size and font no matter how much I zoom in/out in the program.

For those with some experience in rendering text: is something like this possible? Is there a function (or set of functions) in OpenGL that can render text at a fixed size independent of the zoom level? If so, could you point me to some resources that I can start looking at to achieve this?

Any help is appreciated, and please let me know if I can clarify anything for you. Thank you.

OpenGL doesn't know anything about text. If you want to draw text, the most common way is to render a textured quad with the texture containing the text (or sometimes individual letters, in which case you render a row of quads). There are many libraries that can create these textures on the fly from .ttf files. A well-known one is FreeType.

Which polygons get scaled and which do not is up to you. You can scale your projection matrix for a zoom effect, draw all the polygons that are supposed to be zoomed, and then reset your projection matrix to render your text without the zoom.

Rendering text with OpenGL boils down to using some other library to render either characters or larger units of text into a bitmap, uploading the bitmap to a texture, then rendering primitives using that texture.

For rendering glyphs (characters), use FreeType. For constructing larger units of text from individual glyphs, you can either do it yourself (for scripts with relatively simple layout rules, e.g. Latin or Cyrillic using only pre-composed characters), or use Pango (which can handle more complex scripts, composing characters, etc).

The tutorial that I linked does use FreeType. I am not trying to do anything fancy with text. In the program that I am writing, I have a block, and the user can set the block's name to whatever they like. Everything is in English, so I am going to be rendering the characters you see on a standard keyboard. The color of the text will be plain black. I really don't need anything fancy to go along with this, but I also don't want to use something that needs a lot of resources to render each character.


So I am rendering strings on the OpenGL canvas. Do you recommend that I use FreeType or Pango? I am looking for a relatively simple solution to do what I described above.

If I understand your explanation, I first scale the projection matrix, draw all of the geometry that I want scaled, reset the projection matrix, and then draw the text. (I sometimes repeat things to make sure that I understand what someone is telling me.) Is this correct? For my zooming code, I have been working with the projection matrix.

what do you mean by “a lot of resources”?
the way shown in as well as shown here uses only 1 texture — the resolution is up to you (the higher it is, the better the quality for scaled characters) — and 1 program object to shade the text onto the screen

the 2nd tutorial link has a tool linked which can generate such “bitmap textures”; it also gives you a text file in which you can read each character's width / height values so that you can correctly get the part of the image covering a given character, for example:

char mychar = 'X';
int row    = (int)mychar / 16;   /* which row of the atlas */
int column = (int)mychar % 16;   /* which column of the atlas */
/* there are 16 x 16 = 256 chars total */

now you know the correct part of the texture for “mychar” — the texture coordinate offset
then you have to get the width / height of “mychar” into your shader somehow; 1 way is to put all these values (read from the tool's output text file) into a buffer and bind it to a uniform block for your shader

you don't need “view” or “projection” matrices for the 2D text; just give the program a “vec2 offset” for where you want the text to appear

If you’re limited to English, then you can just use FreeType to render either individual glyphs or strings. A higher-level library such as Pango becomes necessary for languages which can’t easily be written using the “typewriter algorithm” (print character, move right (or left) by width of character, print next character, and so on).

If you’re going to have a relatively-small, fixed set of strings, you may as well store complete strings in the texture, so that you only need to display one quad (two triangles) per string.

If you have a large number of strings, or strings which change frequently, it’s better to render the individual glyphs and have one quad per character.

FreeType will render a glyph at a given resolution into a bitmap. This can be either one bit per pixel, or more (usually 8 bits) if you want anti-aliasing. In legacy OpenGL, you’d typically use a GL_ALPHA or GL_LUMINANCE texture format so that you can change the colour dynamically.

There are better approaches for rendering text (e.g. distance maps allow high levels of magnification without visible pixellation or blurring), but they’re more complex.

Hello everyone,

A small update: I am continuing to learn how to render characters with the freetype2 library. So far, I have been able to load all of the glyphs into a data structure where I can easily access them.

Looking at some tutorials around the web, it appears that everyone is using a shader to assist in rendering the text. I am not doing anything fancy with the text. All I need is black text on a white background. That is all. A lot of the tutorials appear to be creating a rectangle, rendering the bitmap within the rectangle, and coloring the inside of the text with the color of their choosing.

Now, I have been noticing that people are using a shader data structure. Is a shader necessary if I just want to render the text as monochromatic (which would be black)? If so, is there a standard shader that is used all the time to render text, and could someone post a link to the code so that I can start using it? Or would it be better for me to use freetype-gl? I would like to keep the code to OpenGL 2.0/2.1 to maintain compatibility with many different devices.

As a side note, has anyone come across the OGLFT library for rendering text? It also uses freetype2, and it does not look like it uses a shader. Again, is it possible to render the freetype2 glyphs without having to resort to shaders?

can't be — everything that “draws” something on screen uses shaders, at least in “modern” OpenGL (3.3 and above, i think). the “shader” (actually it's the “program” which has shaders attached to it) draws everything: it fetches vertices from buffer objects, replaces pixels in the framebuffer, etc

that's the easy way to do it: you use a texture containing the font and take it as a kind of “mask” to determine which pixels of a rectangle to draw
others additionally use another text file (e.g. a csv file) that describes how big each “font cell” in that font texture is

not really, but it's highly recommended; otherwise you'd have to use the “fixed function” pipeline to draw things (“outdated” OpenGL)

you only have to create 1 “shader” (more precisely, a program with shaders attached to it)
in a nutshell:
– a “shader” is a part of a program; it needs to be fed with source code and then compiled
– there are 6 different shader types; 2 of them are all you need for simple drawing (vertex / fragment)
– once compiled, both shaders must be attached to a “program”, which then has to be linked
– when that's done, you can use the program to draw things (like text, star wars ships, cars, whatever you want)

here is my example; it's the same as in “”, i just “wrapped” the details away

libs needed: GLEW, SOIL, glm

Hello john_connor,

From your examples, it looks like that for me to render text with freetype2, I need to specify a shader. From the many examples on the internet that render black text, I can easily get the shader source for the text; I can get it from almost any example that does text rendering.

For now, I am basing my code off of the tutorial found here:

However, I am getting tripped up with the data structure of Shader. The tutorial did not specify where the author got this data structure.

When I ask if there is a common shader, I am not referring to the source of the vertex/fragment shader but rather to the data structure that loads all of that information. For example, in the tutorial above, the author uses a data structure called Shader. Is this data structure super common, so that I can just pull it from some project, or is it something that I should develop on my own? If so, is it a combination of using glShaderSource, glCompileShader, and glGetShader?

[QUOTE]Now, I have been noticing that people are using a shader data structure. Is a shader necessary if I just want to render the text as monochromatic (which would be black)?[/QUOTE]
Not unless you’re using OpenGL 3+ core profile (where shaders are necessary for all rendering).

If you’re using the fixed-function pipeline, you can create a texture using either GL_LUMINANCE, GL_INTENSITY or GL_ALPHA format. Texture environment and blending modes can be used to control the final colours.

[QUOTE=GClements;1286900]Not unless you’re using OpenGL 3+ core profile (where shaders are necessary for all rendering).

If you’re using the fixed-function pipeline, you can create a texture using either GL_LUMINANCE, GL_INTENSITY or GL_ALPHA format. Texture environment and blending modes can be used to control the final colours.[/QUOTE]

So if I stick to OpenGL 2.0/2.1, then I can get away with creating the texture with GL_LUMINANCE, GL_INTENSITY, or GL_ALPHA. I would assume that I just set all of the values to 0 for black text (which is what I want). Is there some sort of code example online that demonstrates this process?

Just as an update, I was able to get the OGLFT library working, and I am currently using it to draw text on the screen. Still, it would be helpful if the question in my previous post were answered, as I might be revisiting this one day soon.