how to display chinese in OpenGL

Does anyone know how to do it?

Originally posted by pango:
Does anyone know how to do it?

I would think that texture fonts will do the trick.

We just had a discussion on Arabic the other day. Search and ye shall find…

And Chinese is much easier than Arabic, since it’s left-to-right and the character->glyph mapping is context-free.

The thread on Arabic is here. http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/009609.html

With English, one of the usual ways to render text is to put all the glyphs (characters) into a single texture and then draw a quad for each glyph with texture coords corresponding to the glyph's position on the texture. This works fine since the alphabet plus punctuation isn't that many characters. It won't work for Chinese because there are too many different glyphs.
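For illustration, here is a minimal sketch of that atlas-plus-quads idea for ASCII, assuming the texture holds a 16x16 grid of equally sized glyph cells in character-code order, uploaded top row first (both assumptions), and using old immediate mode for brevity:

```cpp
#include <GL/gl.h>
#include <string>

// Draw a string using one quad per character, with texcoords picked from a
// 16x16 grid of glyph cells inside a single font texture.
void drawAsciiString(GLuint fontTex, float x, float y, float glyphSize,
                     const std::string& text)
{
    const float cell = 1.0f / 16.0f;              // one glyph cell in texture space
    glBindTexture(GL_TEXTURE_2D, fontTex);
    glBegin(GL_QUADS);
    for (std::string::size_type i = 0; i < text.size(); ++i) {
        unsigned char c = text[i];
        float s = (c % 16) * cell;                // column of this glyph in the grid
        float t = (c / 16) * cell;                // row of this glyph in the grid
        glTexCoord2f(s,        t + cell); glVertex2f(x,             y);
        glTexCoord2f(s + cell, t + cell); glVertex2f(x + glyphSize, y);
        glTexCoord2f(s + cell, t);        glVertex2f(x + glyphSize, y + glyphSize);
        glTexCoord2f(s,        t);        glVertex2f(x,             y + glyphSize);
        x += glyphSize;                           // fixed advance: no kerning here
    }
    glEnd();
}
```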

For Chinese you'll need to write some sort of caching system, either creating glyphs on demand or complete phrases on demand, and then reusing the results. On Windows you can use GDI to render glyphs or phrases to a bitmap and then upload that bitmap into an OpenGL texture.

Other operating systems will likely have similar functionality. If not, you can use FreeType ( http://www.freetype.org/ ) to rasterize TrueType fonts.
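If you go the FreeType route, rasterising a single character is roughly this (the font file path and pixel size below are just placeholders):

```cpp
#include <ft2build.h>
#include FT_FREETYPE_H
#include <cstdio>

int main()
{
    FT_Library library;
    FT_Face face;

    if (FT_Init_FreeType(&library) != 0) return 1;
    // Any TrueType font with CJK coverage will do; the path is a placeholder.
    if (FT_New_Face(library, "simsun.ttf", 0, &face) != 0) return 1;
    FT_Set_Pixel_Sizes(face, 0, 24);                      // 24-pixel glyphs

    // Load and render U+4E2D; the result is an 8-bit coverage bitmap.
    if (FT_Load_Char(face, 0x4E2D, FT_LOAD_RENDER) != 0) return 1;
    FT_Bitmap* bmp = &face->glyph->bitmap;
    std::printf("glyph is %dx%d pixels, pitch %d\n",
                (int)bmp->width, (int)bmp->rows, (int)bmp->pitch);

    // bmp->buffer can now be uploaded into an OpenGL texture, e.g. with
    // glTexSubImage2D as GL_ALPHA or GL_LUMINANCE_ALPHA data.

    FT_Done_Face(face);
    FT_Done_FreeType(library);
    return 0;
}
```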

I prefer to cache entire phrases myself. When working with English it allowed me to have proper kerning and ligatures, and it was simple to integrate into my UI library by having every Label widget keep track of its own texture, without the need to hash for glyphs.

If you use 16x16 pixel characters (is that enough resolution for the complexity of the characters?), they should fit in a 2048x1024 texture…

“font textures” are seldom the best way to display text.

My standard answer:

Create a DIBSection with its own DC.
Select a font into this DIBSection and draw just like on the screen.
GdiFlush()
glTexSubImage2D() this data into a texture
Draw quad the size of the texture, with the texture pixels on it.
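In rough Win32 terms, those steps might look like the sketch below; the font name, sizes and pixel format are assumptions, and error handling is left out:

```cpp
#include <windows.h>
#include <GL/gl.h>

// Render a string with GDI into a DIBSection and upload the pixels into an
// OpenGL texture. Returns the new texture name.
GLuint makeTextTexture(const wchar_t* text, int width, int height)
{
    // 32-bit top-down DIB so the rows match what glTexImage2D expects.
    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;             // negative height = top-down rows
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* pixels = 0;
    HDC dc = CreateCompatibleDC(NULL);
    HBITMAP dib = CreateDIBSection(dc, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);
    SelectObject(dc, dib);

    // Pick any font containing the glyphs you need (a CJK face here).
    HFONT font = CreateFontW(24, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             DEFAULT_CHARSET, OUT_DEFAULT_PRECIS,
                             CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY,
                             DEFAULT_PITCH, L"SimSun");
    SelectObject(dc, font);
    SetBkColor(dc, RGB(0, 0, 0));
    SetTextColor(dc, RGB(255, 255, 255));
    TextOutW(dc, 0, 0, text, lstrlenW(text));
    GdiFlush();                                   // make sure GDI has finished writing

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_BGRA_EXT, GL_UNSIGNED_BYTE, pixels);  // DIB rows are BGRA

    DeleteObject(font);
    DeleteObject(dib);
    DeleteDC(dc);
    return tex;
}
```

For later updates you'd keep the DIB and texture around and just draw again, GdiFlush(), and glTexSubImage2D() the changed region, as in the steps above.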

On Linux, use FreeType to draw text offscreen instead of a DC/DIBSection.

On MacOS, use an offscreen GWorld (I suppose?)

Try this…
http://homepages.paradise.net.nz/~henryj/code/index.html#FTGL

It uses FreeType and handles Chinese fine, including caching etc., resulting in high-quality glyphs for minimal resource use.

It doesn't handle complex scripts, e.g. Arabic.

Originally posted by Zeno:
If you use 16x16 pixel characters (is that enough resolution for the complexity of the characters?), they should fit in a 2048x1024 texture…

What I have found to be more efficient is to use a list of 256x256 textures and subload the glyphs which are needed by the app, rather than trying to load all the glyphs from a character table into one texture. You potentially end up using more textures, but this is a small price to pay.

This technique cuts down on wasted texture usage, as you only use what you need, and cuts down on load time, both of which would otherwise be prohibitive. I have only tested Japanese fonts, but in this case there were tens of thousands of glyphs in a single font file; I presume Chinese fonts aren't too different in the number of glyphs required.
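A sketch of that kind of on-demand glyph cache, with 32x32 cells inside 256x256 alpha textures (the cell size and the stub rasteriser are assumptions; in practice the rasteriser would be FreeType, GDI, or similar):

```cpp
#include <GL/gl.h>
#include <cstring>
#include <map>
#include <vector>

struct GlyphSlot { GLuint texture; float s, t; };   // where a glyph lives in the cache

class GlyphCache {
public:
    GlyphCache() : nextCell_(kCellsPerTexture) {}

    // Return the cached slot for a character code, rasterising and subloading
    // it into the current 256x256 page the first time it is requested.
    const GlyphSlot& get(unsigned int charCode)
    {
        std::map<unsigned int, GlyphSlot>::iterator it = cache_.find(charCode);
        if (it != cache_.end()) return it->second;

        if (nextCell_ == kCellsPerTexture) addPage();   // current page is full

        int cell = nextCell_++;
        int cx = (cell % 8) * 32;                       // 8x8 grid of 32x32 cells
        int cy = (cell / 8) * 32;

        unsigned char pixels[32 * 32];                  // 8-bit coverage
        rasteriseGlyph(charCode, pixels);

        glBindTexture(GL_TEXTURE_2D, pages_.back());
        glTexSubImage2D(GL_TEXTURE_2D, 0, cx, cy, 32, 32,
                        GL_ALPHA, GL_UNSIGNED_BYTE, pixels);

        GlyphSlot slot;
        slot.texture = pages_.back();
        slot.s = cx / 256.0f;
        slot.t = cy / 256.0f;
        return cache_[charCode] = slot;
    }

private:
    void addPage()
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 256, 256, 0,
                     GL_ALPHA, GL_UNSIGNED_BYTE, 0);    // empty 256x256 page
        pages_.push_back(tex);
        nextCell_ = 0;
    }

    // Stub: a real implementation would render the glyph with FreeType or GDI.
    void rasteriseGlyph(unsigned int /*charCode*/, unsigned char* out)
    {
        std::memset(out, 255, 32 * 32);
    }

    static const int kCellsPerTexture = 8 * 8;
    std::map<unsigned int, GlyphSlot> cache_;
    std::vector<GLuint> pages_;
    int nextCell_;
};
```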

What’s this obsession with loading glyphs and rendering text with lots of little quads?

It’s almost never actually better than drawing text directly to a texture. Certainly, any kind of kerning or ligatures that your font renderer would do for you, you’ll have a hard time emulating with a glyph-based approach.

Typography is important, and hard. Let the libraries that do typography, do it, and you just need to display the results.

Originally posted by jwatte:
What’s this obsession with loading glyphs and rendering text with lots of little quads?

I don't think it's an obsession, I think it's just easier. I, for one, didn't know that you could do what you suggested in the other thread in a straightforward way… I'm not terribly familiar with Windows programming. Have you ever thought about making a tutorial on it?

jwatte's suggestion is by far the most efficient depending on the update rate of your text, which in most cases isn't very high. You're not trying to write a word processor in OpenGL.

I have plans to add this ‘feature’ to FTGL, no idea when though.

it may not look as good but it's miles faster when you're not relying on win32 to render the font (dunno about other OSes)… the win32 font rasteriser is just incredibly slow, especially with antialiasing enabled.

using a texture (or a set of textures if you can't fit all glyphs in one) allows you to rasterise a font once and simply plop quads around afterwards… considering that the win32 font rasteriser scales very badly with increasing font sizes, it may simply be impractical to use the underlying font services in real time. my 2p

Originally posted by jwatte:
What’s this obsession with loading glyphs and rendering text with lots of little quads?

It sounded to me like that is what you were doing.

We solved this problem as mentioned above by implementing a texture cache - rendering required strings into textures using GDI (DIBs). I think it's the best solution, although depending on how many strings you require and your cache size, it can be a potential cause of lag.

Originally posted by jwatte:
What’s this obsession with loading glyphs and rendering text with lots of little quads?

It's more an obsession with efficiency.

If you have a lot of text to render you will almost certainly be repeating glyphs (just count how many different letters I use in this post - it's going to be around 26, no?). Also, if your whole screen is filled with text, are you going to create a 1024x1024 or bigger texture?

And what happens when your text changes, or when you have multiple pages of text? Do you update the whole image?

osgText only uses the texture resources it needs, by subloading into texture tiles the glyphs which are needed. The typography is important of course, so the quads are positioned according to the metrics FreeType provides (a sketch of which follows below). This really isn't a great overhead.

Another advantage of rendering small quads is the reduction in fill rate requirements, important since our graphics cards are invariably limited by fill rather than T&L. They are also often limited by bandwidth, so you want to keep those textures as small as possible and update them in small pieces, rather than updating big textures all the time.
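For what it's worth, "positioned according to FreeType" boils down to using each glyph's bearing and advance when laying out the quads. A rough sketch, assuming the glyphs already live in a texture atlas; lookupSlot() is only a stand-in for whatever cache is in use, and immediate mode is used for brevity:

```cpp
#include <ft2build.h>
#include FT_FREETYPE_H
#include <GL/gl.h>

struct AtlasSlot { GLuint texture; float s0, t0, s1, t1; };

// Stub standing in for the real glyph-cache lookup (not shown here).
AtlasSlot lookupSlot(FT_UInt) { AtlasSlot s = { 0, 0.0f, 0.0f, 1.0f, 1.0f }; return s; }

// Lay out one quad per glyph along a baseline using FreeType's metrics, which
// are expressed in 26.6 fixed point (1/64th of a pixel).
void layoutString(FT_Face face, const wchar_t* text, float penX, float penY)
{
    for (const wchar_t* p = text; *p; ++p) {
        FT_UInt glyphIndex = FT_Get_Char_Index(face, *p);
        if (FT_Load_Glyph(face, glyphIndex, FT_LOAD_DEFAULT) != 0) continue;
        FT_GlyphSlot g = face->glyph;

        // Quad corners from the glyph's bearing and size.
        float x0 = penX + g->metrics.horiBearingX / 64.0f;
        float y1 = penY + g->metrics.horiBearingY / 64.0f;
        float x1 = x0 + g->metrics.width / 64.0f;
        float y0 = y1 - g->metrics.height / 64.0f;

        AtlasSlot slot = lookupSlot(glyphIndex);
        glBindTexture(GL_TEXTURE_2D, slot.texture);
        glBegin(GL_QUADS);
        glTexCoord2f(slot.s0, slot.t1); glVertex2f(x0, y0);
        glTexCoord2f(slot.s1, slot.t1); glVertex2f(x1, y0);
        glTexCoord2f(slot.s1, slot.t0); glVertex2f(x1, y1);
        glTexCoord2f(slot.s0, slot.t0); glVertex2f(x0, y1);
        glEnd();

        penX += g->advance.x / 64.0f;           // move the pen along the baseline
    }
}
```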

So I'm more than happy to be obsessed about drawing lots of little quads :)

Robert.

I suggest you actually go benchmark the concepts of:

One texture per “text item”

vs

One font texture, and one quad per character

In our case, our application is very text intensive (chat bubbles, communicator windows, etc) and we found draw-to-texture to be faster. Unless you update your text items very often, GDI-to-texture is likely to come out ahead, AND it looks better.

The "cached strings" approach works well, too; "strings" treated loosely. As long as you make sure your cache is big enough that you don't need to re-generate every frame :)

One 1024x1024 texture is likely to fit all text strings you’ll need to display on the screen at a time; you can TexSubImage updated strings into this texture when strings change.
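One way to picture that string cache (the names, the fixed strip height and the stub renderer here are all made up; renderStringToPixels() would be backed by GDI or FreeType in practice, and a real cache would also evict stale entries). The texture passed in is assumed to be an already-created 1024x1024 RGBA texture:

```cpp
#include <GL/gl.h>
#include <map>
#include <string>
#include <vector>

struct CachedString { int x, y, w, h; };          // rectangle inside the cache texture

class StringCache {
public:
    StringCache(GLuint texture) : texture_(texture), nextY_(0) {}

    // Get (or build) the cache entry for a string.
    const CachedString& get(const std::wstring& text)
    {
        std::map<std::wstring, CachedString>::iterator it = cache_.find(text);
        if (it != cache_.end()) return it->second;

        CachedString entry;
        entry.x = 0;
        entry.y = nextY_;
        entry.w = 1024;
        entry.h = 32;                             // one 32px-high strip per string
        nextY_ += entry.h;                        // (no eviction in this sketch)

        std::vector<unsigned char> pixels(entry.w * entry.h * 4);
        renderStringToPixels(text, entry.w, entry.h, &pixels[0]);

        glBindTexture(GL_TEXTURE_2D, texture_);
        glTexSubImage2D(GL_TEXTURE_2D, 0, entry.x, entry.y, entry.w, entry.h,
                        GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);

        return cache_[text] = entry;
    }

private:
    // Stub: a real version would draw with GDI into a DIBSection, or with
    // FreeType directly into this buffer.
    void renderStringToPixels(const std::wstring&, int w, int h, unsigned char* out)
    {
        for (int i = 0; i < w * h * 4; ++i) out[i] = 0;
    }

    GLuint texture_;
    int nextY_;
    std::map<std::wstring, CachedString> cache_;
};
```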

Originally posted by jwatte:
I suggest you actually go benchmark the concepts of:

One texture per “text item”

vs

One font texture, and one quad per character

In our case, our application is very text intensive (chat bubbles, communicator windows, etc) and we found draw-to-texture to be faster. Unless you update your text items very often, GDI-to-texture is likely to come out ahead, AND it looks better.

The "cached strings" approach works well, too; "strings" treated loosely. As long as you make sure your cache is big enough that you don't need to re-generate every frame :)

One 1024x1024 texture is likely to fit all text strings you'll need to display on the screen at a time; you can TexSubImage updated strings into this texture when strings change.

The implementation that I opted for with osgText was designed to handle tens of thousands of text labels per second; I'm getting 20fps under such heavy loads, and this is with a GeForce3 Ti200, not the latest and greatest graphics hardware. You quite simply can't get the same performance with thousands of separate textures.

You can also get excellent image quality using separate quads; rendering everything to a single texture gains you nothing in image quality.

What's appropriate for a particular application will depend on the needs of that application. If you are really pushing heavy text loads, then the approach of using separate quads for each glyph and sharing textures is essential. If you have modest needs for text, then rendering whole strings to textures can work fine.

Robert.

I don't see why you believe I suggested "1000s of textures". I said use a single texture and sub-load.

I am writing this commentary more for whoever reads this thread to make a decision; you seem to have a system that works well for you, and that's all good.

It would seem to me that the MORE text you want to draw, the BETTER pre-rendering strings to texture would be, because you won't go vertex-transfer bound as easily. 10,000 strings at 20 frames per second would be 200,000 quads per second using pre-rendered strings, and easily 8,000,000 quads per second if each string averages 40 characters.

If you think generating all those strings in software is expensive, then consider two things:

  1. You can do it while the card is rendering, so it’s no more expensive than waiting on a stalled geometry transfer bound card.
  2. The user can only read maybe 100 characters per second, if he’s REALLY good. If you find yourself updating more than that, then your design is probably not such that the user is intended to read all the text.

You can also get excellent image quality using separate quads; rendering everything to a single texture gains you nothing in image quality.

Please look up “kerning” and “ligatures” in your favourite typography book. It seems that you account for neither of those (but I haven’t actually seen a screen shot, so I don’t know for sure). Also, proper shape-based anti-aliasing with sub-pixel precision is not possible when just blitting a small quad with a single character. My pre-press background is haunting me here – these are important, subtle things that are hard to get right.

I don’t think anyone thinks that putting all your glyphs into one texture isn’t a good idea. In fact it’s essential to get anything remotely like good performance.

Also, just because you are using one glyph per quad doesn't mean you can't kern the glyphs or draw ligatures and composites correctly. FTGL currently handles kerning using the multi-quad technique. Correct anti-aliasing isn't a problem either.
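For reference, the per-pair kerning adjustment that a quad-per-glyph renderer can apply between neighbouring glyphs is queried from FreeType roughly like this; 'face' is assumed to be an already-loaded and sized FT_Face:

```cpp
#include <ft2build.h>
#include FT_FREETYPE_H

// Returns the horizontal kerning (in pixels) to apply between two characters.
float kerningBetween(FT_Face face, wchar_t left, wchar_t right)
{
    if (!FT_HAS_KERNING(face)) return 0.0f;   // many CJK fonts carry no kerning
    FT_Vector delta;
    FT_Get_Kerning(face,
                   FT_Get_Char_Index(face, left),
                   FT_Get_Char_Index(face, right),
                   FT_KERNING_DEFAULT, &delta);
    return delta.x / 64.0f;                   // 26.6 fixed point -> pixels
}
```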

I have to agree with John though that ‘string’ caching is the fastest for the reasons stated as long as you don’t blow your texture budget.

Originally posted by jwatte:
I don't see why you believe I suggested "1000s of textures". I said use a single texture and sub-load.

If each text label has an independent string, then you'll need lots of texture space to fit them all in; with tens of thousands of labels you're going to need many more than a single texture, you're going to need hundreds or even thousands of textures depending on the resolution of your textures.

It would seem to me that the MORE text you want to draw, the BETTER pre-rendering strings to texture would be, because you won't go vertex-transfer bound as easily. 10,000 strings at 20 frames per second would be 200,000 quads per second using pre-rendered strings, and easily 8,000,000 quads per second if each string averages 40 characters.

Indeed, you start to become bound by the bandwidth of passing all the quads down, but this is better than the bandwidth of passing all the texture data required.

A 16x16 glyph with luminance and alpha takes 16x16x2 = 512 bytes.

A single quad takes 4x12 bytes (for the vertex coords) + 4x8 bytes (for the tex coords) = 80 bytes.

This is for a low-resolution glyph, and even here one can see that it's going to be a lot cheaper to send quads than lots of texture data.

One could save on texture bandwidth by using bitmaps, but this results in poor image quality, as one can't take advantage of the anti-aliasing options provided by the likes of FreeType.

If you want text which scales to thousands of labels, then the only sensible option is to load the glyphs into a small set of textures and then render them as quads. The OSG's osgText implementation will often be able to use a single texture for all the text labels, so it can render them without state changes; this means we're getting about as close to optimal as one can get.

Robert.