Rendering large amounts of text.

Do you draw them AT THE SAME TIME though?

You could set aside one big texture, or several smaller textures, for the text that’s actually drawn during the current frame. You’d set aside more space for the formatted bitmaps in main RAM, which hopefully is more plentiful than VRAM.

Then, when rendering, you’d use an LRU cache of texture images; when you need to render a specific block of text, you see if it’s already in a texture; if so, bind the texture, and put the texture first in the LRU list. If not, you take the texture last in the LRU list, and TexSubImage() your prepared data into it, and stick it first in the list.

Your LRU list should be bigger than the rendering needs of a single frame, for ideal frame rate :)
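
For the curious, here's a minimal sketch of that LRU scheme in C++ with classic OpenGL. It assumes all cache textures are the same size and were created up front; names like CachedText and acquireTexture are my own illustration, not from any particular engine:

```cpp
#include <GL/gl.h>
#include <cstdint>
#include <list>

struct CachedText {
    uint64_t textId;   // identifies the formatted block of text
    GLuint   texture;  // texture object holding its rendered bitmap
};

// Front of the list = most recently used. Assumed to be pre-filled
// with enough allocated textures for more than one frame's worth.
std::list<CachedText> lruCache;

// Returns a texture containing the given text block, uploading the
// CPU-side bitmap only on a cache miss.
GLuint acquireTexture(uint64_t textId,
                      const uint8_t* bitmap, int w, int h)
{
    // Cache hit: move the entry to the front and reuse its texture.
    for (auto it = lruCache.begin(); it != lruCache.end(); ++it) {
        if (it->textId == textId) {
            lruCache.splice(lruCache.begin(), lruCache, it);
            return it->texture;
        }
    }
    // Cache miss: recycle the least recently used texture and
    // overwrite its contents with the newly formatted bitmap.
    CachedText victim = lruCache.back();
    lruCache.pop_back();
    glBindTexture(GL_TEXTURE_2D, victim.texture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_ALPHA, GL_UNSIGNED_BYTE, bitmap);
    victim.textId = textId;
    lruCache.push_front(victim);
    return victim.texture;
}
```

On a hit you pay nothing but a bind; on a miss you pay one glTexSubImage2D upload into a recycled texture, which is still much cheaper than creating and destroying texture objects every frame.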
Just curious, regarding details of implementation…

Do you have any recommendations regarding schemes for packing multiple text strings into one texture? I imagine you could get quite complex with this… you basically have a “2D” texture cache. Packing a bunch of rectangular textures into a larger one isn’t solvable in polynomial time… IIRC. Or do you prefer to keep each text string in a separate texture?

We keep each sub-texture power-of-two, and only pack single lines, so they’re all the same height. You then end up with the problem of packing “words” or “lines” into a sequence of available lines, which is significantly easier.

And just because a problem is NP-complete to solve OPTIMALLY doesn’t mean it’s impossible, just that it’s expensive if you want the optimal solution. But we don’t shoot for optimal, just good enough.

IIRC, we split the texture into (height / fontHeight) strips, each of which tracks which parts have been drawn into. Then we do a first-fit scan of each strip/row to find one that has a hole big enough for the new piece of text we want to add.
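
A rough sketch of that first-fit scan, assuming each strip keeps its occupied spans sorted by x; the Span/strips/allocate names are illustrative, not from the post:

```cpp
#include <vector>

struct Span { int x, width; };   // an occupied run within one strip

// One strip per (textureHeight / fontHeight) row; spans sorted by x.
std::vector<std::vector<Span>> strips;

// First-fit: find the first strip with a hole at least `width` pixels
// wide, claim it, and report which strip and x offset were used.
bool allocate(int width, int textureWidth, int* outStrip, int* outX)
{
    for (int s = 0; s < (int)strips.size(); ++s) {
        auto& spans = strips[s];
        int x = 0;                                 // left edge of candidate hole
        size_t i = 0;
        for (; i < spans.size(); ++i) {
            if (spans[i].x - x >= width) break;    // hole before span i fits
            x = spans[i].x + spans[i].width;       // skip past occupied run
        }
        // Either we broke out at a big-enough hole, or we reached the
        // end of the strip and there is enough trailing space.
        if (i < spans.size() || textureWidth - x >= width) {
            spans.insert(spans.begin() + i, Span{x, width});
            *outStrip = s;
            *outX = x;
            return true;
        }
    }
    return false;  // no strip has a big-enough hole; caller must evict
}
```

Freeing is the mirror image: remove the span from its strip, and adjacent holes merge implicitly because only occupied runs are stored.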

@M.Mortier

Problem: big slowdown from 5000+ letters on, due to the high pixel fill, I think… (I want to be able to zoom in and out of the text)

Can you tell us the framerate? I just tested my font renderer. Here are some benchmark results:

1280x1024 @ 32 bpp
GPU: GF Ti 4800 SE
CPU: P4 2.8 GHz

20000+ chars, Tahoma - Regular 26pt
86 FPS

20000+ chars, Tahoma - Regular 42pt
62 FPS

6000+ chars, Tahoma - Regular 42pt
152 FPS

All chars are visible!
All chars are rendered using immediate mode.
No display lists, no vertex arrays.
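
For reference, immediate-mode glyph rendering like the benchmark describes boils down to one textured quad per character. The Glyph metrics table here is hypothetical; a real renderer would fill it from its font atlas:

```cpp
#include <GL/gl.h>

// Hypothetical per-glyph data: atlas texture coordinates plus advance.
struct Glyph { float u0, v0, u1, v1; float width; };
extern Glyph glyphs[256];
extern float fontHeight;

// Immediate mode, as in the benchmark above: no display lists,
// no vertex arrays, just glBegin/glEnd with one quad per char.
void drawString(const char* text, float x, float y)
{
    glBegin(GL_QUADS);
    for (const char* c = text; *c; ++c) {
        const Glyph& g = glyphs[(unsigned char)*c];
        glTexCoord2f(g.u0, g.v0); glVertex2f(x,           y);
        glTexCoord2f(g.u1, g.v0); glVertex2f(x + g.width, y);
        glTexCoord2f(g.u1, g.v1); glVertex2f(x + g.width, y + fontHeight);
        glTexCoord2f(g.u0, g.v1); glVertex2f(x,           y + fontHeight);
        x += g.width;  // simple advance; no kerning
    }
    glEnd();
}
```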

I can’t believe that you have such a big slowdown. It may be something in your code.

yooyo

Thanks! I was curious because I implemented something similar a while back, and did things pretty much the same way as the strip scheme you describe.

I split up a large texture into rows of fontHeight. Each row could only be divided in half, and those halves in half again, and so on, producing a kind of “binary-buddy” tree. My hope was that by “allocating” chunks whose lengths were only powers of 2, I could reduce fragmentation of the texture. For each chunk size (1, 2, 4, … 1024, 2048) I kept a free list. If no chunk of that size was available, I’d start splitting larger ones down. When a chunk got freed, I’d check whether it could be rejoined with its buddy.
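
A condensed sketch of that buddy allocator for one 2048-pixel row; the BuddyRow name and the std::set free lists are my own choices, not necessarily what the original used:

```cpp
#include <set>

// Binary-buddy allocator for one 2048-px row. Chunk sizes are powers
// of two: level 0 = 1 px, level 11 = 2048 px.
class BuddyRow {
public:
    static const int kLevels = 12;

    BuddyRow() { freeLists[kLevels - 1].insert(0); }  // one free 2048-px chunk

    // Allocate a chunk of size (1 << level); returns its x offset, or -1.
    int alloc(int level) {
        int l = level;
        while (l < kLevels && freeLists[l].empty()) ++l;  // find a splittable chunk
        if (l == kLevels) return -1;                      // row is full
        int x = *freeLists[l].begin();
        freeLists[l].erase(freeLists[l].begin());
        while (l > level) {                     // split down to the wanted size:
            --l;
            freeLists[l].insert(x + (1 << l));  // right half becomes free,
        }                                       // left half keeps splitting
        return x;
    }

    // Free a chunk; coalesce with its buddy as long as the buddy is free.
    void free(int x, int level) {
        while (level < kLevels - 1) {
            int buddy = x ^ (1 << level);       // buddy address at this level
            auto it = freeLists[level].find(buddy);
            if (it == freeLists[level].end()) break;
            freeLists[level].erase(it);         // merge: remove the buddy,
            x &= ~(1 << level);                 // keep the lower address
            ++level;
        }
        freeLists[level].insert(x);
    }

private:
    std::set<int> freeLists[kLevels];           // free chunk offsets per size
};
```

To place a string, you'd round its pixel width up to the next power of two and pass the matching level to alloc; the rounding wastes some texels, but it keeps fragmentation bounded, which was the point of the scheme.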