Fillrate vs. Framerate

I’ve got a 3DLabs VX1 graphics card, which claims to have a fillrate of 220 Mtexels/sec (the data sheet’s here if you want to take a look).

I’m pretty sure that running fullscreen at 1152x864 I’m fillrate bound (it’s a simple scene and runs a lot faster at lower rez) and getting about 15 fps. I wanted to compare this number to the advertised fillrate, so I just did this:

(220 Mpixels/sec) / (1152x864 pixels/frame) = 220(ish) frames/sec.

I’m pretty sure I’m missing an important point here.

Anyone want to point out my foolishness?
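For what it’s worth, the back-of-the-envelope calculation above looks like this as a snippet (assuming the advertised 220 Mpixels/s figure and that every pixel gets written exactly once, with no overdraw):

```python
# Naive fps estimate from advertised fill rate, assuming each screen
# pixel is written exactly once per frame (no overdraw, no clears).
FILL_RATE = 220e6          # advertised fill rate, pixels/sec
WIDTH, HEIGHT = 1152, 864  # fullscreen resolution

pixels_per_frame = WIDTH * HEIGHT   # 995,328 pixels
fps = FILL_RATE / pixels_per_frame
print(f"{fps:.0f} fps")             # ~221 fps
```

The gap between this ceiling and the measured 15 fps is exactly what the rest of the thread tries to explain.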

Follow up:
Well, I’ve determined that color depth is at least one of the factors I missed. The numbers above are for 32 bit color. At 16 bit color I get an additional 5-6 fps. Still nowhere near the numbers I’m calculating though.

[This message has been edited by Rob (edited 04-13-2000).]


It’s megatexels, not megapixels!
Also note that a lot of pixels are drawn many times in the same frame!

So enlighten me-- What’s the difference between a texel and pixel? I always assumed a texel was a texture-mapped pixel, in which case my numbers should still make sense.

Is there a reasonable way for me to predict what framerate I should get based on the fillrate?

A texel is a point of texture. For a pixel to end up with the correct color, more than one texel color may be blended together, or, because of z-buffering, that pixel may have been written with 5 different texels before the right color is selected.


You can’t just calculate the framerate like that (where you got 220 fps). The card is actually doing a lot more than just filling pixels into the framebuffer. Every frame you have to clear the color buffer and the depth buffer; this is done by the card and takes time. You are also (very likely) drawing overlapping polygons, which means you draw the same pixel more than once, reducing performance. Blending, shading and depth testing also reduce performance. When you account for everything that’s done, you probably end up somewhere close to 220 Mtexels/sec (if they didn’t “by accident” write a larger number ).

By the way, when I calculated the framerate the way you probably did, I got about 55 fps (four times lower than yours). Are you sure you didn’t forget to multiply the resolution by four to get the actual number of bytes your screen takes (32 bit = 4 bytes)?
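The factor-of-four difference between the two estimates is just bytes per pixel at 32-bit color. A quick sketch of both readings (illustrative numbers only; the next reply argues the bytes reading is the wrong one):

```python
# Same fill-rate figure read two ways: as pixels/s vs as bytes/s
# at 32-bit color (4 bytes per pixel).
FILL_RATE = 220e6
pixels_per_frame = 1152 * 864

fps_by_pixels = FILL_RATE / pixels_per_frame        # ~221 fps
fps_by_bytes = FILL_RATE / (pixels_per_frame * 4)   # ~55 fps
print(f"{fps_by_pixels:.0f} fps vs {fps_by_bytes:.0f} fps")
```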


Bob, megapixels is not the same thing as megabytes either. No graphics card today fills memory byte for byte.

Of course they don’t, but that doesn’t mean they fill the same number of pixels at any given color depth. A higher color depth means fewer pixels can be filled per unit of time.

Right, 32 bit is slower than 16 bit, but that’s because of bandwidth limitations to the memory. The theoretical fillrate, however, is the rate at which the graphics chip can spit out pixels, which is independent of the memory. But if memory lags behind, the graphics pipeline stalls, which means no further pixels can be rendered until memory catches up with the chip. This is more likely to happen at higher color depths, but 16 bit is usually not twice as fast as 32 bit; more like 10-40% faster, depending on memory interface and graphics chip architecture.
So, 220 is the correct number (but of course only theoretical).
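Putting the thread’s points together, a rough estimate that charges the fill rate for overdraw and buffer clears might look like this. The overdraw and clear figures are made-up illustrative values, not measurements from the VX1:

```python
# Rough fps estimate once overdraw and per-frame buffer clears
# eat into the theoretical fill rate. OVERDRAW and CLEARS are
# hypothetical example values, not measured numbers.
FILL_RATE = 220e6
pixels_per_frame = 1152 * 864
OVERDRAW = 2.5   # hypothetical: each pixel written ~2.5 times on average
CLEARS = 2.0     # color clear + depth clear, roughly one screenful each

effective_pixels = pixels_per_frame * (OVERDRAW + CLEARS)
fps = FILL_RATE / effective_pixels
print(f"~{fps:.0f} fps")
```

Even modest overdraw plus the two clears drags the ceiling well below the naive ~220 fps, though still far above the measured 15 fps, which suggests other costs (geometry, state changes, memory stalls) matter too.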

[This message has been edited by Humus (edited 04-14-2000).]

Actually, 220 Mtexels/s is a theoretical limit for dual texturing. In single-texture mode the fillrate is 110 Mtexels/s.

(in both cases pixel fillrate = 110Mpixels/s)
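If the pixel fillrate really is 110 Mpixels/s regardless of texturing mode, then the no-overdraw fps ceiling at 1152x864 is about half the earlier estimate:

```python
# Ceiling estimate using the 110 Mpixels/s pixel fill rate quoted above,
# again assuming every pixel is written exactly once (no overdraw).
PIXEL_RATE = 110e6
pixels_per_frame = 1152 * 864

fps_ceiling = PIXEL_RATE / pixels_per_frame
print(f"~{fps_ceiling:.0f} fps ceiling")
```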