I wonder where the difference between the theoretical and practical texture upload rate comes from. On Nvidia's 6800U I get approximately 1GB/s if the driver has a good day and the format and texture size are not evil.
Consider that AGP 8x has a peak transfer rate of 2GB/s and PCI-Express x16 has a peak of 4GB/s (correct me if I'm wrong).
Where does the difference come from? That's already a factor of 2 or 4, and often the factor is 8 or more, nearly an order of magnitude below peak. Some might say, 1GB/s, OMG, that's a hell of a lot, but consider that we need that kind of bandwidth for streaming or image processing to use GPUs successfully.
If you have good AGP drivers you can achieve up to ~1.8GB/sec on AGP 8x systems. There are a few rules:
- Use the GL_BGRA pixel format (it matches the card's native layout, so the driver can DMA without swizzling)
- Use PDR (NV_pixel_data_range) or PBO (ARB_pixel_buffer_object), ideally multiple PBOs in round-robin
- Create the texture once and update it with glTexSubImage2D (don't re-create it with glTexImage2D every frame).
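The rules above can be sketched roughly like this. This is a minimal PBO-path sketch, not a complete program: it assumes a live GL context with ARB_pixel_buffer_object, GLEW (or similar) for the entry points, and a W x H GL_RGBA8 texture already created at startup; `fill_frame` is a hypothetical function standing in for whatever produces your pixels.

```c
#include <GL/glew.h>

/* Hypothetical producer: writes one frame of BGRA pixels into dst. */
extern void fill_frame(void *dst, int w, int h);

void upload_frame(GLuint pbo, GLuint tex, int W, int H)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);

    /* Orphan the old storage so the map below doesn't stall on the GPU. */
    glBufferData(GL_PIXEL_UNPACK_BUFFER_ARB, W * H * 4, NULL, GL_STREAM_DRAW);

    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, GL_WRITE_ONLY);
    fill_frame(dst, W, H);          /* write pixels sequentially, never read */
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER_ARB);

    /* With a PBO bound, the last argument is an offset into the buffer,
     * not a client pointer; GL_BGRA avoids a swizzle in the driver. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_BGRA, GL_UNSIGNED_BYTE, (const GLvoid *)0);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}
```

With multiple PBOs you would alternate between them each frame (fill PBO A while the driver is still DMAing from PBO B), which is what lets the CPU copy and the bus transfer overlap.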
Originally posted by yooyo:
If you have good AGP drivers you can achive on AGP 8x systems up to ~1.8GB/sec.
Really, that’s nice. Could you tell the exact hardware+software combination you used to get 1.8GB/s?
Has anybody got measured numbers for PCI-Express? It should double the AGP numbers, right?
Almost any AGP 8x Nvidia based card with newer drivers and a good AGP driver. I'm using PDR and the GL_BGRA pixel format.
I tried this on a GF4800SE, FX5900XT, 6800GT and 6800U (all AGP 8x), mainly on Intel chipsets, with a wide range of nvidia drivers. Be sure to install the correct motherboard chipset drivers.
There's nothing more specific to it. Texture uploading isn't so exotic. Take a look at the end of the PBO extension spec and you'll see an example of PBO (or PDR) texture streaming.