Pixel Buffer Objects - supported pixel formats

Hi all,

when trying to fill a 2D texture from a PBO,
I noticed that the pixel formats
RGB, RGBA and BGR are extremely slow.
They were indeed so slow that it seemed to me the driver does a roundtrip from the card memory to CPU RAM and back to the card in this case.

I used a Quadro FX2000 with a 61.xx driver!

Using the BGRA pixel format, everything seemed to work well.
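
Roughly what I'm doing (a minimal sketch; the function and variable names are placeholders of mine, and I'm assuming the ARB_pixel_buffer_object tokens and GLEW for the entry points):

```c
#include <string.h>
#include <GL/glew.h>   /* assuming GLEW provides the PBO entry points */

/* Upload one width x height BGRA8 frame into an existing 2D texture
 * through a pixel unpack PBO. The PBO could just as well be filled
 * on the card, e.g. by a glReadPixels readback, instead of a memcpy. */
void upload_bgra_via_pbo(GLuint tex, GLuint pbo,
                         int width, int height,
                         const unsigned char *src)
{
    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);

    /* Orphan the old contents so the driver need not stall. */
    glBufferDataARB(GL_PIXEL_UNPACK_BUFFER_ARB,
                    (GLsizeiptrARB)width * height * 4,
                    NULL, GL_STREAM_DRAW_ARB);

    void *dst = glMapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB,
                               GL_WRITE_ONLY_ARB);
    memcpy(dst, src, (size_t)width * height * 4);
    glUnmapBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB);

    /* With an unpack PBO bound, the last argument of glTexSubImage2D
     * is a byte offset into the buffer, not a client pointer.
     * GL_BGRA is the fast path here; GL_RGBA/GL_RGB/GL_BGR were the
     * slow ones in my tests. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_BGRA, GL_UNSIGNED_BYTE, (const GLvoid *)0);

    glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
}
```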

I do not notice a speed difference between the RGBA and BGRA formats when filling the texture from a CPU RAM array.

Does anybody know which pixel formats are supported by NVIDIA hardware when uploading out of a PBO, and whether it is likely
that the other formats will also be supported in the future?
Or am I simply the last person that still uses
the RGBA format :slight_smile:

Thanks!

Yup. The rest of the world uses BGRA :slight_smile:

I’m using 16-bit floating point RGBA PBOs and they seem to work just fine.

FYI I’m using a GF 6800 GT with 66.00 drivers.
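
Something along these lines, in case it helps (a sketch with placeholder names; I'm assuming the ARB_texture_float and ARB_half_float_pixel tokens, though on NV hardware the NV-specific equivalents would also do):

```c
/* Allocate a 16-bit float RGBA texture once, then update it from the
 * bound unpack PBO. tex, pbo, width and height are placeholders. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT_ARB, NULL);

glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, pbo);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_HALF_FLOAT_ARB, (const GLvoid *)0);
glBindBufferARB(GL_PIXEL_UNPACK_BUFFER_ARB, 0);
```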

Greetz,

Nico

Thanks for that information Nico.

I didn’t use floating-point textures but
GL_RGBA8 as the internal texture format, with the data passed in as GL_UNSIGNED_BYTE.

So it seems that BGRA8 allows a faster
copy from the PBO to the texture without
any further permutation of the components.
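
In other words, the combination that is fast for me is this (a sketch, names are mine):

```c
/* RGBA8 internal format, BGRA/UNSIGNED_BYTE external format:
 * the combination that avoided the slow path in my tests.
 * Allocation is done once up front; updates then go through
 * glTexSubImage2D from the PBO as above. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, NULL);
```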

By the way, does anyone know what the internal
texture format looks like? Does it look the same
as in the PBO (indexed along scanlines), or is there
some kind of cache tiling (block indexing) done?
If the latter is not the case, it would be great if one could write directly into a texture.

I would be very, very surprised if textures were not swizzled into tiles. My guess would be somewhere along the lines of 8x8 tiles, which gives you 256 bytes per tile for 8-bit 4-component textures. Another alternative, possibly more popular on lower-end hardware, would be 4x4, which coincidentally corresponds to a DXT compression block…
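
Just to illustrate what I mean by swizzling into tiles, here is a purely hypothetical addressing scheme, not what any actual chip does:

```c
#include <stddef.h>

/* Hypothetical byte-offset calculation for a linear-within-tile 8x8
 * layout at 4 bytes per texel (so 8*8*4 = 256 bytes per tile).
 * Real hardware layouts are undocumented and very likely fancier,
 * e.g. Morton/Z-order swizzling within and across tiles. */
size_t tiled_offset(int x, int y, int width)
{
    const int TILE = 8, BYTES_PER_TEXEL = 4;
    int tiles_per_row = width / TILE;      /* assumes width % 8 == 0 */
    int tile_x = x / TILE, in_x = x % TILE;
    int tile_y = y / TILE, in_y = y % TILE;

    /* Whole tiles before this one, then the texel within the tile. */
    size_t tile_base = ((size_t)tile_y * tiles_per_row + tile_x)
                       * TILE * TILE * BYTES_PER_TEXEL;
    return tile_base + ((size_t)in_y * TILE + in_x) * BYTES_PER_TEXEL;
}
```

The point of such a layout is that a small square neighborhood of texels, the kind a bilinear fetch touches, lands in one cache line instead of several widely separated scanlines.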

In fact, the GL vendors have previously given advice that texturing out of renderable textures may be slower for certain orientations than for others. If that’s not a gross hint as to how things work internally, well, anything can happen :slight_smile:

I did some searching in this forum’s history, and
it seems that even 2D textures are swizzled.
This seems to be one of the biggest issues
in render-to-texture.

See e.g. the links

Link 1
or also
Link 2

So if the data has to be swizzled anyway when
doing a glTexSubImage() call from the PBO, maybe
there is hope that the RGBA format will also be
supported for this purpose on NVIDIA hardware
in the future. :slight_smile: