I’d like some clarification on how OpenGL arranges and stores pixels when using methods like glReadPixels() and glGetTexImage(), and whether that layout can be altered. I’ve been working in the Blender Game Engine, using PyOpenGL to perform some render-to-texture operations, and I’d like to be able to take these products and use them in the BGE. It seems that Blender’s native methods for reading an image from a buffer require the pixel data to be stored in a single-dimensional array, like so:
[r1, g1, b1, a1, ... rN, gN, bN, aN]
Unfortunately, PyOpenGL seems to store pixel data using multi-dimensional arrays, which are accessed something like this:
pixelData[R, G, B or A][location_x][location_y]
Is this arrangement of data particular to PyOpenGL’s implementation of these methods, or is this how OpenGL itself structures its data?
Also, is there a way to get OpenGL to conform to Blender’s requirements for a one-dimensional array of data?
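In case it helps clarify what I’m after, here’s a sketch of the conversion I’m hoping to achieve, using NumPy. The array shape and contents here are stand-ins I made up for illustration, not actual PyOpenGL output:

```python
import numpy as np

# Stand-in for pixel data as a multi-dimensional array
# (a hypothetical 2x2 RGBA image; shape is my assumption).
pixels = np.arange(2 * 2 * 4, dtype=np.uint8).reshape(2, 2, 4)

# Flatten to the [r1, g1, b1, a1, ... rN, gN, bN, aN] layout
# that Blender's buffer-reading methods appear to expect.
flat = pixels.ravel()  # row-major order interleaves channels per pixel

print(list(flat))
```

If the data really does come back indexed by channel first, I assume I’d need a transpose before flattening, but I’d rather avoid guessing at the layout.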