Pixel buffer objects not working while glTexSubImage2D does

Hello, I am trying to pass a matrix of short (16-bit) values to a 16-bit short texture. When I flatten the matrix and then pass the 1D array to OpenGL, all works well; yet this flattening is relatively slow, and I suppose OpenGL has some more performant way to do it.
I tried what is below, without success


It’s hard to answer without knowing what the source layout of this matrix is. However, it would probably be faster to just make the matrix class always work “flattened”.

Thank you @Alfonse_Reinheart, yet I need to keep it in an n-dimensional array. The OpenGL side is just doing some segmentation visualization of a 3D computed-tomography scan, and I am purposefully sending only slices to the GPU instead of the whole image, in order to keep GPU memory free for the segmentation computations, which need a 3- or 2-dimensional array.

As for the layout: the data is stored in HDF5 and loaded in Julia into an Int16 3-dimensional array; I then take a slice and get an Int16 2-dimensional array.

As stated earlier, Julia's Int16 is the same as OpenGL's short: with this "flattened" 1-dimensional array, all works well.
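For reference, that size match can be sanity-checked in plain Julia (OpenGL's GLshort / GL_SHORT is a signed 16-bit integer):

```julia
# Julia's Int16 is a signed 16-bit integer, the same representation
# OpenGL expects for GL_SHORT texture data.
@assert sizeof(Int16) == 2            # two bytes per element
@assert typemin(Int16) == -32768      # signed range matches GLshort
@assert typemax(Int16) == 32767
println("Int16 matches a 16-bit signed GLshort")
```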

If the data cannot be used in a contiguous fashion, then you’re going to have to pay the cost of making the data contiguous so that OpenGL can use it.

Note that we’re talking about the data being contiguous in memory. I know nothing about Julia or how it deals with multidimensional arrays. Does it store them contiguously in some way? If so, what is the layout that it uses to store its data?
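One quick way to answer that question for any concrete Julia array or view is to inspect its strides. A sketch (`iscontig2d` is a hypothetical helper, not a standard function):

```julia
# Hypothetical helper: a 2D array (or view) is contiguous in memory
# exactly when its strides are (1, number_of_rows).
iscontig2d(A) = strides(A) == (1, size(A, 1))

B = rand(Int16, 4, 4, 3)              # stand-in for a 3D CT volume
@show iscontig2d(view(B, :, :, 2))    # slice along the LAST dim: contiguous
@show iscontig2d(view(B, 2, :, :))    # slice along the first dim: strided
```

Slicing along the last dimension of a column-major array yields a contiguous view, so such a slice can be handed to OpenGL without any copy; slicing along an earlier dimension cannot.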

Thanks for the response!
Yes, according to the documentation, elements are stored contiguously in column-major order.
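That column-major contiguity can be seen directly in plain Julia (a small demonstration, no OpenGL involved); it also means `vec` flattens without copying:

```julia
# A Julia matrix is stored contiguously in column-major order, so
# "flattening" with vec() is a zero-copy view of the same buffer.
A = Int16[1 2 3;
          4 5 6]          # 2 rows x 3 columns

flat = vec(A)             # no copy: shares memory with A
println(flat)             # Int16[1, 4, 2, 5, 3, 6]  (columns first)
println(strides(A))       # (1, 2): step 1 within a column, 2 between columns
println(pointer(flat) == pointer(A))   # same underlying buffer
```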

Column-major? That sounds like a problem that can be solved by re-arranging your texture coordinates (i.e., U is the column, V is the row, W is the depth).
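The effect of that swap can be sketched in plain Julia (`texel` below is a hypothetical 0-based row-major fetch standing in for GLSL's `texelFetch`, not a real API): if the column-major buffer is uploaded declaring the texture width to be the number of rows, a fetch at (u, v) lands on element (row u, column v), so swapping the roles of U and V in the shader yields the correct image.

```julia
# Simulate a row-major texture fetch (as OpenGL sees an uploaded buffer)
# over a Julia (column-major) matrix flattened with vec().
A = Int16[1 2 3;
          4 5 6]                 # 2 rows x 3 columns
buf = vec(A)                     # column-major data, as it would be uploaded
W = size(A, 1)                   # declare texture width = number of ROWS

# hypothetical helper: 0-based row-major fetch, like texelFetch in GLSL
texel(buf, W, x, y) = buf[y * W + x + 1]

# fetching at (x, y) returns A[x+1, y+1]: U indexes the row, V the column
@assert all(texel(buf, W, x, y) == A[x + 1, y + 1]
            for x in 0:size(A, 1)-1, y in 0:size(A, 2)-1)
println("coordinate swap verified")
```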


Thank you, now it works! Perfect!