Model to Texture Problems


I’m trying to get a model and a bool array onto the GPU to do some calculations with them. There were not many problems putting the model into a texture (after I noticed that GL maps all values to [0;1]). The bigger problem is the bool matrix.
Because this matrix is quite big (numVert x numVert) but effectively holds only one bit per vertex, I packed 8 bits into a byte, with the goal of extracting them again on the GPU. That should not be a problem with a little code snippet like:

bool b = (val & (128 >> (gl_VertexID & 7))) != 0;
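For reference, the matching CPU-side packing (MSB first, so it lines up with the `128 >> (i & 7)` mask used in the shader) might look like this C sketch; the function names are just illustrative:

```c
#include <stddef.h>
#include <stdint.h>

/* Pack an array of 0/1 flags into bytes, most significant bit first,
 * matching the shader-side mask 128 >> (index & 7). */
static void pack_bits(const int *bits, size_t n, uint8_t *out)
{
    for (size_t i = 0; i < n; i++) {
        if (i % 8 == 0)
            out[i / 8] = 0;          /* start a fresh byte */
        if (bits[i])
            out[i / 8] |= (uint8_t)(128 >> (i & 7));
    }
}

/* CPU mirror of the shader extraction, useful for verifying the
 * packing before uploading the bytes as a texture. */
static int extract_bit(const uint8_t *bytes, size_t i)
{
    return (bytes[i / 8] & (128 >> (i & 7))) != 0;
}
```

The same index `i` that becomes `gl_VertexID` on the GPU selects byte `i / 8` and bit `i & 7` here, so both sides agree on the layout.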

So I got 8 bool values coded in a byte. To get these bytes into a texture I built up a texture with the parameters GL_RED and GL_UNSIGNED_BYTE. That’s not space-efficient, but it should work, I think.
But then the problem starts: when I try to read the byte on the GPU with

int val = int(texture2DRect(tex, vec2(loopID, vertID)).r); // "byte" is a reserved word in GLSL, so use another name

there is no correct int value in the range [0;255]. That is expected, because GL normalized the texture values to [0;1]. But transforming them back to [0;255] by multiplying by 256 does not work for me.
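For what it’s worth, a normalized GL_UNSIGNED_BYTE texel is converted with a fixed divisor, f = b / 255.0 (the maximum of the data type, not the largest value present in the texture), so scaling back by 256 overshoots at b = 255, while round(f * 255) recovers every byte exactly. A small C sketch of that arithmetic:

```c
/* How GL converts GL_UNSIGNED_BYTE data in a normalized format:
 * f = b / 255.0, i.e. divided by the type's maximum value. */
static float normalize_byte(int b) { return (float)b / 255.0f; }

/* Exact recovery on the shader side: scale by 255 and round to
 * nearest (adding 0.5 before truncation). */
static int recover_byte(float f) { return (int)(f * 255.0f + 0.5f); }
```

The same two lines translate directly into GLSL (`int(f * 255.0 + 0.5)`), since the conversion rule is the same there.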

So my Questions are:

  • How is the conversion done in GL? Does GL look for the biggest value in the texture and divide by it, or does it use the maximum value of the texture’s data type to map it to a float?
  • Does anyone know a good way to get my float from the texture back to an int? (ceil/floor? rounding would be great)

I hope someone can help me with this problem. I really wonder how to get things working with GPGPU.


A little update after reading this post:
I switched my texture data type to GL_RED_INTEGER to skip the conversion to float, hoping to get the red byte directly as an integer in my shader. Can anyone confirm that I can use texture2DRect with the byte texture in #version 120 with the NPOT textures extension?
Got that info from:

