# Problem with TBOs: multiple samplerBuffers

Hello Everyone =)

I would like to write a little program that draws a model using several buffers containing my vertex positions and only one buffer for the indices. In fact, I’m trying to address Laobracusa’s problem as explained in this thread:
http://www.opengl.org/discussion_boards/…6432#Post276432

My explanation is quite long and English is not my mother tongue, but I would be grateful if you could help me with this problem.

As Dark Photon and Alphonse Reinheart suggested, I’m using a TBO and a GLSL vertex shader to do the trick, but I’m still quite new to GLSL. So I have a single triangle with a VBO containing some dummy positions, and the indices are simply 0, 1, 2.

``````
GLuint vboIndices;
GLuint vboVertices;

float VerticesDummy[] = {
    -1.0f, 0.0f, 0.0f, // V0D
     1.0f, 0.0f, 0.0f, // V2D
     0.0f, 1.0f, 0.0f  // V1D
};

// unsigned to match the GL_UNSIGNED_INT index type used when drawing
unsigned int IdxDummy[] = { 0, 1, 2 };

// Dummy Triangle : biggest one
// ----------------------------

glGenBuffers(1, &vboIndices);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vboIndices);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * 3, IdxDummy, GL_STATIC_DRAW);

glGenBuffers(1, &vboVertices);
glBindBuffer(GL_ARRAY_BUFFER, vboVertices);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * 3 * 3, VerticesDummy, GL_STATIC_DRAW);

``````

I want to draw two triangles: a small one inside a bigger one. So I create two TBOs, each containing the three vertices of one triangle.

``````
GLuint tboVertices[2];

float VerticesT1[] = {
    -0.5f, 0.0f, 0.5f, 1.0f, // V0
     0.5f, 0.0f, 0.5f, 1.0f, // V1
     0.0f, 0.5f, 0.5f, 1.0f  // V2
};

float VerticesT2[] = {
     0.25f, 0.25f, 0.75f, 1.0f, // V3
    -0.25f, 0.25f, 0.75f, 1.0f, // V4
     0.00f, 0.00f, 0.75f, 1.0f  // V5
};

glGenBuffers(2, tboVertices);

// Big Triangle : A
glActiveTexture(GL_TEXTURE0);
glBindBuffer(GL_TEXTURE_BUFFER, tboVertices[0]);
glBufferData(GL_TEXTURE_BUFFER, sizeof(float) * 4 * 3, VerticesT1, GL_STATIC_DRAW);
glTexBufferEXT(GL_TEXTURE_BUFFER, GL_RGBA32F, tboVertices[0]);
glUniform1iARB("positionBuffer1", 0); // pseudo-code : I'm using glUniform correctly

// Small Triangle inside the big one : B
glActiveTexture(GL_TEXTURE1);
glBindBuffer(GL_TEXTURE_BUFFER, tboVertices[1]);
glBufferData(GL_TEXTURE_BUFFER, sizeof(float) * 4 * 3, VerticesT2, GL_STATIC_DRAW);
glTexBufferEXT(GL_TEXTURE_BUFFER, GL_RGBA32F, tboVertices[1]);
glUniform1iARB("positionBuffer2", 1);

``````

Then I draw that single dummy triangle several times (twice in this example) with glDrawElementsInstanced(); the positions are “updated” by a GLSL vertex shader.

``````
glEnableClientState(GL_VERTEX_ARRAY);

glColor3f(1.,1.,1.);
glBindBuffer(GL_ARRAY_BUFFER, vboVertices);
glVertexPointer(3, GL_FLOAT, 0, (char *) NULL);
glDrawElementsInstanced(GL_TRIANGLES, 3, GL_UNSIGNED_INT, (char *) NULL, 2); // index type must be GL_UNSIGNED_INT; GL_INT is not a valid index type

glDisableClientState(GL_VERTEX_ARRAY);

``````

In my vertex shader, I pick the real vertices from the position samplerBuffers. The first triangle (A) is in positionBuffer1 and the second one (B) is in positionBuffer2.

``````
#extension GL_EXT_texture_buffer_object : enable

uniform samplerBuffer positionBuffer1;
uniform samplerBuffer positionBuffer2;

void main(void)
{
    vec4 position = gl_Vertex;

    if (gl_InstanceID == 0)
    {
        position = texelFetchBuffer(positionBuffer1, gl_VertexID);
        gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
    if (gl_InstanceID == 1)
    {
        position = texelFetchBuffer(positionBuffer2, gl_VertexID);
        gl_FrontColor = vec4(0.0, 1.0, 0.0, 1.0);
    }
    gl_Position = gl_ModelViewMatrix * position;
}

``````

My problem is that my program seems to draw only the last-bound triangle (B), twice. I thought that using texture units would allow me to pass multiple samplerBuffers, but apparently that is not the case. Another problem is that I can’t get the correct indices from an isamplerBuffer. In the second example, I add a TBO to pass the triangle indices.

``````
// One texel per triangle; the 4th index is padding
int Indices[] = {
    0, 1, 2, 0,
    0, 2, 1, 0
};

GLuint tboIndices;

glGenBuffers(1, &tboIndices);
glActiveTexture(GL_TEXTURE2);
glBindBuffer(GL_TEXTURE_BUFFER, tboIndices);
glBufferData(GL_TEXTURE_BUFFER, sizeof(int) * 4 * 2 , Indices, GL_STATIC_DRAW);
glTexBufferEXT(GL_TEXTURE_BUFFER, GL_RGBA32I, tboIndices); // RGBA32I: the index data above is padded to 4 ints per triangle
glUniform1iARB("indexBuffer", 2);

``````

My vertex shader would then be as follows.

``````
uniform samplerBuffer positionBuffer1;
uniform samplerBuffer positionBuffer2;
uniform isamplerBuffer indexBuffer;

void main(void)
{
    vec4 position = gl_Vertex;
    ivec4 index = texelFetchBuffer(indexBuffer, gl_InstanceID);
    int vertexIndex = 0; // initialized so it is never read undefined

    if (gl_VertexID == 0)
        vertexIndex = index.x;
    if (gl_VertexID == 1)
        vertexIndex = index.y;
    if (gl_VertexID == 2)
        vertexIndex = index.z;

    if (gl_InstanceID == 0)
    {
        position = texelFetchBuffer(positionBuffer1, vertexIndex);
        gl_FrontColor = vec4(1.0, 0.0, 0.0, 1.0);
    }
    if (gl_InstanceID == 1)
    {
        position = texelFetchBuffer(positionBuffer2, vertexIndex);
        gl_FrontColor = vec4(0.0, 1.0, 0.0, 1.0);
    }
    gl_Position = gl_ModelViewMatrix * position;
}

``````

I think the problem is probably in passing multiple samplerBuffers to the shader. Besides, I’m using 4-component vertices, but I would like to use only 3. However, texelFetchBuffer() seems to always read 4 components. I tried to modify the internal texture format, but it doesn’t seem to work with anything but RGBA. Is there a way to fetch 3-component vertices?

Thank you very much for reading my post. I hope you can help me understand the problem.

I have finally found the problem: I forgot to bind the texture with glBindTexture() after activating the texture unit.
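In case it helps anyone else, here is a sketch of the corrected setup (it reuses the tboVertices names from my earlier snippets; texVertices is a new texture object array introduced here, and a valid GL context is assumed):

```c
// The missing piece: a buffer *texture* object (not just the buffer object)
// must be bound to the active unit with glBindTexture() before
// glTexBufferEXT() attaches the buffer's data store to it.
GLuint texVertices[2];
glGenTextures(2, texVertices);

// Big triangle A on unit 0
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, texVertices[0]); // <- this call was missing
glBindBuffer(GL_TEXTURE_BUFFER, tboVertices[0]);
glTexBufferEXT(GL_TEXTURE_BUFFER, GL_RGBA32F, tboVertices[0]);

// Small triangle B on unit 1
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_BUFFER, texVertices[1]); // <- this call was missing
glBindBuffer(GL_TEXTURE_BUFFER, tboVertices[1]);
glTexBufferEXT(GL_TEXTURE_BUFFER, GL_RGBA32F, tboVertices[1]);
```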

Sorry to bother you again, but I still have a question. Is there a way to read a texture buffer as an RGB buffer rather than an RGBA buffer, even though texelFetchBuffer() returns a gvec4?

The internal texture format doesn’t seem to help.

What’s your goal here? The GLSL texture sampling functions always return a 4-component vector IIRC, regardless of the number of underlying components (except for shadow-map lookups).

You can make the texture buffer one component only (luminance), and then in the shader just pay attention to the .x component of the value returned by the texture sampling function.

Sorry, I didn’t explain my goal very well. As I’m not very sure of my English, I’ll give you an example. Suppose this is a TBO of indices used to draw three triangles:

``````
int Indices[] = {
    0, 1, 2, 0,
    0, 2, 1, 0,
    1, 2, 0, 0
};

``````

texelFetchBuffer() returns a vec4 of (0,1,2,0) for the first instance of glDrawElementsInstanced(). For the second instance it returns (0,2,1,0), and (1,2,0,0) for the last. In that case, I just ignore the w component of the returned vec4. However, I would like not to have to send a useless 4th index for each triangle, as in:

``````
int Indices[] = {
    0, 1, 2,
    0, 2, 1,
    1, 2, 0
};

``````

The function would still return (0,1,2,0) for the first instance, but I would get (0,2,1,1) for the second and (1,2,0,?) for the last. That way, I would only need to pay attention to the x, y and z components. Is this possible?

I hope I’ve been clearer this time. Thank you very much for helping me with this problem.

Allocate the texture buffer with an RGB32F format. Or allocate it as LUMINANCE32F, call texelFetchBuffer() three times, and just use the .x of each.
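The luminance variant could look something like this in the shader (a sketch; `indexBuffer` is assumed to be bound to a one-int-per-texel buffer, with the indices stored tightly packed):

```glsl
uniform isamplerBuffer indexBuffer; // one index per texel, no padding

// Triangle i occupies texels 3*i .. 3*i+2; only .x carries data.
ivec3 fetchTriangleIndices(int i)
{
    return ivec3(texelFetchBuffer(indexBuffer, 3 * i + 0).x,
                 texelFetchBuffer(indexBuffer, 3 * i + 1).x,
                 texelFetchBuffer(indexBuffer, 3 * i + 2).x);
}
```

Three fetches per triangle instead of one, but the buffer stays 3 ints per triangle.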

One little bit:

GL 3.x does not support 3-component texture buffer objects; only GL 4 does (or drivers that list GL_ARB_texture_buffer_object_rgb32 in the extension string).

Thank you very much everyone =)

I tried again with the RGB32F format, but it didn’t work since I only have OpenGL 3.3. So I’ll just have to wait for a new GPU with OpenGL 4.0.

Hello !

I now have an ATI graphics card with OpenGL 4. I installed the latest ATI driver (10.8), but it seems that it still doesn’t support the RGB32F and RGB32UI formats. T__T

I used this in my shader:

``````
#version 400
#extension GL_EXT_texture_buffer_object_rgb32 : enable
#extension GL_ARB_texture_buffer_object_rgb32 : enable

``````

Neither of these extensions seems to be supported. However, when I use glGetString(GL_EXTENSIONS), GL_EXT_texture_buffer_object_rgb32 is in the list.

Does anyone have an idea why I still can’t use the RGB32 formats even with OpenGL 4?

> but it seems that it still doesn’t support RGB32F and RGB32UI format T__T

So, what does it do? Do you get any GL errors?

> I used this in my shader:

The extension doesn’t extend GLSL.

> However, when I use glGetString(GL_EXTENSIONS), GL_EXT_texture_buffer_object_rgb32 is in the list.

There’s no such thing as GL_EXT_texture_buffer_object_rgb32. There is GL_ARB_texture_buffer_object_rgb32.

Hi Alfonse,

I’m with Kohane on this shader. We have been trying to build one single mesh from multiple buffers. We have managed to create a shader that handles multiple position buffers and a single index buffer thanks to your and DK’s suggestions in this post http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=276432#Post276432 (btw, thank you again). However, there’s this 4-component problem.

To answer your question: no, there’s no error from GL; it simply handles the buffer as if it were a 4-component GL_RGBA buffer. The GL_RGB32 flag seems to be ignored or something. Actually, when building the shader program we get a warning stating that GL_ARB_texture_buffer_object_rgb32 is not supported.

About your comment “The extension doesn’t extend GLSL”: what do you mean exactly? Do you mean that if the extension is in the extension list we don’t need to enable it (is it enabled by default)? And if it is not in the list, will the #extension statement just be ignored, or will it generate this kind of warning?

And as for GL_EXT_texture_buffer_object_rgb32: it does seem odd, but it is indeed in the driver’s extension list!

Has no one here ever tried doing “pseudo” 3-component fetches from a texture buffer in shaders? It’s quite surprising that we are alone on this walk!

> Actually, when building the shader program we have a warning stating that the GL_ARB_texture_buffer_object_rgb32 is not supported.

That’s interesting. I wonder how it knew that you were trying to only get 3 components from the texture. Nice enough for them to tell you, though.

It is odd that it’s giving you this warning when you’re using GL 4.x, which has this as core functionality.

> About your comment: “The extension doesn’t extend GLSL.” what you mean exactly? Do you mean that if the extension is in the extension list we don’t need to enable it (are they enable by default)? And if it is not in the list the enable statement will be just ignored or generate this kind of warning?

Every OpenGL extension extends the OpenGL specification, for obvious reasons. But not every extension extends GLSL. That is, not every extension causes a change in the syntax of GLSL.

For example, ARB_sampler_objects does not modify GLSL in any way. Therefore, shader code does not need to say that it uses this extension with a #extension declaration, since nothing in the shader language is affected by the presence of the extension. By contrast, ARB_texture_gather does modify GLSL. It adds new standard functions to the language. So the GLSL compiler needs to know that the extension is being used by this shader; therefore, you must declare it with #extension.
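To make that concrete, a small sketch (the sampler name and wrapper function are mine, just for illustration):

```glsl
#version 150
// ARB_texture_gather adds new GLSL functions, so the shader must opt in:
#extension GL_ARB_texture_gather : enable

uniform sampler2D tex;

// textureGather() only exists here because of the #extension line above.
// Nothing similar is needed for e.g. ARB_sampler_objects, which only adds
// API entry points on the C side and changes no shader syntax.
vec4 gatherComponent(vec2 uv)
{
    return textureGather(tex, uv);
}
```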

> And for the GL_EXT_texture_buffer_object_rgb32, it seems odd as a matter of fact, but it is indeed in the extension list of the driver!

Well, there’s no spec for it in the registry and it’s not mentioned in the .spec files. So the driver seems to be talking about something that doesn’t exist.

Sounds like a driver bug. Are you sure you’re up-to-date?

> No one in here never tried doing “pseudo” 3 component fetches from texture buffer in shaders? It’s quite surprising that we are alone on this walk!

Well, you are talking about an extension that is barely 6 months old. Also, it’s not core 3.x/DX10 functionality (though obviously some 3.x hardware does support it), so naturally it will be less frequently used. And texture buffers weren’t exactly the most used GL 3.x feature to begin with.

I’ve just got this reply from AMD: “the name of the extension should have been ARB instead of EXT (a typo which we will fix)”. lol?

thx, good to know.

lucky me that I didn’t need it before

We could test the shader on an NVIDIA card, and RGB32 runs just fine. Now we only need to check how poorly 96-bit alignment performs versus native 128-bit alignment due to cache-line offsets. I hope the fps won’t drop much.

Is there an equivalent extension for GLSL ES?

This is the OpenGL ES Extension Registry. If the extension is not listed there, it does not exist.
