Crashes with ARB_texture_buffer_object

I have written a shader using this extension, but each time I activate the code that calls texelFetchBuffer I get a crash (division by zero) in the driver. Worse, the crash occurs in a secondary thread, so I can’t trace it back to anything.

I verified my texture buffers and they contain the proper data. I also checked that they are bound to the proper texture units - but that doesn’t seem to matter as far as the crash is concerned. No matter how much of my data I initialize, I always get the same crash in the driver, and I’m out of ideas.

I tried to strip the shader down to the bare minimum - and it still crashes.

Can anyone here point me to some demo code that sets up and uses a texture buffer? Hopefully that will give me some clues about what is wrong with my code.

This is probably a driver bug. Submit it to the maker of your graphics card.

Yeah, sure, that may well be the cause. But I’d really like to make sure first that I didn’t overlook something really stupid - that’s why I asked for working demo code, because I can’t seem to find any.

I’m using a GeForce 8600 GT with 190.62 drivers, btw.

FWIW, my preliminary (albeit trivial) testing has proved fruitful on NV hardware (GF8800, driver 190.57).

A thing or two to look out for:

  • No TexParameter calls with TEXTURE_BUFFER as target
  • Check for valid format (3.1 spec, p. 140)

As a sanity check you basically …

  • create a buffer with target TEXTURE_BUFFER
  • create a texture with a valid format
  • make texture active and attach buffer (TexBuffer)
  • bind texture to a samplerBuffer in shader (like you would a normal texture)
  • sample with {i|u}texelFetch
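
In code, the checklist above boils down to something like this (a rough, untested sketch - it assumes a current GL context with ARB_texture_buffer_object available and a linked program; error checking omitted):

```c
/* Rough sketch of the checklist above - not tested code.
   Assumes a current GL context with ARB_texture_buffer_object;
   error checks omitted. */
GLuint buf, tex;
GLubyte texels[32 * 4];  /* 32 RGBA8 texels, filled elsewhere */

/* 1. buffer with target TEXTURE_BUFFER */
glGenBuffers(1, &buf);
glBindBuffer(GL_TEXTURE_BUFFER, buf);
glBufferData(GL_TEXTURE_BUFFER, sizeof(texels), texels, GL_STATIC_DRAW);

/* 2./3. texture with a valid format, buffer attached via TexBuffer */
glGenTextures(1, &tex);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_BUFFER, tex);
glTexBufferARB(GL_TEXTURE_BUFFER, GL_RGBA8, buf);

/* 4. point the samplerBuffer uniform at that unit (glUniform1i),
   5. then sample in the shader with texelFetchBuffer - no TexParameter
      calls, no mipmaps, no filtering on the buffer texture. */
```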

I went back to the start, trying to get the simplest possible shader to run. I was partially successful, but now I have a problem that’s even weirder.

This is my shader:


#version 120
#extension GL_EXT_gpu_shader4 : enable
uniform samplerBuffer lightRGB;
uniform sampler2D tex;

void main()
{
	int index = int(clamp(gl_TexCoord[0].s, 0, 32.0));
	vec4 light = texelFetchBuffer(lightRGB, index);
	vec4 texel = texture2D(tex, gl_TexCoord[0].st);

	gl_FragColor = texel*2.0 + light/20.0;
}

But I can’t seem to get the data from both samplers. Using only one is fine, and I get the proper data from either one as long as the other is not accessed.
But as soon as I try to get data from both, the value from the first sampler is just ignored. In the code above I only get the regular texture, but if I switch the two lines of code around I only get the data from the texture buffer. I also tried #pragma optimize(off), but that makes no difference.

Here’s the code that sets up the texture buffer. It seems to work, since I can get the data out of it as long as I don’t access the other texture:


	glGenBuffers(1, &idbuffer);
	glBindBuffer(GL_TEXTURE_BUFFER, idbuffer);
	static unsigned char testdata[] = {
		255,255,255,0,
		255,  0,  0,0,
		  0,255,  0,0,
		  0,  0,255,0,
		255,255,  0,0,
		  0,255,255,0,
		255,  0,255,0,
		255,255,255,0,
		255,  0,  0,0,
		  0,255,  0,0,
		  0,  0,255,0,
		255,255,  0,0,
		  0,255,255,0,
		255,  0,255,0,
		255,255,255,0,
		255,  0,  0,0,
		  0,255,  0,0,
		  0,  0,255,0,
		255,255,  0,0,
		  0,255,255,0,
		255,  0,255,0,
		255,255,255,0,
		255,  0,  0,0,
		  0,255,  0,0,
		  0,  0,255,0,
		255,255,  0,0,
		  0,255,255,0,
		255,  0,255,0,
		128, 64,  0,0,
		 64, 32,  0,0,
		  0,128,255,0,
		  0, 64,128,0,
	};
	glBufferData(GL_TEXTURE_BUFFER, sizeof(testdata), testdata, GL_STREAM_DRAW);

	glGenTextures(1, &idtexture);
	glBindTexture(GL_TEXTURE_BUFFER, idtexture);
	glTexBufferARB(GL_TEXTURE_BUFFER, GL_RGBA8, idbuffer);

	glActiveTexture(GL_TEXTURE14);
	glBindTexture(GL_TEXTURE_BUFFER, idtexture);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, 0);

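For what it’s worth, the size passed to glBufferData works out as 32 RGBA8 entries of 4 bytes each, i.e. 128 bytes, matching the array. A trivial host-side check of that arithmetic:

```c
#include <assert.h>

/* Same layout as the testdata array above: 32 RGBA8 texels, 4 bytes each. */
enum { TEXELS = 32, BYTES_PER_TEXEL = 4 };

static unsigned buffer_bytes(void)
{
    return TEXELS * BYTES_PER_TEXEL;
}
```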
I am using NVIDIA’s latest beta driver, which I installed in the hope that yesterday’s crash would go away, but it didn’t help there.

Your texture bindings are confusing. Why are you binding the texture to the 15th texture unit (TU14)?
It looks like you are binding the texture name “idtexture” on TU0, then specifying its data, then binding the same name again on TU14, and finally binding the default texture name on TU0.

What kind of texture is bound to TU0 finally? Is it the one sampled by the sampler “tex”? Are you sure that the samplers “tex” and “lightRGB” are “attached” to the right texture unit?

I did this because I thought that binding it to TU14 for initialization was the reason for the errors. It turned out it wasn’t - it doesn’t work either way.

Why am I using TU14? The test code comes from a larger project which already had TU14 properly initialized.

What kind of texture is bound to TU0 finally? Is it the one sampled by the sampler “tex”? Are you sure that the samplers “tex” and “lightRGB” are “attached” to the right texture unit?

TU0 will finally contain just a regular RGBA8 texture. Yes, it’s the one sampled by ‘tex’. Yes, I am sure that the textures are attached correctly. As I said, each one used alone in the shader produces correct results. It’s only when I sample both that I get the value from the second one alone, whichever that is.

So, where do you set your sampler uniforms? They need to match the bound units.
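
For example (a sketch - ‘prog’ stands for your linked program object, matching the units used above):

```c
/* Sampler uniforms hold texture *unit* indices, not texture object names.
   With 'tex' on GL_TEXTURE0 and 'lightRGB' on GL_TEXTURE14: */
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "tex"),      0);   /* GL_TEXTURE0  */
glUniform1i(glGetUniformLocation(prog, "lightRGB"), 14);  /* GL_TEXTURE14 */
```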

The samplers are set correctly.
Do I need to repeat that I get the correct data out of both samplers as long as I only use one?

If I hadn’t set the samplers properly that wouldn’t work, would it?

So again. If I use:


#version 120
#extension GL_EXT_gpu_shader4 : enable
uniform samplerBuffer lightRGB;
uniform sampler2D tex;

void main()
{
	int index = int(clamp(gl_TexCoord[0].s, 0, 32.0));
	vec4 light = texelFetchBuffer(lightRGB, index);
	vec4 texel = texture2D(tex, gl_TexCoord[0].st);

	gl_FragColor = texel*2.0 + light/20.0;
}

I only get the data from ‘tex’.

And if I use


#version 120
#extension GL_EXT_gpu_shader4 : enable
uniform samplerBuffer lightRGB;
uniform sampler2D tex;

void main()
{
	int index = int(clamp(gl_TexCoord[0].s, 0, 32.0));
	vec4 texel = texture2D(tex, gl_TexCoord[0].st);
	vec4 light = texelFetchBuffer(lightRGB, index);

	gl_FragColor = texel*2.0 + light/20.0;
}

I only get the data from lightRGB. No changes to my C++ code whatsoever.

That’s crazy - you just switch two lines that are completely independent of each other, and you get totally different results…

Is it possible to post the assembly code? I’ll try to compile this with the Cg compiler when I have time.

Where can I get assembly code of a GLSL shader from?

As dletozeun pointed out, you can use NVIDIA’s command-line compiler, cgc (e.g. cgc -oglsl -profile gp4fp shader.frag).

Using NVEmulate is a breeze too - just enable the “Write Program Object …” switches and you’ll get shader source, assembly, and info-log output in your program’s folder.

I just updated to NVIDIA’s latest official driver. The problem is still there, but this time I was able to gather some more information:

  1. glTexBufferARB doesn’t seem to work as expected when it’s done on any texture unit other than 0.
  2. When I read the texture first and the buffer second, the texture always seems to produce a black pixel, but the buffer produces correct data.
  3. When I read the buffer first and the texture second, the texture seems to be OK and the buffer seems to return undefined data. This data appears to depend on the texture, because it changes when the texture changes.
  4. When the buffer is invalid I always get the same results as in 3, no matter in which order I read the samplers.
  5. Switching texture unit 14 for any other one, including 1, produces identical results, so the texture unit used for the buffer does not seem to matter.

Ketracel, any chance you can send me a repro case so that we can take a look and debug this?

Thanks,
Barthold
(with my NVIDIA hat on)

I’ll try to make a smaller program. Currently my tests are done in a larger project which I can’t really send.