R32F Textures

I am trying to generate a height map with a compute shader. It works fine with the texture's internal format set to GL_RGBA8, but it causes problems with GL_R32F.

I generate the texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, TEXTURE_SIZE, TEXTURE_SIZE, 0, GL_RED, GL_FLOAT, 0);
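
For completeness, a minimal setup around that call might look like this (the glTexParameteri calls are an assumption on my part; without a non-mipmapped min filter the texture would be incomplete when it is later sampled):

glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// single level, no mip chain, so the min filter must not expect mipmaps
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, TEXTURE_SIZE, TEXTURE_SIZE, 0, GL_RED, GL_FLOAT, 0);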

I bind it to the compute shader like this:
glBindImageTexture(1, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);

Inside the compute shader I write to it:
vec4 height = vec4(0.008, 0.0, 0.0, 0.0);
imageStore(destTex, c, height);
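
A minimal shader around that store might look like this (the version, local size, and the lack of a format qualifier on destTex are assumptions, not the actual source):

#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(binding = 1) writeonly uniform image2D destTex;

void main()
{
    ivec2 c = ivec2(gl_GlobalInvocationID.xy);
    vec4 height = vec4(0.008, 0.0, 0.0, 0.0);
    imageStore(destTex, c, height);
}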

No matter what the red component is, the texture always looks the same when I render it with a full-screen quad.

Any thoughts?

No matter what the red component is, the texture always looks the same when I render it with a full-screen quad.

When you rendered that texture, did you follow all the rules needed to ensure visibility from your incoherent writes to that texture?
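
If the quad samples the texture through a regular sampler, the minimal sequence would be something like this (the group counts assume a 16x16 local size; adjust to your own):

glDispatchCompute(TEXTURE_SIZE / 16, TEXTURE_SIZE / 16, 1);
// image stores are incoherent; make them visible to subsequent texture fetches
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
// ... then bind the texture to a sampler unit and draw the full-screen quad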

Also, the difference between 0.008 and 0.0 is not particularly large. Indeed, you probably wouldn’t be able to see it.

[QUOTE=Alfonse Reinheart;1266377]When you rendered that texture, did you follow all the rules needed to ensure visibility from your incoherent writes to that texture?

Also, the difference between 0.008 and 0.0 is not particularly large. Indeed, you probably wouldn’t be able to see it.[/QUOTE]

I threw in a glMemoryBarrier(GL_ALL_BARRIER_BITS); after the dispatch to no avail.

Changing just glBindImageTexture(1, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F); to
glBindImageTexture(1, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8); makes it work. But I want the full 32 bits for red.

I changed the red value from 0.3 to 0.8 with no difference.

Changing just glBindImageTexture(1, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F); to
glBindImageTexture(1, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8); makes it work.

Define “work”. Assuming the texture’s actual format is still R32F, all this will do is mess with the high 8 bits of a floating-point value. Or low 8 bits, depending on the hardware. The point is, it won’t actually be storing a floating-point value; it’ll just be poking at the bits of one.

Or to put it another way, it’s rather like this:


union
{
  float f;
  unsigned char i[4];
};

You’re writing four values via i, then reading from f.
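
To make that concrete with made-up numbers (assuming the red byte lands in the low byte on a little-endian machine): storing 0.8 through an RGBA8 view writes round(0.8 * 255) = 204 into one byte, and reading those four bytes back as a float gives a denormal nowhere near 0.8:

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char i[4] = { 204, 0, 0, 0 };  /* RGBA8 bytes written by the image store */
    float f;
    memcpy(&f, i, sizeof f);                /* reinterpret them as the R32F texel */
    printf("%g\n", f);                      /* prints ~2.9e-43, not 0.8 */
    return 0;
}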

Equally importantly, it’s not clear how you’re reading from the texture in such a way that it “works”.

[QUOTE=Alfonse Reinheart;1266389]Define “work”. Assuming the texture’s actual format is still R32F, all this will do is mess with the high 8 bits of a floating-point value. Or low 8 bits, depending on the hardware. The point is, it won’t actually be storing a floating-point value; it’ll just be poking at the bits of one.

Or to put it another way, it’s rather like this:


union
{
  float f;
  unsigned char i[4];
};

You’re writing four values via i, then reading from f.

Equally importantly, it’s not clear how you’re reading from the texture in such a way that it “works”.[/QUOTE]

When it’s set to GL_RGBA8 (just when binding to the compute shader) and GL_R32F when generating the texture, changing that value in the compute shader from 0.3 to 0.8 makes it go from dark red to bright red.
When it’s set to GL_R32F in both places, it just doesn’t seem to get brighter in the expected way.

I read the texture in the fragment shader like this: gl_FragColor = vec4(texture2D(sampler, fuv.xy).r, 0, 0, 1);
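
One quick sanity check (a debug-only sketch, using the same sampler and fuv as above): scale the red channel up when displaying it, so a small height such as 0.008 actually becomes visible:

gl_FragColor = vec4(texture2D(sampler, fuv.xy).r * 50.0, 0.0, 0.0, 1.0);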

Have you tried adding a layout(r32f) qualifier to the image variable? It’s not required for a write-only image, but it may help (if it does, that’s an implementation bug).
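
Concretely, that would mean declaring the image along these lines (the binding number is taken from the glBindImageTexture call above):

layout(r32f, binding = 1) writeonly uniform image2D destTex;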

Also, I’ve had issues with compute shaders on AMD hardware that went away after putting a glFlush() after the glMemoryBarrier() call (glFinish() also works, which is understandable, but glFlush() makes no sense)…