32-bit float value texture & shading

Hi all,

This is my first post here, so please be gentle.

At the moment I am trying to create a texture that holds 32-bit float values, one 32-bit value per texel. This texture will be active in TextureUnit_0.

During rendering I will have a fragment shader that reads these values and looks up the correct color to display in another 1D lookup texture (TextureUnit_1).

I think I have 2 questions:

  1. What is the correct (internal) texture format for storing the 32-bit float values? This includes correct mipmapping & linear filtering, so that I get valid values to use for the lookup into the 1D texture.

  2. When I use texture2D(textureMain, vec2(gl_TexCoord[0])), it always returns a vec4, while my data is a single 32-bit float.

Shader code:


	uniform sampler2D textureMain;
	uniform sampler1D textureMapping;
	uniform int useTextureMain;
	uniform int useTextureMapping;
	uniform float opacityFactor;
	uniform float blendFactor;

	void main()
	{
		vec4 colorMain = vec4(1.0, 1.0, 1.0, 1.0); // color main is white
		vec4 colorOverlay = vec4(1.0, 0.0, 0.0, 0.0); // color overlay is red with zero alpha (unused below)

		if(useTextureMain == 1)
		{
			colorMain = texture2D(textureMain, vec2(gl_TexCoord[0]));

			if(useTextureMapping == 1)
			{
				float valueMain = colorMain.x;
				if( valueMain <= 1.0 && valueMain >= 0.0 )
				{
					colorMain = texture1D(textureMapping, valueMain);
				}
			}

			colorMain.a = opacityFactor;
		}

		gl_FragColor = colorMain;
	}

As you can see in the code above, I'm losing the 32-bit precision down to 8 bits.

I hope there is a solution available.

Regards,

Ronald

The texture functions always return 4 components, independent of the internal texture format. Add a suffix like .r or .x to get only the first component.

There is no precision loss for 32-bit float textures, but 8-bit textures are interpolated with only 8 bits of precision.

Remember that not all GPUs can filter float or half-float textures. The formats carry a 32F or 16F suffix (e.g. GL_LUMINANCE32F_ARB, GL_RGBA16F_ARB).
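
Whether the float formats are available at all can be checked via the GL_ARB_texture_float extension string; whether the hardware can also filter them you usually have to find out by testing. A rough sketch (assuming a GL 1.x/2.x style context, so glGetString(GL_EXTENSIONS) is still valid):

	#include <string.h>
	#include <GL/gl.h>

	/* Simple substring check for the float-texture extension.
	   Note: this only tells you the formats exist, NOT whether
	   the GPU can also filter them in hardware. */
	int hasFloatTextures(void)
	{
		const char *ext = (const char *)glGetString(GL_EXTENSIONS);
		return ext != NULL && strstr(ext, "GL_ARB_texture_float") != NULL;
	}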

Thx c2k1,

After your reply and some searching & testing, I found a solution.

I am now using GL_LUMINANCE16F_ARB as the internal format (GL_LUMINANCE32F_ARB has very bad performance) and GL_LUMINANCE as the format.
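
The upload now looks roughly like this (the function name, texture size and data pointer are just placeholders; the ARB tokens need glext.h or an extension loader):

	/* Sketch: one 32-bit float per texel on the client side,
	   stored internally as 16-bit float luminance. */
	GLuint createMainTexture(GLsizei width, GLsizei height, const GLfloat *pixels)
	{
		GLuint tex;
		glGenTextures(1, &tex);
		glActiveTexture(GL_TEXTURE0);          /* TextureUnit_0 */
		glBindTexture(GL_TEXTURE_2D, tex);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
		glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
		glTexImage2D(GL_TEXTURE_2D, 0,
		             GL_LUMINANCE16F_ARB,      /* internal format: half-float luminance */
		             width, height, 0,
		             GL_LUMINANCE, GL_FLOAT,   /* client data: one 32-bit float per texel */
		             pixels);
		return tex;
	}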

The shader code basically stayed the same. The only thing is that the values in the main texture don't have to be normalized to 0.0 - 1.0 anymore. I now make sure the values are between 0.0 and 4095.0, so I can directly get the color from the lookup texture.

Anyway the problem is solved & it looks great. :cool:

Regards,

Ronald

Using float as the data type instead of vec4 is possible, as in:
float colorMain = texture2D(textureMain, vec2(gl_TexCoord[0]));

That’s invalid code. You need a .x swizzle on the texture lookup:
float colorMain = texture2D(textureMain, vec2(gl_TexCoord[0])).x;

Thanks. That was sloppy. On my implementation it worked correctly, but it produced a warning in the info log.

This can happen if you hit a software fallback, which may be the case if you try to do anything fancy with 32f textures (blending, for example). Other things like linear filtering simply shouldn’t work with 32f textures (yet).

However, for basic use, 32f shouldn’t be notably slower than 16f.

Agreed, Lindly,

all worked well with GL_LUMINANCE32F_ARB until I enabled linear filtering.
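
So presumably the 32F path stays in hardware as long as you don't ask for filtering, i.e. keep both filters at GL_NEAREST and skip mipmaps. Something like this (the texture name is just an example):

	/* Sample the 32F texture unfiltered to stay off the software fallback. */
	glBindTexture(GL_TEXTURE_2D, texMain);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);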

Sorry for digging up an old thread, but I have a related problem. I'm using a 32-bit floating-point grayscale texture with GL_LUMINANCE32F_ARB, and I just don't like the fact that my floating-point values are normalized to the 0…1 range. Is it possible to have the original values forwarded all the way to the vertex shader (I need this data for VTFed terrain)?

I'm using a 32-bit floating-point grayscale texture with GL_LUMINANCE32F_ARB and I just don't like the fact that my floating-point values are normalized to the 0…1 range.

Um, there should be no normalization if you’re writing from a fragment shader to a float texture. Sounds like a driver bug.

Nah, it’s the other way around. :slight_smile: I have a 32-bit floating point terrain heightmap whose pixels I use to offset vertices along the Y axis.

But scratch that, I don’t need it anymore. :wink: I’ve found a way to use 16-bit (unsigned short) grayscale textures even on hardware that doesn’t support GL_LUMINANCE16.
