How to send non-normalized/unclamped values to a GLSL shader using OpenGL textures?

I am trying to send an array of float values (a height map) to GLSL as a GL_TEXTURE_2D. While I have the samplers set up correctly, the issue comes from the clamping that happens when I upload the texture with glTexImage2D().

  glGenTextures(1, &_idnewTID);
  glBindTexture(GL_TEXTURE_2D, _idnewTID);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

  glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, regMesh._cols, regMesh._rows, 0, GL_RED, GL_FLOAT, regMesh._heightMap);

The values I am sending are in the range 900.0f to 1590.0f, but they get clamped to 1.0f as they are loaded into the GL_TEXTURE_2D.

How do I keep the values from being clamped to [0, 1]? How should I modify the glTexImage2D() call to achieve this? Thanks in advance!

P.S. I know the sampler works fine, because when I divide the values by 1500.0f before uploading them to the GPU and scale them back up in the shader, I get the proper height map structure.
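Roughly what that workaround looks like, for reference (the 1500.0f divisor is the one mentioned above; the temporary buffer and the sampler/uv names in the GLSL comment are just placeholders):

  // Pre-scale the heights on the CPU before uploading...
  std::vector<GLfloat> scaled(regMesh._cols * regMesh._rows);
  for (size_t i = 0; i < scaled.size(); ++i)
      scaled[i] = regMesh._heightMap[i] / 1500.0f;

  glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, regMesh._cols, regMesh._rows,
               0, GL_RED, GL_FLOAT, scaled.data());

  // ...and undo the scale when sampling in GLSL:
  //   float height = texture(heightSampler, uv).r * 1500.0;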

I haven’t done it that way yet.

But this topic sounds like it deals with the same subject as yours. That might be a driver bug.

Another thing, though I’m not sure yet whether it has any relation to your issue: what kind of framebuffer are you rendering to?

Change the internal format from GL_RED to GL_R32F. Unsized internal formats are always normalised formats. GL_RED will typically be equivalent to GL_R8.
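Applied to the upload call from the question, that would look something like this (only the internalformat argument changes; the pixel transfer format and type stay GL_RED/GL_FLOAT):

  glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F,             // sized float format: one 32-bit float channel, not normalised
               regMesh._cols, regMesh._rows, 0,
               GL_RED, GL_FLOAT, regMesh._heightMap);  // client-side data unchanged

With a float internal format the sampler returns the stored values unclamped, so reading the red channel in the shader gives you the heights in the 900-1590 range directly, with no rescaling needed. (GL_R32F is core since OpenGL 3.0, or available via ARB_texture_rg.)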

It’s not a driver bug. It’s user error, failing to RTFM.

Certainly not “always”. Per the specification, the internal format is implementation-dependent. Let’s test it:

	GLuint tex;
	GLfloat data = 1.0/3.0;
	
	glGenTextures(1, &tex);
	glBindTexture(GL_TEXTURE_2D, tex);
	// Upload one float with the unsized GL_RED internal format and see what we get.
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, 1, 1, 0, GL_RED, GL_FLOAT, &data);

	// Query which internal format the implementation actually chose.
	GLint internal, redbits, redtype;
	glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal);
	glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redbits);
	glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_TYPE, &redtype);
	
	printf("%s: RED/FLOAT -> %04x, %d bits, type %04x\n", glGetString(GL_RENDERER), internal, redbits, redtype);

This laptop produces:

NVIDIA GeForce GT 750M OpenGL Engine: RED/FLOAT -> 822e, 32 bits, type 1406

So this implementation chose an internal format which preserves the user’s data (0x822E is GL_R32F; type 0x1406 is GL_FLOAT). Other implementations might choose to truncate the data to 8 bits. It’s implementation-specific, and you can use the API to query what happened.

GClements answered it. This question has also been asked and answered multiple times if you search the forum.

Yup that was it!

Thanks Arekkusu and GClements!