This conversion happens when the GPU reads the texture from GPU memory. It's stored as bytes/channel if that's what you ask it to do.
It doesn’t just pre-expand the data and store your texture as floats in GPU memory, unless you ask it to.
I have a floating point gray-scale image. Is there any advantage to storing it as a single-byte gray-scale, or should I leave it as a floating point since it will be converted to float anyway?
Refer to the above. In one case you’ll be consuming 1 byte/texel GPU memory. In the other you’ll be consuming 4 bytes/texel GPU memory.
It’s only when you sample the texture in the shader that the GPU will hot-convert the ubyte representation to float for purposes of that specific texture lookup only!
So GL_R32F_ARB and GL_RED should take the same amount of space in memory because both are converted to normalized floats internally?
No, not unless the GPU defaults GL_RED to GL_R32F. GL_RED is not a specific internal texture format. GL_R8 is. GL_LUMINANCE8 is. GL_R32F is. They imply a specific format for the texel data, not just the number of channels.
I would have thought GL_R32F_ARB would take 4 times more memory since that would be 32 bits, while GL_RED should be 8.
That’s what I’d guess too, but it depends on what internalFormat your driver comes up with when you say “GL_RED”. Better to tell it GL_R8 or GL_R32F if you have something specific in mind (or GL_LUMINANCE8/GL_LUMINANCE32F).