GL_FLOAT_RG16_NV + glDrawPixels

I’m trying to load data into a texture with internal format = GL_FLOAT_RG16_NV

 
g_poFbo->BindFramebufferObject(true);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT);
glClear(GL_COLOR_BUFFER_BIT);

glWindowPos2i(0, 0);
glPixelZoom(1.0f, 1.0f);

glColorMask(GL_TRUE, GL_FALSE, GL_FALSE, GL_FALSE);
glDrawPixels(g_iWidth, g_iHeight, GL_RED, GL_FLOAT, pfRed);

glColorMask(GL_FALSE, GL_TRUE, GL_FALSE, GL_FALSE);
glDrawPixels(g_iWidth, g_iHeight, GL_GREEN, GL_FLOAT, pfGreen);

glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
g_poFbo->BindFramebufferObject(false);

When I read the values back out, they are off in the 4th decimal place. Is this normal?

When I use GL_FLOAT_RG32_NV everything is fine.

glDrawPixels(g_iWidth, g_iHeight, GL_RED, GL_FLOAT, pfRed);

writes 32-bit floating-point values, and you seem to be using a 16-bit float texture.

When I use GL_FLOAT_RG32_NV everything is fine.
In that case the upload is essentially a memcpy: the 32-bit source values are stored unchanged, so no precision is lost.

dimensionX:
I’m trying to load data into a texture with internal format = GL_FLOAT_RG16_NV … I read the values out they are off in the 4th decimal place. Is this normal ? When I use GL_FLOAT_RG32_NV everything is fine.
Yes, this is normal. As V-man noted, that’s about as good as you can expect from half-float components (e.g. RG16).
[ul]
[li]16-bit half-floats (e.g. RG16) = s10e5 … so 2^-10 = ~1e-3, i.e. about 3 sigfigs[/li]
[li]32-bit full floats (i.e. IEEE single precision, e.g. RG32) = s23e8 … so 2^-23 = ~1e-7, i.e. about 7 sigfigs[/li]
[/ul]
If you don’t want the values to change at all during the upload, pre-encode them in half-float and then upload them to the GPU in that format.