I'm trying to send an array to GLSL through a texture. I upload it as a 3D integer texture (GL_R32I). With larger values it behaves strangely: the first half renders correctly, but the other half is a mess.

int startPointValue = int(startPointValuev4.r);
if (float(startPointValue) == (19049758.0 + 0.5)) color = vec4(0.0, 0.0, 1.0, 1.0);

This renders a small blue spot on my texture, which means there are times when startPointValue (an integer cast to float) ends up with a fractional part.

I'm not sure what's happening. Does OpenGL have limited support for 3D 32-bit integer textures? With an RGBA unsigned-byte format it works properly, even for a larger 3D texture. Did I miss anything? Has anyone ever sent millions of integers to GLSL (via a texture or otherwise)?

Single-precision floating point has only 24 bits of significand precision.
19049758.5 is 1 0010 0010 1010 1101 0001 1110.1 in binary, which requires 26 bits of accuracy.
When stored in a float (with the leading '1' implied and the remaining 23 bits stored alongside an 8-bit exponent and a sign bit, making 32 bits total), the closest representable values are 1.0010 0010 1010 1101 0001 111 (1.9049758e+7, or 19049758) and 1.0010 0010 1010 1101 0010 000 (1.9049760e+7, or 19049760).
So the 0.5 is simply lost, because a float can't store a number that large with enough precision.
Even larger numbers are rounded off more; e.g., the nearest values to 1,000,222,463 that can be stored as a single-precision float are 1,000,222,400 and 1,000,222,464.

I'm not sure what's happening. Does OpenGL have limited support for 3D 32-bit integer textures?

The moment you cast it to float it stops being a 32-bit integer and becomes subject to the limitations of IEEE 754 single-precision floating point. If you want to retain the full 32 bits of precision, you have to keep it as an integer. That means doing integer math on it.
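Concretely, that means sampling the GL_R32I texture through an isampler3D with texelFetch, so the value stays an int end to end. A sketch (the names uStartPoints, vTexCoord, and color are placeholders, not from your code):

```glsl
#version 330 core

uniform isampler3D uStartPoints;  // bound to the GL_R32I 3D texture
in vec3 vTexCoord;                // normalized [0,1) coordinates
out vec4 color;

void main() {
    // texelFetch takes integer texel coordinates and returns an ivec4
    // with no filtering and no float conversion, so all 32 bits survive.
    ivec3 size  = textureSize(uStartPoints, 0);
    ivec3 coord = ivec3(vTexCoord * vec3(size));
    int startPointValue = texelFetch(uStartPoints, coord, 0).r;

    // Compare as integers; never round-trip through float.
    if (startPointValue == 19049758)
        color = vec4(0.0, 0.0, 1.0, 1.0);
}
```

On the host side the texture would be uploaded with glTexImage3D(GL_TEXTURE_3D, 0, GL_R32I, w, h, d, 0, GL_RED_INTEGER, GL_INT, data); note that integer textures also require GL_NEAREST filtering, since they cannot be interpolated.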