It's not spitting out the right numbers at all… I know that an IEEE float is stored in some kind of "binary scientific notation", and I kinda think that may be the problem, but I can't say for sure. Anybody know what could be wrong?
I think you're right, GPSnoopy. BTW, your code snippet is a very elegant solution. Wish I'd thought of it. As long as I don't ever store any data as a float (until the very end), it should be okay.
btw, V-man: I think you may be confused. My code converts between endians just fine.
Yes, V-Man is confused. Swapping endianness is equivalent to shuffling bytes around.
IEEE 32-bit floats are always stored in 32 bits, by definition; even the special values. You can detect an Inf or NaN if the exponent bits (23-30) are all 1: zero mantissa means infinity, non-zero means NaN. In a nutshell, bit 31 is the sign bit (0 = +, 1 = -), bits 23-30 are the exponent + 127 (so -127 is stored as 0, and +127 as 254), and bits 0-22 are the mantissa with an implicit leading 1. A denormalized number has a 0 exponent field and no implicit 1, if I recall correctly. A 0 is represented by all 0s, except for the sign bit which may be a 0 or a 1 (ie, negative 0 is allowed).
Every system I’ve worked on in the last 15 years has had the htonl() library calls and friends. No need to reinvent the wheel when all the swapping you need is already in the library. Assuming, of course, you’re not trying to read a little-endian file format on a big-endian platform…