float vs. GLfloat


Are there any performance hits by passing a float type to a GL function that specifies a GLfloat? Does the driver do any kind of conversion, or are the types identical for all practical purposes?

As a matter of practice, I use GLfloat, but I was curious as to any performance degradation by interchanging the two.



// cut directly from GL/gl.h
typedef float GLfloat;

In other words, a GLfloat is identical to a float, and I can hardly imagine there being any performance difference.

And what about double vs. float?


Well, not QUITE. There has been another post here, somewhere, about this, but GLfloat is not ALWAYS an alias for float. It usually is, but NOT always; check the other post on this. (In exactly the same way that sizeof(char) is NOT 1 for ALL architectures.)

But casting between double and float does carry SOME performance penalty.


Yes, you are right on that one. On some platforms an integer might be 64 bits instead of the (today most common) 32 bits. If you declare a variable as an int, it MIGHT end up as a 64-bit integer, depending on the architecture. But declaring a variable as GLint will guarantee a 32-bit integer, independent of the architecture. So I suppose the same goes for floats.

The sizeof(char) is always 1. It is the number of bits in a char/byte that is implementation-defined. That number is CHAR_BIT, found in <limits.h> I believe. Thus, to see the bit width of any integer type, you can use (sizeof(type) * CHAR_BIT).

If I recall correctly, OpenGL uses GLfloat internally. Thus passing any type other than GLfloat to OpenGL will require conversion.

If you want complete portability, use GLfloat instead of float. However, I cannot think of a major system that would be running OpenGL where GLfloat is not the same as float. I guess it depends on where you fall on the correctness vs. practical use scale.

[This message has been edited by Nocturnal (edited 03-09-2001).]

Using float and GLfloat interchangeably should almost certainly not be a problem.

  • Matt