I'm wondering why OpenGL sometimes uses different types than normal C++. Why is it that sometimes we use GLuint or GLsizei and not simply int? What's the difference?
And is there a difference between BOOL and normal bool, or TRUE and true? In VS6, the only difference seems to be the color (lowercase being blue). Can I use one or the other all the time, or is there a difference between the two?
OpenGL has its own types for platform-independence reasons. C++ has a loose definition of the internal representation of its data types, which means that the size of an int (the C++ type) can vary across platforms. By using GLint, an application is sure to get the same thing everywhere (it is the GL header on each platform that maps GLint to whatever platform-specific type has the desired size).
As for BOOL and bool: BOOL is a Microsoft typedef used in the Win32 API, and bool is a native C++ type. Why have both? The main reason is that C has no boolean type.
Originally posted by kehziah:
By using GLint, an application is sure to get the same thing everywhere.
The specification only guarantees a minimum size, not a specific size. See table 2.2 on page 9 in the latest specification. So you're not really guaranteed to get the same thing everywhere.
In practice, though, I would say you're likely to get the same thing everywhere, but there's no guarantee.
Thanks for the correction.
An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however.
I must say I don’t really understand the last sentence.
Actually, I’ve been thinking about the meaning of that particular sentence myself for quite some time. It’s not that I don’t understand the words; it’s the implications of the words, like what could really happen if one passes a value that is outside the minimum required range to a function, and which functions are actually affected. I would be glad to know the exact meaning of it too.