A few questions about OpenGL and freeglut

Good day everybody,

I’m just starting to learn OpenGL. Because I work on multiple platforms, I decided to give freeglut a try. Now I have a few questions for you that I hope are not too dull to answer. Don’t hesitate to point me to good literature on these topics, as I started out with a few tutorials that I thought lacked a bit of detail.

My first two questions are about small portions of my code. In my implementation I call glutDisplayFunc(&renderScene) and glutIdleFunc(&renderScene) in the main function, where renderScene is the following:

    static float angle = 0.0f;

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    glTranslatef(0.0f, 0.0f, -10.0f);
    glRotatef(angle, 0.0f, 1.0f, 0.0f);
    angle += 0.7f;

    glBegin(GL_QUADS);
        glVertex3f( 1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, 1.0f, -1.0f);
        glVertex3f(-1.0f, 1.0f,  1.0f);
        glVertex3f( 1.0f, 1.0f,  1.0f);
    glEnd();

    glutSwapBuffers();


How are the commands after glLoadIdentity processed in general? When I change the order in which glTranslatef, glRotatef and the quad are issued, I get different results, the desired one being produced by the code above. In my example, would it be T * R * K, where T is the translation, R is the rotation and K are the coordinates (in homogeneous coordinates)?

It took me a while to find out how to correctly initialize the depth buffer. I am now setting GLUT_DEPTH in glutInitDisplayMode in main and calling glClearDepth(1.0), glDepthFunc(GL_LEQUAL) and glEnable(GL_DEPTH_TEST) in renderScene. Is this the correct setup? At least the depth now appears to be rendered correctly.

My last question is about freeglut in general: how capable is it? That is, how many of OpenGL’s features does it give me access to?

Thanks for your help in advance :smiley:

Hi. I too am a little new to OpenGL, but maybe I can help you out, or at least give you something to think about while more experienced people reply.

I think how it works is that glLoadIdentity sets the transformation matrix to the identity matrix. The transformation matrix is what each vertex you send between glBegin and glEnd is left-multiplied by. So right now the transformation matrix doesn’t change the vertices at all when you do T * v (transformation matrix * vertex), because it’s just the identity matrix; essentially, v = T * v.

Now, calling glTranslate and glRotate will modify the transformation matrix so that it’s no longer merely an identity matrix; the matrix will now translate and rotate every vertex that it left-multiplies. You get something like v’ = T * v: the vertices have all been rotated and translated, so the resulting quad that gets drawn will be a little different from the raw vertices you put between glBegin and glEnd.
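To make that concrete, here is a small CPU-side sketch of the same idea in plain C (no OpenGL involved; the row-major layout and the translation of -10 along z are just for illustration). With your call order, glTranslatef followed by glRotatef, the calls compose into T * R, so each vertex v ends up as T * R * v: the transform you specify last is applied to the vertex first.

```c
/* Multiply a 4x4 matrix (row-major) by a homogeneous 4-vector: out = m * v.
 * This is what the fixed-function pipeline does to every vertex you submit. */
void mat_vec(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; r++) {
        out[r] = 0.0f;
        for (int c = 0; c < 4; c++)
            out[r] += m[r * 4 + c] * v[c];
    }
}

/* The identity matrix: I * v == v, which is what glLoadIdentity gives you. */
const float identity[16] = {
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1,
};

/* A translation by (0, 0, -10), i.e. what glTranslatef(0, 0, -10) appends. */
const float translate[16] = {
    1, 0, 0,   0,
    0, 1, 0,   0,
    0, 0, 1, -10,
    0, 0, 0,   1,
};
```

Feeding the vertex (1, 1, -1, 1) through `identity` leaves it unchanged; through `translate` it comes out as (1, 1, -11, 1), i.e. pushed 10 units further from the camera.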

To answer your first question: the order in which you send those commands does matter.
For example, if you did:

    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);
    glBegin(GL_QUADS);
        /* ... the four glVertex3f calls ... */
    glEnd();
    glRotatef(angle, 0.0f, 1.0f, 0.0f);

then you would draw the quad before the glRotate changes your transformation matrix to add some rotation, so the quad would come out unrotated.

If you understand what each of those commands does:

glLoadIdentity - resets the transformation matrix to the identity
glTranslate - adds some translation to the transformation matrix
glRotate - adds some rotation to the transformation matrix
the quad code with glBegin/glEnd - actually draws those vertices using the current transformation matrix; the GL_QUADS enum tells OpenGL to draw one quadrilateral for every four vertices you send

you should be able to figure out what’s going on.

Now, I’m not sure about the following, but here are some more things to consider:
glRotate and glTranslate apply rotations and translations to the current transformation matrix. You never change which transformation matrix you are currently using, so that’s not a concern for the simple example you set up. But eventually you will start using different transformation matrices for different things, such as adding 3D perspective to what you draw, and you switch between them with glMatrixMode.
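For instance, a typical perspective setup eventually looks something like this sketch (a fragment, not a complete program; it needs an existing OpenGL context, and the field-of-view and clip-plane numbers are just placeholders):

```c
glMatrixMode(GL_PROJECTION);                  /* edit the projection matrix */
glLoadIdentity();
gluPerspective(45.0, 4.0 / 3.0, 0.1, 100.0);  /* placeholder fov/aspect/near/far */

glMatrixMode(GL_MODELVIEW);                   /* back to the modelview matrix */
glLoadIdentity();                             /* ready for glTranslate/glRotate */
```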

For your next question, I think you are setting up depth testing correctly; at least, that’s the standard setup people tend to use. You only need to call glEnable(GL_DEPTH_TEST), glDepthFunc(GL_LEQUAL) and glClearDepth(1.0f) once, but you should clear the depth buffer with glClear(GL_DEPTH_BUFFER_BIT) every frame (if your application draws more than just one frame). You typically do this at the same time you clear the color buffer.
Of course, you should try to develop an understanding of what is actually going on with the depth buffer and depth testing.

For your last question, you should understand what freeglut is, what OpenGL is, and how they are related. OpenGL is an API that lets you take control of graphics hardware (if it is available), draw basic shapes, and color pixels in certain ways. It’s then your job to use these basic tools and learn advanced techniques, like lighting and texturing, to create cool things.

OpenGL is not a piece of software. It is merely a specification, which means whoever designs OpenGL states what OpenGL should do. It is up to other people, typically hardware manufacturers like Nvidia and AMD, to actually implement OpenGL, by writing drivers for their graphics cards that understand and perform OpenGL functions.

Now, what OpenGL does NOT specify is how what it draws gets sent to a window on a particular OS. Opening a window and putting things in it is not cross-platform, so it’s not OpenGL’s job to specify that. That’s where freeglut comes in. For example, Windows uses something called WGL to create a “context” for OpenGL to use; a context contains all the inner workings OpenGL needs in order to work, and I think an “OpenGL context” is also tied to the window OpenGL will actually draw to. OS X uses CGL and X11 (Linux) uses GLX. freeglut determines which OS you are on and creates the appropriate OpenGL context for you, so you only need a little freeglut code, like glutDisplayFunc(&renderScene), to start using OpenGL in your environment.

So asking how “strong” freeglut is isn’t really the right question. I personally don’t use freeglut, so maybe it has bugs or limitations that prevent you from using certain parts of OpenGL, but I’d say what really limits you is the actual OpenGL implementation (in the driver). For example, if you have an older graphics card, you might not be able to use the features that came with the recently released OpenGL 4, much like only the newest graphics cards can use DirectX 11.

Eventually you will also get into the world of OpenGL extensions: additions to the core OpenGL specification made not by the Khronos Group (the people who develop OpenGL) but by others like Nvidia, AMD or Microsoft. Extensions add functionality to OpenGL. Whether you can use an extension depends on the driver installed on the machine that actually runs the application, NOT on the computer where you develop it. You will see that some drivers support certain extensions and some don’t. In general, however, graphics hardware is separated into “classes” by the major version of OpenGL it supports. For Nvidia, for example, the GeForce 8 series and later support all versions of OpenGL 3, and the GeForce 400 series and later support OpenGL 4.

Final words: Take all this stuff with a grain of salt, I’m still learning and I’m just trying to tell you what I’ve learned so far. I’m still very much a newbie to OpenGL, and some of what I said could be inaccurate.

More importantly, I see you are using an older style of OpenGL, called “immediate-mode” rendering, that uses glBegin and glEnd. Once you start making serious applications, this style incurs a huge performance penalty, because your vertex data lives on the CPU side and has to be re-sent to the graphics card every frame. If you are familiar with locality of reference in programming, you know that you generally want data close to where it’s going to be used, and glBegin/glEnd is not good for that. There are ways to use OpenGL such that your vertex data is sent to the graphics card once and stays there, rather than being fetched from the CPU every time.
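To give you a taste, the buffer-object route looks roughly like this fragment (not a complete program; it assumes an OpenGL 1.5-or-newer context is already current):

```c
/* One-time setup: copy the quad's vertices into a buffer on the GPU. */
static const GLfloat quad[] = {
     1.0f, 1.0f, -1.0f,
    -1.0f, 1.0f, -1.0f,
    -1.0f, 1.0f,  1.0f,
     1.0f, 1.0f,  1.0f,
};
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

/* Every frame: draw from the buffer that already lives on the GPU. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void *)0);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
```

The point is that glBufferData uploads the vertices once, and every subsequent glDrawArrays reads them from GPU memory instead of re-sending them through glVertex3f calls.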

So, if you want to learn OpenGL the right way, the modern way, check out this guy’s tutorials:
They are very well written, and more challenging than other OpenGL tutorials, but if you are serious about graphics programming, they are definitely a good read.
Edit: If you do end up using those tutorials, be aware that they also do away with the “fixed-function pipeline” approach to graphics programming, which means they drop things like glTranslate and glRotate, among other things, in favor of a more general, programmable approach. You will have to write shaders yourself and do the matrix math yourself (to a degree, but don’t worry if you aren’t familiar with matrix math or linear algebra). The author of those tutorials goes into these topics in depth.

Good luck!

Another thing: the order in which you do transformations does matter. For example, let’s say you have a point at the origin. If you translate it 3 units to the right and then rotate it 90 degrees, you’ll get something totally different than if you rotated it 90 degrees first and then moved it 3 units to the right.
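Here is that exact example worked out in plain C (no OpenGL; just 2D points, assuming the rotation is 90 degrees counter-clockwise about the origin):

```c
/* Rotate a 2D point 90 degrees counter-clockwise about the origin:
 * (x, y) -> (-y, x). */
void rotate90(float *x, float *y)
{
    float nx = -*y;
    float ny = *x;
    *x = nx;
    *y = ny;
}

/* Translate a 2D point 3 units to the right. */
void translate3(float *x, float *y)
{
    (void)y;          /* y is unchanged by a horizontal translation */
    *x += 3.0f;
}
```

Starting from the origin, translate-then-rotate ends at (0, 3), while rotate-then-translate ends at (3, 0): two different points from the same two operations.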

They are processed in the order that you call them, just like function calls in any other library.

That’s fine. If you want, you can move those calls into an Init() function that you call only once, because you do not seem to be changing them during your program’s runtime.

At the Wiki tutorial page, I used freeGLUT in one of the examples.

You can see me calling InitGLStates().

I do not understand. You want to know if freeGLUT is good? Yes, it is good.

Thank you for your explanations, juggernaut(!) and V-Man. They really helped me a lot, and I now see the first two points, among other things, more clearly, because I can see the connection between the theoretical and the practical approach.

Using the init functions in main gave me incorrect results at first. After reading your posts I tried moving these functions around in main; my problem was that I had called them before glutCreateWindow.
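In case it helps anyone else: glutCreateWindow is what creates the OpenGL context, and GL calls need a current context to act on, so my main is now ordered roughly like this (the window title is just a placeholder):

```c
int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("my scene");   /* creates the OpenGL context */

    /* Only AFTER glutCreateWindow is there a context for these to act on: */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LEQUAL);
    glClearDepth(1.0);

    glutDisplayFunc(&renderScene);
    glutIdleFunc(&renderScene);
    glutMainLoop();
    return 0;
}
```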

Wow, thank you for pointing this out. This might be just what I was looking for, as you seem to have already developed a good understanding of OpenGL. I do know matrix math and linear algebra, and I am especially interested in putting them to practical use.

What I mean is: does it limit the capabilities of OpenGL in any way?

Edit: Made my points a bit clearer (hopefully).