Teaching OpenGL

[/quote]

I already said so and personally I haven’t done otherwise for a long time.

I never said it did. But you’re right, I should have made a clearer separation between setting state (i.e. bind-to-modify) and determining which state has already been set, e.g. the current number of draw buffers.

And what is a “readbuffer”?

For me, that’s an FBO bound to the GL_READ_FRAMEBUFFER target. Yeah, I know that’s ambiguous and usually refers to one or more attachments. But personally I like to think of an FBO that permits read operations as a read-buffer and one that permits draw operations as a draw-buffer.
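To make the read-buffer/draw-buffer distinction concrete, here is a minimal C sketch (not a complete program — it assumes a live GL context, and the `fboA`/`fboB` names and dimensions are placeholders, not from the discussion above):

```c
/* Sketch: one FBO bound for reading, another for drawing,
 * e.g. to copy between them with glBlitFramebuffer. */
GLuint fboA, fboB;   /* assume both were created with glGenFramebuffers */

glBindFramebuffer(GL_READ_FRAMEBUFFER, fboA);   /* the "read-buffer" */
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboB);   /* the "draw-buffer" */

/* Copy a region from the read framebuffer to the draw framebuffer. */
glBlitFramebuffer(0, 0, width, height,
                  0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
```

Binding to GL_FRAMEBUFFER sets both targets at once, which is part of why the terminology gets muddled.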

My point is that OpenGL hasn’t become more transparent just because some state has been thrown out. The remaining state is still as opaque as it was when matrix and attribute stacks and so forth were still present.

The problem being talked about was specifically dealing with old state which someone set for a previous object that doesn’t apply to the next object to be rendered. The point being that the vast majority of state that needs to render something is bound into objects now. It doesn’t matter if it’s opaque or transparent; you just bind and render. You have an object, which is to be rendered with a particular VAO, using a particular shader, with a particular set of textures and uniform buffers, and rendered to a particular framebuffer.

As long as you have set these up correctly, the global state available that can trip up your rendering is much smaller than before. The most you have to look out for is your blending state. That’s a substantial improvement in the locality of data that can break your rendering. That is, if rendering isn’t working for some object, then you know it’s a problem with that VAO, that shader, those textures, those uniform buffers, or that framebuffer. Or some of the still-global state.
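The bind-and-render idea described above can be sketched as a short call sequence. This is an illustrative C fragment, not a complete program: the `obj` structure and its field names are invented for the example, and a GL context is assumed.

```c
/* Sketch: rendering one object in modern GL. Everything the draw
 * needs is bound from objects; only a little global state (blending,
 * depth test, etc.) remains to trip you up. */
glBindFramebuffer(GL_FRAMEBUFFER, obj->fbo);        /* render target   */
glUseProgram(obj->program);                         /* shader          */
glBindVertexArray(obj->vao);                        /* vertex layout   */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, obj->diffuseTex);      /* textures        */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, obj->ubo);   /* uniform buffers */

glDrawElements(GL_TRIANGLES, obj->indexCount, GL_UNSIGNED_INT, 0);
```

If this object renders incorrectly, the fault is almost certainly in one of the five bound objects — exactly the locality argument made above.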

The list of places that can be broken is fairly small.

Are we still talking about teaching? In a teaching situation, when the students are just getting started with OpenGL, I think there is a certain improvement. They don’t need to keep track of much more than the current textures, shaders and arrays.

FBOs are, IMHO, a later step. Yes, FBOs are messy. When I teach FBOs, we don’t bother with all possible configurations; they get a working one from me and can build from there, and for most, that configuration is all they need.

[QUOTE=Aleksandar;1238401]If I start with separate shader objects, they won’t be aware that the concept of a monolithic program exists. Personally, I still don’t use separate shader objects, but if it is something widely used (or will be), maybe it is better to introduce the concept as soon as possible.
[/QUOTE]
I’d like to continue here a bit. The shaders-first approach is one that I had in mind, but in the end I went for the main program, minimal pass-through shaders, and working with geometry first. But I am not sure I did the right thing. The simplest starting point is writing vertex shaders, and then fragment shaders, before you even look at the main program.
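For reference, a minimal pass-through pair might look roughly like this (the `#version 330` target and the attribute name `position` are assumptions, not from the post):

```glsl
// --- vertex shader: forward positions unchanged ---
#version 330 core
in vec3 position;
void main() {
    gl_Position = vec4(position, 1.0);
}

// --- fragment shader: constant color so geometry is visible ---
#version 330 core
out vec4 fragColor;
void main() {
    fragColor = vec4(1.0);   // solid white
}
```

The appeal for teaching is that these can be handed out as boilerplate and ignored until the course reaches the relevant stage.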

We tried the shaders-first approach in some small projects separate from my main course and that worked pretty well, so maybe I should have taken that route anyway. Any other experiences, opinions? I can consider lectures about transformations, plus shading and basic lighting, and then a lab on shaders only. Then I can move to geometry, object representation, and have the next lab on the full program level. Texturing could be done without looking at the main program, but the more input data you have from the main program, the more you want to look at that data.

I would rather start with GL context setup and accessing extensions; then shader setup; then buffer objects, vertex attributes and uniform setup, while having default vertex and fragment shaders. I’ll try to make a course that guides students through the pipeline, step by step. At the beginning they don’t have to know anything about shader coding. They’ll have a default VS and FS. After attribute/uniform setup, they’ll start to code the VS. The FS still stays a black box until the last stage. Of course, after mastering the VS they’ll know what is what in the FS, but they’ll have to pass through TS, GS and TF before reaching the FS. After the FS, an FBO will be introduced. It seems quite reasonable to follow the flow of data and introduce operations as they emerge in the pipeline.

That’s interesting. For the course for my co-workers I’m thinking of going in pretty much the opposite direction: FS -> VS -> uniform setup -> VBO + vertex attributes -> FBO.

I’ll see how it goes.

Please share your experience with us. :slight_smile:

I still think the education course should follow the pipeline stream.
GL setup -> buffers -> drawing functions -> VS -> (TF) -> TS -> GS -> (TF) -> FS -> FBO
Transform feedback (TF) may be explained just after VS, or after GS. TS is complex enough per se, so I wouldn’t make things harder at that point.

Following the pipeline stream is in fact a very logical way to understand everything. But keep in mind that this way it takes a long time to see ‘colorful’ results. From a motivation point of view, getting the audience (=students) to create nice graphics on their own as quickly as possible is key to keeping them motivated. If you start at the other end with just the FS, you can say “ignore where the fullscreen quad and the textures come from; today we make nice effects and pretty images on a per-fragment level” :wink:
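A first FS-only exercise in that spirit might look like this grayscale effect. Everything except the shader body is assumed to come from the instructor’s framework: the fullscreen quad, the `texCoord` varying, the `image` sampler, and the `#version 330` target are all assumptions for the sketch.

```glsl
// Students only edit this fragment shader; the host program
// draws a fullscreen quad and binds a texture for them.
#version 330 core
in vec2 texCoord;               // from the (ignored) vertex stage
uniform sampler2D image;        // provided by the framework
out vec4 fragColor;

void main() {
    vec3 c = texture(image, texCoord).rgb;
    float gray = dot(c, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
    fragColor = vec4(vec3(gray), 1.0);                 // grayscale effect
}
```

Swapping one line turns this into sepia, inversion, posterization and so on, which gives quick, visible wins on day one.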
When teaching graphics with OpenGL, rather than OpenGL itself with the needed background, I would start with transformations and ignore the GL-specific setup steps at first, quickly move down the rest of the pipeline, and then go back to look at the GL-specific details of buffers, attribute locations, etc. (a kind of middle ground between the two extremes above).

You are right! I need to do something else to keep them motivated. :wink:
The FS should exist all the time. It would be very simple, but still has to support coloring and texturing. Lighting and texturing would be per vertex until the FS becomes an active topic.

This would be a course at the master’s level. They already have enough knowledge of CG, transformations, lighting, texturing and legacy OpenGL.

Yesterday I read “Teaching a Shader-Based Introduction to Computer Graphics” by Ed Angel and Dave Shreiner, published in IEEE Computer Graphics and Applications Vol. 32, No. 2 (March/April 2011), and I’m really disappointed.
There is just a brief introduction to GL 2.0/GLSL 1.1 (although they mention GL 3.1/4.1), and they claim that it is feasible to build a proper introductory course using a shader-based approach and OpenGL.
I had expected a more scientific approach to evaluating the program they presented. But there is no evaluation, or even statistics on how students reacted to the new approach. :frowning:

OpenGL is not for TEACHING, it’s for LEARNING, so never TEACH OpenGL in computer graphics courses. Computer graphics should be introduced to students as an abstracted mathematical model instead. THEN you give students assignments to do on their own using OpenGL or whatever API. Implementing things in software is the way to go for better learning.

Well, first off, this thread is about teaching OpenGL and not about teaching computer graphics in general, so your remark is quite off topic.

Second: even when discussing teaching 3D graphics, I think we can agree that doing some practical work, implementing 3D algorithms etc. in any 3D API, will help in understanding the theoretical background. Sadly, from my experience in teaching, you have to force some students to do practical work on their own; otherwise they will fail the tests (and will not have learned anything). This means mandatory homework, including programming assignments. To be able to correct these in finite time for lots of students (and to give the best support), the class has to agree on one API to work with. As there is only one modern API available that works on all major operating systems, we chose OpenGL.
On a side note: teaching students practical material that lets them produce ‘nice colorful images’ is a very good motivational tool; a purely theoretical computer graphics course wouldn’t motivate nearly as many people to learn about this topic.

To conclude: I believe everyone discussing here wants to discuss teaching OpenGL, and everyone has their own motivation to do so, so a meta discussion about the usefulness of doing so will not help here (you might want to open a new thread if you want to discuss the question of whether teaching OpenGL is a good idea in general).

I have to disagree with everything here. OpenGL is for neither teaching nor learning; it’s for getting stuff on your screen. The abstracted mathematical model has its place in linear algebra and other mathematics courses; a graphics course is for - among other things - showing students one practical - and quite cool - application of the theoretical material. Actually writing graphics code and seeing the results creates a positive feedback loop for students, both in terms of the graphics course and in terms of seeing the immediate, practical use of the more abstract material they’ve learned elsewhere.

Maybe…I’m not sure how teachers structure their computer graphics courses these days…but don’t get me wrong if I say a course called “OpenGL Programming” or so should have no place in academia.

Back in the day at Cornell U., computer classes were split into a 3-credit theory course and a 2-credit practicum/lab course. Taking the practicum was optional. Although theory courses sometimes did have non-trivial projects in them, practicum courses were, by design, focused on huge amounts of programming. Personally I feel this removes any pedagogical objections about how/where to teach OpenGL in academia. I think the idea of sending B.S. grads out into industry without any practical skills is completely silly, and grads who can’t get jobs are bad for a department’s reputation and funding. To wit, CS majors were required to take at least 3 practicum courses, IIRC. I did that and I wasn’t even a CS major; I got my B.A. in Sociocultural Anthropology. :slight_smile: In my experience the practicum courses were way harder than the theory courses, although this may have partly been due to being a non-major. Nevertheless I had enough theory and practicum courses for a major; I was just a few math classes and GPAs shy of their reqs. So I don’t think I’m totally off-base to say that those practicum courses were hard.

It has been quipped that academics think everyone should have exactly the same background as they themselves had. So, get your pith helmets out!

[QUOTE=Aleksandar;1239106]You are right! I need to do something else to keep them motivated. :wink:
The FS should exist all the time. It would be very simple, but still has to support coloring and texturing. Lighting and texturing would be per vertex until the FS becomes an active topic.

This would be a course at the master’s level. They already have enough knowledge of CG, transformations, lighting, texturing and legacy OpenGL.[/QUOTE]
I agree with you here. I’ll be teaching co-workers a class in OpenGL in a few weeks.
The students will be engineers who want to get things up on the screen quickly.
It will be a hands-on course with simple in-class assignments and homework requiring programming.
I will be teaching fixed-pipeline GL. Shaders would be far too complex for this type of audience.

My feeling about shaders is that they are for people who want to be professional OpenGL developers.
They are not for the rest of the world who want to learn how to do simple, 3D, graphics.
I get the feeling from reading the forums that GLSL has actually discouraged novices from trying to learn GL programming.

[QUOTE=Carmine;1241890]
I get the feeling from reading the forums that GLSL has actually discouraged novices from trying to learn GL programming.[/QUOTE]

I dunno; when I got started, PCs didn’t have any 3D APIs. You had to code your own software renderer. How is shader programming greatly different from that? Sure it’s hairy, but building 3D pipelines has always been hairy.

The students will be engineers who want to get things up on the screen quickly.

Then why are you teaching them OpenGL? Why would people who just want to draw lines and stuff be using a low-level rendering API? They shouldn’t be using OpenGL directly, ever.

OpenGL is for graphics programmers. Engineers should be using tools graphics programmers make for them.

For the most part, you are right. 99% of the ‘graphics’ done by engineers can be, and is, done using Excel, Mathematica, or plotting software with some 3D capability. But there is a limited demand for custom 3D simulation development to handle unique situations. At my company of ~3,000, there are 5 or 6 engineers who spend some time (not full time) developing 3D OpenGL applications. These apps don’t have to be as visually sophisticated as a computer game. Fixed-pipeline GL addresses our needs nicely.

OpenGL is for graphics programmers.
True for GLSL. But I think the people who developed the original OpenGL did not think this way. It seems like they tried to design a library that anyone with some technical bent could pick up if they were interested. I think they succeeded.

Engineers should be using tools graphics programmers make for them.
Engineers can’t be pigeonholed so easily. Some just go to meetings, never doing any technical work. Some do technical work running in-house or commercial software. Others spend most of their time developing software. Slowly but surely, 3D engineering visualization is becoming an expected ingredient in engineering analyses. I’m not just talking about CAD. I’m talking about 3D representations of complex scenes with many moving objects and accurate lighting.

There is a need for either commercial or custom 3D engineering visualization capability. I think it would be a shame if GL went in a direction that discouraged learning and use by anyone except those who intended to be professional, full time, graphics programmers.

I’m talking about 3D representations of complex scenes with many moving objects and accurate lighting.

Then those people need to hire graphics professionals. Either directly or by buying middleware written and supported by them.

Oh, and you’re not getting anything remotely like “accurate lighting” from fixed-function.