I’m a little stuck on the specifics of OpenGL versioning. A bit of background…
I’m writing an application (using old-fashioned WinAPI) and I’m working on converting the graphics system to OpenGL. Originally, I did all the math on the CPU and used GDI+ for rendering, which worked fine until the polygon count of my tests started increasing. It proved to be far too slow when dealing with ~20,000 triangles.
Since I’m new to OpenGL, I started with the NeHe tutorials, which, as I later found, are quite outdated. So as of now, it’s written in pretty basic OpenGL 1.1 fixed function pipeline. Not too much has been done, so conversion is not a big problem.
As I understand it, OpenGL up through version 2.1 is built in “layers”: each version is a backward-compatible superset of the previous one, so you only need to target the latest version that covers your needs. However, 3.1 and later remove the fixed-function pipeline entirely, so I’m not sure where they fit in. Also, I’ve heard that some Intel integrated GPUs only support 1.5, while modern GPUs only emulate the FFP in the driver anyway… eh, I just don’t know.
Now, what I’m asking is: Given that I only need the functionality of OpenGL 1.1 (it’s nothing fancy, just a lot of brute force polys), would it be better for me to stay with the FFP, convert to OpenGL 2.1 shaders, or take a step further and discard all deprecated functionality in favor of OpenGL 3.1+ (in the interest of future-proofing)?
The program is a data visualization aid intended for academic/scientific research purposes, so ideally I’d like it to work on as many computers as possible, not just those that have the latest hardware.
Thanks for your time.
If you want to support old Intel GPUs, use GL 1.1. Some of them expose GL 1.4, but the drivers are buggy enough that relying on the newer functionality is a problem.
Use display lists to render your objects (if they are static). It should work fine, even on old Intel chips.
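Roughly like this (a sketch only; it assumes an existing GL 1.1 context already made current with wglMakeCurrent, and the function names are placeholders, not anything from your code):

```c
#include <windows.h>
#include <GL/gl.h>

GLuint listId;

/* Compile the static geometry once, up front. */
void buildList(const float *verts, int triCount)
{
    listId = glGenLists(1);
    glNewList(listId, GL_COMPILE);   /* record the commands, don't draw yet */
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < triCount * 3; ++i)
        glVertex3fv(&verts[i * 3]);
    glEnd();
    glEndList();
}

/* Each frame: replay the compiled commands. The driver can keep
   them in an optimized form, which is why this beats immediate
   mode even on old Intel chips. */
void drawFrame(void)
{
    glCallList(listId);
}
```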
You might want to check what GL version is supported on your target computer.
Yes, NeHe tutorials are really old.
Thanks for your response, V-man.
About the display list approach though… I am generating my objects mathematically (with z as the output of a function that takes x and y as parameters) and displaying them like one might display terrain in a game. If I wanted to “animate” this while allowing simultaneous user-controlled orbit, would I use many pregenerated display lists or a VBO?
If by “animate” you mean changing the vertices, then you should not use display lists, because you would have to recompile the list every time the data changes, and that is slow.
VBOs were introduced in GL 1.5. If you want dynamic geometry, use a VBO.
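The usual pattern for per-frame updates looks like this (a sketch, assuming a GL 1.5+ context; note that on Windows the buffer entry points are not in the stock 1.1 headers, so glGenBuffers and friends must be loaded via wglGetProcAddress or a loader such as GLEW; function names are placeholders):

```c
GLuint vbo;

/* Allocate the buffer once. GL_DYNAMIC_DRAW hints to the driver
   that we will respecify the contents frequently. */
void createBuffer(const float *verts, int floatCount)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, floatCount * sizeof(float),
                 verts, GL_DYNAMIC_DRAW);
}

/* Each frame: re-upload the animated vertices without reallocating,
   then draw from the buffer. */
void updateAndDraw(const float *verts, int floatCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, floatCount * sizeof(float), verts);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);
    glDrawArrays(GL_TRIANGLES, 0, floatCount / 3);
    glDisableClientState(GL_VERTEX_ARRAY);
}
```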
Okay, if that is the case, I probably will just ignore the old Intel GPUs and step up to using OpenGL 2.1; custom shaders will provide some flexibility in my display options, I think. It’s probably unlikely that I would hit upon an old Intel GPU anyway, since research budgets usually allow for computer upgrades (and laptops are less common).
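For my own reference, the sort of minimal GLSL 1.20 pair (the shader version that ships with OpenGL 2.1) I’d start from; just a sketch that reproduces basic fixed-function transform and per-vertex color using the legacy built-ins:

```glsl
// vertex shader (GLSL 1.20 / OpenGL 2.1)
#version 120
varying vec4 color;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    color = gl_Color;
}
```

```glsl
// fragment shader
#version 120
varying vec4 color;
void main()
{
    gl_FragColor = color;
}
```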
Thanks again for the pointers.
If you can rely on desktops only, or on Intel CPUs based on Sandy or Ivy Bridge (the last two generations of Core i-processors, with HD 2000 graphics or better), then stepping up to GL 3.3 is a much better solution. Not because of the deprecations, but because of the advantages it brings.
If I am limited to OpenGL 3.2 at the highest (due to a binding restriction on one of my intended platforms), would it still be a good idea to step up to that point?
EDIT: I work primarily on an older i7 laptop (Nehalem microarchitecture) with a dedicated GPU, but my target computers will likely have Sandy Bridge or newer. I’m assuming that the Intel integrated GPUs will be irrelevant if they have a relatively up-to-date dedicated GPU – is that right?