What do you think of the future of OpenGL?

Originally posted by SirKnight:
[b]I noticed in that article it says FUBAR means “****ed Up Beyond All Repair.” Funny how it doesn’t say that it can also mean “****ed Up Beyond All Recognition.” The latter is usually what I know it as. Both ways are correct though. Actually, I think they say “… Recognition” in Saving Private Ryan; that’s how I figured out what FUBAR meant back then.

-SirKnight[/b]

Perhaps “Recognition” is the medical term and “Repair” is the technical term. I’ve heard both; “Recognition” tends to relate to a person, while “Repair” tends to relate to equipment, etc. (But they’re interchangeable.)

Oh, and I’m really glad we’re discussing the future of GL again - it’s been almost a week since the last thread…

Originally posted by Antorian:
What I don’t understand is why so many people want one API to become the only one?
I personally think the existence of 2 APIs is just as useful for the industry as the existence of PAL and NTSC, meters and inches/feet/miles, grams and pounds, etc. That is, in some cases it doesn’t bother you at all, and in other cases it’s a pain in the ass, but it is never a good or useful thing.

Originally posted by Antorian:
If they were 3 or 4, etc., it would be a pleasure to make your own choice and apply it.
Isn’t it?
I imagine driver developers have enough trouble with drivers for 2 APIs, and you would like them to have 3 or 4 times the bugs to fix :stuck_out_tongue:

Originally posted by john:
I think the problem you’re having with whatever version of g++ you’re using is that g++ now conforms to the official C++ standard. From my understanding, C++ has been slowly evolving and has only ~very recently~ been locked to THE official ANSI C++ coding standard. I used to have some of the changes in my head, but now I can’t think of anything. ;-(

And it only took the gcc maintainers 6 years to finally “support” the standard. My experience with gcc has been that after 2.8.1, it has just been garbage. Heck, even 2.8.1 sucked on many platforms (especially SPARC). A certain hardware/OS vendor’s abominable mangling of gcc for their latest OS is even worse than the main branch as regards C++ “support”. MS Visual C++ and Metrowerks’ C++ compilers are far better. Too bad Metrowerks has apparently dropped their project to port their compiler to Linux.

[b]
Anyways, you could try using kgcc instead of g++. kgcc tends to be an older gcc version used to compile the kernel.

[/b]

Yep, kgcc these days is usually one of the pre-2.8 stable gcc releases, since the kernel can’t be compiled with experimental code generators.

gcc’s quality of late has just been so shoddy that I’m embarrassed to have once been a fan of it.

As for Linux itself, I used Linux exclusively for about 8 years or so. I finally got tired of having to fight with my machine to do things that computers are supposed to make easier (this is why Linux is still mainly used by hobbyists and fringe elements). I also got tired of the way that these “open source” folks who scream at you about “freedom” and so forth are so adamant A) about forcing their views down everyone else’s throats, and B) that they never ever code bugs and therefore any fixes or enhancements submitted to them are the work of Satan. I’m surprised anything gets done in the “open source community” given the fascist stance so many of these folks take. My other boxes don’t get any use.

But that’s just me.

Okay, off my soapbox now.

Oh, and please send flames to /dev/null.

Speaking of performance, is there something like EXT_compiled_vertex_array, NV_vertex_array_range, NV_vertex_array_range2 or ATI_array_object in D3D?

~velco

Okay, now for my on-topic post.

The future of OpenGL… I think the fact that the OpenGL specification has been revised very little overall in, what, over 10 years now, says quite a lot for its stability and usefulness. As someone else pointed out, there are lots of “old tricks” in the existing spec that can be used rather creatively.

I’m pretty excited about the ARB’s ratification of the vertex_program and fragment_program extensions (and even more so that both Nvidia’s and ATI’s latest drivers support them, though fragment_program is mysteriously absent from Nvidia’s 41.09 release, while ATI has both), which is partly due, no doubt, to pressure from the competition of Microsoft’s D3D8 and 9 features. However, there is probably more credit due to the game developers for pushing for these features in OpenGL. I know there are a great many developers who will most likely always refuse to use D3D for various reasons.
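
For anyone who hasn’t played with them yet, the amount of code needed to get a minimal ARB vertex program running is pleasantly small. A rough sketch (assuming the extension entry points have already been fetched, e.g. via wglGetProcAddress on Windows, and with error checking omitted):

// A trivial pass-through vertex program: just copy the incoming position.
const char *src =
    "!!ARBvp1.0\n"
    "MOV result.position, vertex.position;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(src), src);
glEnable(GL_VERTEX_PROGRAM_ARB);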

The main reason against D3D is portability, of course. If your application absolutely must run on the widest range of platforms possible with the minimum amount of re-coding, then the choice is easy: OpenGL. If you’re only concerned about Windows, it’s not so simple, since you no longer have the portability issue.

Another point strongly in favor of OpenGL: D3D is still, for all the work MS have done to simplify the API and make it easier to use than it used to be, a horribly complicated API to get up and running with. The amount of setup code you need to write – even still – is more than you really ought to have to do to get a darn library ready to work with. OpenGL is still the clear winner here, particularly since it’s just as easy to use from C as from C++. D3D can be called from C, but, since it was designed as a C++ API and implemented in C++, you have to explicitly reference the vtable of the object whose methods you’re calling. OpenGL doesn’t make you do that, since it was designed as a C API, so its usage is identical either way.
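
To illustrate (a sketch using the usual D3D8 device interface; the particular method doesn’t matter):

// From C++, the compiler resolves the COM vtable for you:
device->BeginScene();

// From C, you go through the vtable yourself (or use the helper macros
// the D3D headers provide, e.g. IDirect3DDevice8_BeginScene(device)):
device->lpVtbl->BeginScene(device);

// OpenGL is a flat C API, so the call is identical from C and C++:
glBegin(GL_TRIANGLES);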

The biggest thing D3D has going for it, in my opinion, is its being one part of a suite of multimedia APIs. While you can use any of the suite independently of the rest, I think a lot of developers who use DirectX exclusively do so because after all the setup work to get the DirectX libraries ready to work with, they probably want to “get their money’s worth”, so to speak.

I think that OpenGL will always be with us, and will continue to gain features as the industry drives it to. D3D will continue to improve, with new features added there, too, and that will also drive some of the new features OpenGL will gain in the future. But, keep in mind that recent versions of D3D had lots of stuff pruned from them, while OpenGL has never (to my knowledge) had official, “core” features removed. The graphics card vendors’ driver implementations will also continue to improve, and all of us will improve our code, too.

I think there will always be some disagreement in the developer community as to which API is “better”. Which is “better” is really a matter of opinion – I happen to prefer OpenGL as a matter of course, while many of my colleagues prefer D3D. I’m probably biased because I learned OpenGL first, and when I first looked at D3D, it was really awful. That first impression is really difficult to overcome. Besides which, it’s only been recently that I’ve been doing work on the Windows platform (everything before was on one version of UNIX or another), and, since OpenGL is there and I know it already, I’m comfortable continuing to use it. I’ve played a bit with D3D, but I don’t think I’ll be making the switch any time soon.

Velco: yeah, it’s one of the most basic things in D3D. When you create a vertex or index buffer, you specify the memory pool it’s going to live in. The default pool uses video or AGP memory (static / dynamic VBs/IBs).
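
In D3D8 terms it looks roughly like this (a sketch from memory; the vertex format, count, and MyVertex type are just placeholders):

// Create a static, write-only vertex buffer in the default pool,
// which lets the driver place it in video or AGP memory.
IDirect3DVertexBuffer8 *vb = NULL;
device->CreateVertexBuffer(
    numVertices * sizeof(MyVertex),  // size in bytes (MyVertex is hypothetical)
    D3DUSAGE_WRITEONLY,              // static: write once, draw many times
    D3DFVF_XYZ | D3DFVF_TEX1,        // vertex format
    D3DPOOL_DEFAULT,                 // default pool = video/AGP memory
    &vb);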

Talisman: agreed about portability. I’ll also add my contribution: ugly, long type names. I disagree about the setup length: you can do very complex stuff if you want, but if you want to have it up and running quickly, it’s even faster than OpenGL. The C vs C++ thing is obviously true, but it can be seen as both an advantage and a disadvantage (i.e. you’ve got an OOP API).
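
For reference, the quick path really is short these days. A minimal windowed-mode D3D8 startup, from memory (assumes an existing window handle hwnd; error handling omitted):

IDirect3D8 *d3d = Direct3DCreate8(D3D_SDK_VERSION);

// Match the back buffer to the current desktop format.
D3DDISPLAYMODE mode;
d3d->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode);

D3DPRESENT_PARAMETERS pp;
ZeroMemory(&pp, sizeof(pp));
pp.Windowed = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat = mode.Format;

IDirect3DDevice8 *device = NULL;
d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                  D3DCREATE_SOFTWARE_VERTEXPROCESSING, &pp, &device);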

Y.

Originally posted by Antorian:
[b]What a subject

What I don’t understand is why so many people want one API to become the only one?

I use OpenGL and D3D a bit, and both seem good…
Why Only One?
If they were 3 or 4, etc., it would be a pleasure to make your own choice and apply it.
Isn’t it?

OK I know, I know… I’m French, OK… but…
why not ?
Those two Graphic Libraries work fine and can be improved.
So let’s go… use OpenGL, use D3D, whatever you want, and please still use your pencil to draw

Sorry for this “Filozofikal” mind of a desperate crazy programmer [/b]

Antorian: To tell you the truth, I too want at least two APIs to be available, widely available. Since I do both D3D and OpenGL, I have no quarrel with D3D, none at all… well, one thing: to tell you the truth, I don’t like the COM style of it :\ At least I hope that both survive and that perhaps even some new ones will emerge. The reason I asked the question was that I just wanted to hear people’s opinions as to which MIGHT become dominant. I hope it will be neither, but dominance tends to occur; it happened with internet browsers, at least on Windows. And since we’re talking about Microsoft’s DX, I get a déjà vu feeling :\ Anyway, I like both, I do both, and I hope both will survive.

Cheers!

Originally posted by Talisman:
And it only took the gcc maintainers 6 years to finally “support” the standard.

Huh? The C++ standard is from 1998, the C standard from 1999. What six years?

Yep, kgcc these days is usually one of the pre-2.8 stable gcc releases, since the kernel can’t be compiled with experimental code generators.

Way off. 2.95.x, RedHat’s 2.96, 3.0, 3.1, and 3.2.x compile 2.5.x kernels just fine.

gcc’s quality of late has just been so shoddy that I’m embarrassed to have once been a fan of it.

At least the SPEC results published on SUSE’s site do not agree with you

~velco

Originally posted by Talisman:
gcc’s quality of late has just been so shoddy that I’m embarrassed to have once been a fan of it.

Are you sure it’s not your code that’s shoddy?

GCC 3.2 is a bit picky on some stuff, but it’s not a bad compiler at all.
(MSVC6 is/was worse)

@richardve / @all:

BTW: has anyone tested the new Intel C++ 7.0 compiler yet? In an article, they said that it is up to 30% faster than gcc (and that means, reading between the lines: xxxx% faster than MSVC??)
I think there is an eval version available; is it worth giving it a try?
Any experience?

Originally posted by DJSnow:
[b]@richardve / @all:

BTW: has anyone tested the new Intel C++ 7.0 compiler yet? In an article, they said that it is up to 30% faster than gcc[/b]

It is not bad. Not at all.

I ran some integer benchmarks and it appeared somewhat worse than gcc-3.2.1.

I also ran some floating point benchmarks (a couple of matrix multiplication algos). It has a distinct advantage here, especially at vectorizing loops. It was worse than handcrafted GCC builtins code (e.g., __builtin_ia32_mulps, etc.), but then again one could code exactly the same thing with ICC builtins.
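
For the curious, the kind of handcrafted code I mean looks roughly like this (gcc 3.x vector extensions, compiled with -msse; treat it as a sketch):

// Multiply four packed floats at once with the SSE mulps instruction.
typedef float v4sf __attribute__ ((vector_size (16)));

v4sf mul4 (v4sf a, v4sf b)
{
    return __builtin_ia32_mulps (a, b);
}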

~velco

EDIT: In any case, changing algorithms to optimize cache usage had far more impact than either compiler’s optimizations.

[This message has been edited by velco (edited 02-24-2003).]

icc is supposed to be better than gcc on (surprise, surprise) P4’s. http://www.coyotegulch.com/reviews/intel_comp/intel_gcc_bench2.html

But supposedly gcc is catching up. http://kerneltrap.org/node.php?id=583 http://people.redhat.com/dnovillo/spec2000/gcc/global-run-ratio.html

I just emerged icc. I’ll give it a fly (if it works with my makefiles). Don’t expect as thorough a benchmark as the ones above.

Originally posted by velco:
[b]Speaking of performance, is there something like EXT_compiled_vertex_array, NV_vertex_array_range, NV_vertex_array_range2 or ATI_array_object in D3D?

~velco

[/b]

How about vertex buffers?

to sirknight:

until yesterday I had gcc 2.96; now I have gcc 3.2, and my s/w that compiled perfectly happily under the old gcc dies horribly when I try recompiling under 3.2. The reason my code was dying is that the standard library stuff is bound under the std namespace, and that namespace no longer appears to be pulled in by default. My code compiled after I added

using namespace std;

after I #include’d some standard library stuff (like <iostream>, for instance). So, give that a shot and see if gcc 3.2 is more amenable to your code.
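
i.e. something along these lines should now compile (a minimal sketch):

#include <iostream>

using namespace std;  // gcc 3.2 puts the standard library in namespace std

int main()
{
    // without the using-directive (or an explicit std::cout), 3.2 rejects this
    cout << "hello" << endl;
    return 0;
}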

cheers
John

Originally posted by richardve:
[b] Are you sure it’s not your code that’s shoddy?

GCC 3.2 is a bit picky on some stuff, but it’s not a bad compiler at all.
(MSVC6 is/was worse)[/b]

I admit I haven’t tried 3.2. The gcc versions I have played with post-2.8 have mostly been junk.

Whoever posted the C++ spec date of 1998: my copy direct from ISO says 1997… could be a typo, I suppose. It’s now 2003, and there was a post that gcc “only recently” came up to spec on C++, so without any other definition of “recently” (as well as 3.1 blowing major chunks on C++ “Hello World” when I tried it), I made the estimate of 6 years.

I don’t want to get into any arguments over this, so if the current spec says 1998, cool, I’ll assume my copy has a typo. As for gcc’s quality, well, I gave up on Linux a couple of years ago for various reasons, so except for Mac OS X (where gcc really severely sucks at C++), I don’t have any reason to muck with gcc (and Metrowerks is better on Mac anyway). The Intel compiler is pretty good on Windows, and Microsoft’s compiler is decent as well. I haven’t yet run into any of the conformance problems that others keep harping on in the MS compiler, so I can only assume that these problems either aren’t particularly common, or they’re just inconsequential. In any case, I can’t argue that they don’t exist, only that I haven’t run into them. Besides, does anyone really use partial template specialization? I’m using primarily VC7, not 6, so I can’t comment on 6 vs gcc.

The SPEC benchmarks have been used for both sides of all arguments, so I’m just not going to go there.

Incidentally, I agree with whoever posted the bit about RedHat’s modifications to gcc making it pretty much useless. I always had to get the unmodified source and bootstrap the compiler myself. It’s a royal pain in the butt on non-Intel systems, but worth it if you’re serious about developing on Linux.

Yeah, I’m going to have to get a REAL copy of gcc and overwrite this RedHat-modified one I have. The install process of gcc in Linux looked kinda crazy, definitely not a “click setup and watch it go” kind of thing.

Too bad I can’t run VC.NET in Linux. I like MS compilers a lot. I’ve never had any problems with them: easy to use, the best debugger there is, etc. I think they are great. To me, Visual Studio is the best thing MS has. But that’s just me.

-SirKnight

Originally posted by SirKnight:
The install process of gcc in Linux looked kinda crazy, definitely not a “click setup and watch it go” kind of thing.

Don’t write “Linux” when you should write “RedHat”.

RPM-based distros have been known to be crap. Whoever uses one of them can’t argue.

I have no problem installing gcc on a regular basis: I just type “emerge -u gcc” and watch it being upgraded (every month or so). I don’t need to do anything else (such as the mess with alternative versions on Mandrake or RedHat).

Julien.

Well, regardless of whether it’s RedHat or not, it’s STILL Linux. I know there are differences between them all, but ultimately it’s still Linux.

Anyway, I figured out why my programs for my class weren’t working. OK, this is what I was doing:

ifstream file;
file.open( filename );

// ... use the file ...

file.close();

file.open( differentfile );

// ... use the file with my data structure ...

file.close();

// ... stuff ...

OK, it turns out that it wasn’t opening my second file there, when it SHOULD have been. I have done programs like this on many different compilers before and it works fine, but not with gcc 3.2. What I had to do was create another object of ifstream type called file2. Then it would open my second data file and the program worked perfectly after that. Now WTF is up with this? I should not have to create an object of ifstream or ofstream, depending on what I’m doing, for each file I plan on opening. Once I finish with one file, I should be able to use the same ifstream object to open something else. Weird.
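
(A guess at the cause, for what it’s worth: reading to end-of-file sets the stream’s eofbit/failbit, and under the C++ standard ifstream::open() does not reset those flags, while older libraries often did. If that’s what’s happening, a single clear() lets the one-object version work, using the same placeholder names as above:)

ifstream file;
file.open( filename );
// ... read until EOF; eofbit/failbit are now set on the stream ...
file.close();

file.clear();                // reset the error flags before reusing the object
file.open( differentfile );
// ... the second file now reads fine ...
file.close();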

BTW, I just finished d/l’ing the latest gcc, 3.2.2. I d/l’ed the whole package; it was a tar.bz2 at 19 MB. OK, so now I have a folder called home/myusername/gcc322/gcc.3.2.2 (well, I’m not sure exactly about that last folder name, it’s something like that anyway). OK, so that last folder contains all the gcc stuff. Now how the heck do I upgrade my current gcc with this new one? The install docs on the gcc.gnu website don’t help at all. All they do is confuse me. I tried doing what they say but it didn’t work. I don’t know wtf they are talking about anyway, so that doesn’t help. So if someone can help me out here it would be great. If you want to email me this instead of keeping this thread growing, then go ahead. Doesn’t matter to me.

-SirKnight

SirKnight:

I. The following program works just fine with “c++ (GCC) 3.2.2 20021228 (prerelease)”.
#include <iostream>
#include <fstream>

int
main()
{
    int i;
    std::ifstream f;

    f.open ("foo");
    f >> i;
    f.close ();

    std::cout << i << std::endl;

    f.open ("bar");
    f >> i;
    f.close ();

    std::cout << i << std::endl;

    return 0;
}

II. Do not mess with the package manager on your distro, i.e. do not overwrite your distro’s gcc installation; either uninstall it (but only after building the new compiler) or leave it alone.

III. Building GCC:

a) Assuming your sources are in gcc-3.2.2, choose some working directory, e.g. ~/build/gcc. It is important that your working dir is not the GCC source dir and is not a subdirectory of the GCC source dir.

b) Decide where you want the new GCC installed.
/usr/local, /usr/local/gcc, ~/opt/gcc are common choices. That dir is hereafter referred to as $prefix.

c) go to your work dir

cd ~/build/gcc

d) configure GCC build

$srcdir/gcc-3.2.2/configure --prefix=$prefix --enable-languages=c,c++

e) build GCC
make bootstrap

f) install GCC
make install

g) add the $prefix/bin directory to the PATH

h) enjoy

~velco

Originally posted by SirKnight:
Well, regardless of whether it’s RedHat or not, it’s STILL Linux.

Linux is just a kernel. RedHat Linux is an operating system consisting of a Linux kernel and a whole bunch of GNU packages. Gentoo Linux (my current OS) is a different operating system, as are Debian Linux, etc.


Now how the heck do I upgrade my current gcc with this new one?

I’m not into RedHat so I don’t know the details, sorry.
However, I don’t know if you actually want to upgrade the version of gcc coming with your distribution: gcc’s C++ ABI is moving, and it only recently got somewhat stabilized. Replacing your gcc with the latest one might break many things in your OS.
Play it safe and install this version beside the other one (and maybe create a small script to update the /usr/bin/gcc, etc. links. Maybe RedHat uses the /etc/alternatives scheme?).

The good point of Gentoo is that we get one and only one version of gcc at a time (the bad(?) point being we must recompile everything with the new gcc if the ABI changed).

Julien.