I was thinking that a full reference implementation of the current OpenGL specification, with compatibility profiles supported, would be a great idea: it could work as a sample implementation for hardware driver developers. Besides that, it could be used for conformance and quality testing by both driver and application developers, so that each side can make sure whether a problem is a driver bug or not.
Mesa3D is behind the current versions of the OpenGL API and the GLSL.
What about something the ARB/Khronos could take care of and provide on all major platforms: Linux/Unix, Mac, and Windows?
Mesa3D cannot support OpenGL 3, because floating-point rasterization and renderbuffers are patented. See the IP status in ARB_texture_float. We’ve been through this many times before, and the current status is that floating-point rendering is implemented but nobody has the guts to merge it to master. This message from Brian Paul makes it clear: http://lists.freedesktop.org/archives/mesa-dev/2010-September/002674.html
It’s a shame because Mesa has one of the fastest software rasterizers out there (if not the fastest) based on LLVM.
The ARB just makes the OpenGL specification; they don’t make any software per se. The ARB members don’t work for the ARB full-time; they work for their respective companies (NVIDIA, AMD, Intel, Apple…). They just make a new specification once in a while, if time allows, and that’s it. You can’t expect the ARB to write any software; Khronos is NOT a software company.
Even though Khronos is not a software company and the ARB members only work on the spec part-time, the Khronos Group makes standards, and there has to be some way to check whether an implementation complies with a standard (either a conformance test, a reference implementation, or both).
This is their responsibility, and they are actually working on it (at least on the conformance test), so the Khronos Group does deal with some of the software required to maintain its specifications; of course, most probably they outsource it or whatever.
Also, please don’t confuse the ARB with the Khronos Group: one is just a core team working on the GL spec, the other is the whole consortium, with an ecosystem and infrastructure behind it.
My suggestion is based on the fact that OpenGL is always advertised as the de facto standard for professional graphics programming, from CAD applications to academia to, sometimes, portable games. This automatically implies a software implementation that is up to date with all versions, on every platform OpenGL is supposed to be portable to.
It does not make sense to let a third-party open source project with limited resources take care of this, or even an IHV that treats OpenGL as a secondary API. The ARB or Khronos or whoever creates the specification has no excuse not to provide a reference implementation, unless they can say “we dunno how to make software because we dunno what we are talkin about in dee specification.”
One more thing I want to ask about the specification: how has the deprecation of many features helped the driver quality of OpenGL and support for the current version? At least on integrated graphics, which already support Direct3D 9 or above…
The ARB or Khronos or whoever creates the specification has no excuse not to provide a reference implementation, unless they can say “we dunno how to make software because we dunno what we are talkin about in dee specification.”
By this logic, it is the job of the C++ standardization body to create a 100% conformant implementation of C++.
No. No it isn’t. The job of a standardization body is to create a standard. Implementing that standard is the job of people who want to implement it.
The ARB has no software development resources. Khronos has only the software development resources it hires via contracts. None of these bodies are companies with actual resources; they do what they can with what they have. So the fact that there is nobody to write such a reference implementation is precisely why they don’t have one.
how has the deprecation of many features helped out the driver quality of OpenGL and support of the current version?
NVIDIA sabotaged the removal (stop calling it deprecation!) with ARB_compatibility. Thanks to NVIDIA’s tireless efforts in making OpenGL implementation writing as hard as possible, the ARB has effectively retracted the entire thing at this point, as they continue to publish compatibility specifications and write all of their extensions against the compatibility specs.
I agree completely with this one. When I first saw one of the slides where NVIDIA shows which deprecated functionalities are hardware accelerated, it made me laugh. They know best that those functionalities are actually emulated with core functionality: from fixed-function lighting and transform, which is compiled to GPU code the same way as shaders, to stippled lines, which are actually emulated with texturing and consume one of the texture units even if you don’t know about it.
The boycott of deprecation made both sides unhappy:
Legacy GL fans keep complaining that the ARB tried to remove the old stuff (even though nothing was actually removed and they still have everything they had in the old days).
People who are for a modern GL complain that the deprecation didn’t really happen as they wanted (in fact, it didn’t happen at all).
So I agree with Alfonse that you cannot expect better driver quality when even the latest released extensions still define interoperation with legacy deprecated functionality. Just check the famous NVIDIA bindless extensions; e.g. NV_vertex_buffer_unified_memory still advertises conventional vertex attributes.
People who are for a modern GL complain about the fact that the deprecation didn’t really happen as they wanted … you cannot expect better driver quality even if the latest released extensions
Come on guys. Before we rekindle this pining, again, how exactly is it going to help GL?
This was all said and done almost 2 years ago, and there was never vendor agreement or user agreement over this move. Nobody’s holding a gun to the purist’s head and making them call glTexEnv or enable GL_LIGHTING (or use the compatibility profile at all, for that matter!). And, similarly, nobody’s holding a gun to the practical user’s head and making them “not” call glTexEnv or enable GL_LIGHTING.
Let’s just get along and respect each other for having a different opinion, shall we?
Then what about this: couldn’t we add the legacy stuff to GLU, so we could have something like this:
That only runs into the same problem as conformance tests and reference implementations: who’s going to write and maintain it? The ARB/Khronos doesn’t have software development resources.
Epic BS from Alfonse. Let’s take a look at the technical deprecation issues: it cut out useful functionality, such as display lists and line width [which is bizarre: it is deprecated but still in GL core, go figure], and more. Additionally, there is a great deal of CAD software out there that uses the old-school GL interface. Gee, imagine that: backwards compatibility. As for the claim that dropping the compatibility profile makes GL easier to implement, I think that is pretty thin ice and a BS argument. The idea that “NVIDIA wanted compatibility there to make it hard for others to implement GL” is complete, utter tin-foil crap.
At any rate, the compatibility profile is optional, and for new devices, when bringing up GL (not GLES), the EGL standard recommends providing GL 3.2 (or higher) core, not the compatibility profile.
On desktop, GL is going to need backwards compatibility, simply because so much existing software uses it, and many, many man-years have been expended on software that is still actively maintained and gaining features. And in my opinion, wisely, that software is being maintained rather than rewritten, which a core profile would likely require.
By this logic, it is the job of the C++ standardization body to create a 100% conformant implementation of C++.
Ahem. Depends. Khronos does provide a sample implementation for some standards, such as OpenWF. At the end of the day, though, a GL implementation is a monster, so one is not likely to see that.
If you are asking about getting compatibility-like functionality (fixed-function pipeline, display lists, glBegin/glEnd, client-side vertex arrays, etc.), it is taken care of: you create a compatibility profile context. If you use the old-school context creation functions, then this happens automatically. Under both ATI and NVIDIA, creating a context the old-fashioned way does not give an “older” GL version than creating a core profile context the new-fashioned way (i.e. the GL version number is the same regardless). There is only one piece of functionality (I think) where a core profile and a compatibility profile differ: in a compatibility profile, one’s vertex shader must have an attribute at index 0, whereas in a core profile this is not required. If you do not set the attribute locations (be it via layout qualifiers in the GLSL or glBindAttribLocation), then one of your active attributes will end up using attribute 0 anyway.
Wait wait wait.
Floating-point render targets are patented?
That doesn’t make any sense. GPUs execute their dingies in floating point and just write to a memory area. What is there to patent?
I could understand if the invention of floating point itself (16-bit, 32-bit, 64-bit) were patented, because that involved actually working on the problem.
Is everybody crazy here? Why are you wasting your time?