GL_MAX_EVAL_ORDER limited to 8 under Microsoft

Hi,

I am trying to use the OpenGL evaluator functions, glMap and glEvalCoord, and I was unpleasantly surprised by how small GL_MAX_EVAL_ORDER is. Under the Microsoft OpenGL API I get a value of only 8 when I execute the following code:

	GLint value;
	glGetIntegerv(GL_MAX_EVAL_ORDER, &value);

I remember getting values of about 30 or 32 on Linux. Does anyone know why this OpenGL limit differs so strongly between Linux and Windows? Are there any suggestions for how to overcome it?
Thanks in advance.

OS is irrelevant - what’s your hardware?

I have an NVIDIA card. Which hardware information do you need exactly? Graphics adapter and driver name?
NVIDIA GeForce GT 620. Driver version 21.21.13.7306. Driver date 01.10.2016.
Is the graphics adapter too new for OpenGL?

On each system, can you give me the output of the following:

glGetString (GL_VENDOR);
glGetString (GL_RENDERER);
glGetString (GL_VERSION);

glGetString (GL_VENDOR); -> Returns “NVIDIA Corporation”
glGetString (GL_RENDERER); -> Returns “GeForce GT 620/PCIe/SSE2”
glGetString (GL_VERSION); -> Returns “4.5.0 NVIDIA 373.06”

This is the normal value. It should be the same on Linux. See this link:

http://feedback.wildfiregames.com/report/opengl/device/GeForce%20GT%20620

On Linux you might have been using a different OpenGL implementation (i.e. Mesa), or different hardware… These features are old and might be less well supported now (though this is a big assumption).

Thank you for this valuable information.
Sure, my Linux code ran on different hardware (also NVIDIA, but 10 years ago).
Still, it is unclear to me why the implementation on such new hardware cannot offer at least the same limit as 10 years ago.
From what is written above I can only conclude that implementing the evaluators myself is the only way to make progress here.

[QUOTE=amaltsev;1285946]Still, it is unclear to me why the implementation on such new hardware cannot offer at least the same limit as 10 years ago.
From what is written above I can only conclude that implementing the evaluators myself is the only way to make progress here.[/QUOTE]

From this page (http://feedback.wildfiregames.com/report/opengl/feature/GL_MAX_EVAL_ORDER#10), it seems this might be because you are using the lowest-end model of the GT 6xx series. For example, the GT 640 and 650 can report 10.

On Mesa you can expect higher values because this functionality is emulated in software. And it seems that ATI and Intel have hardware which supports a higher value.

This functionality is old and has been deprecated since GL 3, so you should not rely on it. You can simulate the evaluators yourself, or use tessellation shaders in order to get hardware acceleration.

As a side note, if I remember correctly, even in the GL 2.x era people were not using (or relying on) evaluators that much. But I might be mixing things up here…
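To illustrate the "simulate them yourself" option: a 1D evaluator can be replaced by evaluating the Bézier curve on the CPU with de Casteljau's algorithm. This is only a minimal sketch in plain C (no GL calls); `bezier_point` is a hypothetical helper name, and in practice you would tessellate the curve at uniform parameter values and feed the resulting points into a vertex buffer:

```c
#include <stddef.h>

/* Evaluate a Bezier curve at parameter t in [0,1] using de Casteljau's
 * algorithm. ctrl holds n 3D control points (n = order of the curve).
 * Because no GL evaluator is involved, n is not limited by the driver's
 * GL_MAX_EVAL_ORDER (here only by the local scratch array size). */
static void bezier_point(const float ctrl[][3], size_t n, float t, float out[3])
{
    float tmp[64][3];                        /* supports orders up to 64 */

    /* copy the control points into scratch storage */
    for (size_t i = 0; i < n; ++i)
        for (int c = 0; c < 3; ++c)
            tmp[i][c] = ctrl[i][c];

    /* n-1 reduction passes: repeated linear interpolation */
    for (size_t k = 1; k < n; ++k)
        for (size_t i = 0; i < n - k; ++i)
            for (int c = 0; c < 3; ++c)
                tmp[i][c] = (1.0f - t) * tmp[i][c] + t * tmp[i + 1][c];

    for (int c = 0; c < 3; ++c)
        out[c] = tmp[0][c];
}
```

Tessellating the curve is then analogous to glMapGrid1f/glEvalMesh1: call the function at t = 0, 1/N, 2/N, …, 1 and emit the points.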

OK. Very nice. Thank you, silence. Please do not get me wrong; the following is not addressed to you.
When a feature is deprecated or no longer supported, it would be great to have a note in the documentation. Neither the OpenGL reference page for glEvalCoord nor other resources, e.g. Microsoft's https://msdn.microsoft.com/de-de/library/windows/desktop/dd373527(v=vs.85).aspx, provide such a note.

[QUOTE=amaltsev;1285954]When a feature is deprecated or no longer supported, it would be great to have a note in the documentation. Neither the OpenGL reference page for glEvalCoord nor other resources, e.g. Microsoft's https://msdn.microsoft.com/de-de/library/windows/desktop/dd373527(v=vs.85).aspx, provide such a note.[/QUOTE]

I understand your point. A couple of thoughts on that.

If you’re somewhat new to OpenGL and don’t really know which of the older OpenGL APIs are efficient and which aren’t, you may just want to avoid them for now and stay with the APIs defined in the OpenGL Core Profile. Modern GPUs and GPU drivers should generally perform well with those.

There are two easy ways to tell which APIs are in the Core Profile.

  1. First, the OpenGL Wiki has a Core API Reference Page (notice that it spans 2 pages). It links to man-page-like entries, one for each API function. If you see one of these man-page entries in the wiki for an API function, then you can be pretty sure it’s in the Core Profile.

  2. Alternatively, you can just look in the authoritative source, the core profile specification: OpenGL 4.5 Core Profile Spec. If it’s not in there, it’s not in core.

That said, once you get more experience, you may at some point find that you’re reading or working on software that uses some of the older OpenGL APIs. There, you’ll find the OpenGL Compatibility Profile specification a helpful tool: OpenGL 4.5 Compatibility Spec

As always, links to all of the OpenGL specifications and the OpenGL extension definitions can be found on the OpenGL registry: http://www.opengl.org/registry

…nor other resources e.g. Microsoft https://msdn.microsoft.com/de-de/library/windows/desktop/dd373527(v=vs.85).aspx provide such tips.

Oh, and one final bit of advice. Never trust Microsoft on anything having to do with OpenGL. They’ve been trying to submarine its use since the early Windows days. I’d recommend you get your information from www.opengl.org, www.khronos.org, and OpenGL books and tutorials.

That could stand to be referenced and documented as well.

My understanding is that Microsoft actually wanted (and needed) to support OpenGL in order to break into the graphics workstation market, but that what actually happened was that the NT and Windows 95 teams had a disagreement, with the NT team having a licence and driver model for OpenGL which they refused to share with the '95 team. Hence the way things turned out.

This actually perfectly explains some of the weird communications that came out in the mid/late '90s where Microsoft were trying to big-up OpenGL as a high-precision CAD API but unsuitable for games. Of course they didn’t want to kill OpenGL because they badly wanted NT workstations in the CAD market (and remember that NT didn’t even run D3D at the time).

It may seem odd for two internal teams in the same company to have such a disagreement, but my own experience of dealing with Microsoft in other contexts is that this kind of thing goes on all the time.

[QUOTE=mhagain;1285973]My understanding is that Microsoft actually wanted (and needed) to support OpenGL in order to break into the graphics workstation market, but that what actually happened was that the NT and Windows 95 teams had a disagreement, with the NT team having a licence and driver model for OpenGL which they refused to share with the '95 team. Hence the way things turned out.

This actually perfectly explains some of the weird communications that came out in the mid/late '90s where Microsoft were trying to big-up OpenGL as a high-precision CAD API but unsuitable for games. Of course they didn’t want to kill OpenGL because they badly wanted NT workstations in the CAD market (and remember that NT didn’t even run D3D at the time).

It may seem odd for two internal teams in the same company to have such a disagreement, but my own experience of dealing with Microsoft in other contexts is that this kind of thing goes on all the time.[/QUOTE]

Some parts of your post woke up some forgotten memories of that old past… However, Windows NT ‘died’ when Windows 2000 came, which was the first Windows from the NT family to have good, full DirectX support. So that was in 1999 or 2000 or so. Such a long time ago. They have had 17 years or so since then to improve their documentation.