Official feedback on OpenGL 4.5 thread

Alfonse, welcome back. You haven’t changed much :slight_smile: We missed you.

Barthold

Alfonse, glad to see you keeping up and reviewing the new release with your report and awards.

Am I reading your criticism of Geometry Shaders right? It seems to say that Geometry Shaders are useless.
Are you considering Geometry Shaders obsolete and useless?
Would it be fine if they were deprecated in favor of other, newer ways to achieve the same things?
(Please do mention which other, newer ways and API functions those would be, and be complete about it, so other people can comment and make sure no use case is missed.)

(If they are, Geometry Shaders shouldn’t be included in OpenGL NG and should be deprecated in OpenGL 4.x.)

The only use case I can come up with for GS’s, one that can’t be solved with either AMD_vertex_shader_layer/viewport_index (again, assuming it could work with a TES) or NV_geometry_shader_passthrough, is rendering geometry to cubemaps. That is, projecting each primitive onto the six faces of a cube.

The benefit of a GS here is GS instancing, which allows multiple instances of a GS to operate on the same input primitive. There might be a way to stick similar instancing functionality into the VS or TES, but that would really depend on the hardware.
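For illustration, a rough sketch of what that looks like (the faceViewProj uniform name is made up, the layered cubemap FBO setup is omitted, and GS instancing needs GL 4.0 / ARB_gpu_shader5):

```c
/* Sketch of a GS-instanced cubemap pass: the shader runs six times per input
 * triangle, and each invocation projects the triangle with that face's matrix
 * and routes it to the matching layer of a layered cubemap FBO. */
static const char *cubemap_gs_src =
    "#version 400 core\n"
    "layout(triangles, invocations = 6) in;\n"   /* one invocation per cube face */
    "layout(triangle_strip, max_vertices = 3) out;\n"
    "uniform mat4 faceViewProj[6];\n"            /* hypothetical per-face view-projection */
    "void main() {\n"
    "    for (int i = 0; i < 3; ++i) {\n"
    "        gl_Layer    = gl_InvocationID;\n"   /* pick the cube face / FBO layer */
    "        gl_Position = faceViewProj[gl_InvocationID] * gl_in[i].gl_Position;\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";
```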

Would it be fine if they were deprecated in favor of other, newer ways to achieve the same things?

No. Deprecation and removal does not work. And they’re not going to do it again.

I’ve used the GS stage to generate per-triangle normals on the fly for certain data sets; that’s certainly been quite useful (it’s still not as fast as just including the normal in your vertex format, but I consider that a fault of the implementation or the hardware rather than of the concept of the GS stage).
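For reference, a rough sketch of that kind of shader (the details here are guesses, not the exact code; the varying name is made up, and the normal is computed in whatever space gl_Position happens to be in):

```c
/* Flat-normal GS sketch: take a whole triangle, compute one normal with a
 * cross product, and hand that same normal to all three vertices. */
static const char *flat_normal_gs_src =
    "#version 330 core\n"
    "layout(triangles) in;\n"
    "layout(triangle_strip, max_vertices = 3) out;\n"
    "out vec3 faceNormal;\n"                     /* hypothetical varying name */
    "void main() {\n"
    "    vec3 e0 = gl_in[1].gl_Position.xyz - gl_in[0].gl_Position.xyz;\n"
    "    vec3 e1 = gl_in[2].gl_Position.xyz - gl_in[0].gl_Position.xyz;\n"
    "    vec3 n  = normalize(cross(e0, e1));\n"  /* one normal per triangle */
    "    for (int i = 0; i < 3; ++i) {\n"
    "        faceNormal  = n;\n"
    "        gl_Position = gl_in[i].gl_Position;\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";
```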

I’ve used the GS stage to generate per-triangle normals on the fly for certain data sets

I’m curious about your need to do that on the GPU. If your data set was pre-computed, then surely the normals could be as well. And if your data set was computed on the GPU, then whatever process that computed it could also have given it normals, yes?

In any case, it would be possible to do this with the TES. You’d just pass outer tessellation levels of 1, which effectively causes no tessellation. Granted, you would be invoking the tessellation primitive generator, only for it to do no actual work. So I don’t know how it would compare, performance-wise. I would guess that the GS is faster on a 1:1 basis.
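For illustration, the host side of that no-op tessellation path might look something like this (a sketch only; it assumes no TCS is bound, so the default patch levels apply, and program/VAO setup is omitted):

```c
#include <GL/glew.h>   /* or any other loader exposing GL 4.0+ */

/* Draw 3-vertex patches with all default levels set to 1, so the primitive
 * generator emits each triangle unchanged and the TES runs once per corner. */
static void draw_patches_without_subdivision(GLsizei vertex_count)
{
    const GLfloat outer[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
    const GLfloat inner[2] = { 1.0f, 1.0f };

    glPatchParameteri(GL_PATCH_VERTICES, 3);                 /* one patch = one triangle */
    glPatchParameterfv(GL_PATCH_DEFAULT_OUTER_LEVEL, outer);
    glPatchParameterfv(GL_PATCH_DEFAULT_INNER_LEVEL, inner);
    glDrawArrays(GL_PATCHES, 0, vertex_count);               /* instead of GL_TRIANGLES */
}
```

The TES itself would declare layout(triangles) in and just pass the barycentrically interpolated position through, which for un-subdivided patches reproduces the original corners.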

Which brings up an interesting GS vs. tessellation question. Is it faster to do point sprites in the GS than it is to use the TES to do them (one-vertex-patches, “tessellated” as quads)?

How about: Using a geometry shader with multistream output to perform geometry instance culling, LOD selection, and LOD binning (emphasis on the binning here).

aqnuep blogged about this 4 years ago, and for quite a while now, this can be implemented very efficiently with a geometry shader (no CPU-GPU sync).

I remember that. I even made an oblique reference to it in my original post (“used to do things like frustum culling and the like”). But then I mentioned that you can do all of that just fine in a Compute Shader. A compute shader logically fits better, because you’re not rendering; you’re doing arbitrary computations. You don’t have to pretend that you’re doing vertex rendering and capturing primitives as output.

Compute shaders don’t have to fit the output into a small set of bins equal to the number of streams; they can write drawing commands directly. So the CS version has more functional advantages as well.

Plus, the same hardware that can do Compute Shader operations can do indirect rendering, so you get even more of a performance boost with that.
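As a sketch of what that compute path looks like on the host side (all of the names here are hypothetical, and the compute shader itself isn’t shown):

```c
#include <GL/glew.h>   /* or any other loader exposing GL 4.3+ */

/* Matches the command layout glMultiDrawArraysIndirect reads from the buffer. */
typedef struct {
    GLuint count;
    GLuint instanceCount;
    GLuint first;
    GLuint baseInstance;
} DrawArraysIndirectCommand;

/* The compute shader (cull_program) is assumed to test each instance and write
 * a DrawArraysIndirectCommand per survivor into indirect_buf, bound as an SSBO.
 * The same buffer is then consumed directly as the indirect draw source. */
static void cull_then_draw(GLuint cull_program, GLuint draw_program,
                           GLuint indirect_buf, GLuint num_instances, GLsizei max_draws)
{
    glUseProgram(cull_program);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirect_buf);
    glDispatchCompute((num_instances + 63) / 64, 1, 1);       /* 64 instances per work group */

    glMemoryBarrier(GL_COMMAND_BARRIER_BIT);                  /* writes -> indirect command fetch */

    glUseProgram(draw_program);
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
    glMultiDrawArraysIndirect(GL_TRIANGLES, 0, max_draws, 0); /* offset 0, tightly packed */
}
```

In practice the draw count could itself live on the GPU (ARB_indirect_parameters adds a count buffer for exactly this), but a fixed max_draws with zeroed-out unused commands already avoids any readback.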

My point isn’t that GS’s are useless. It’s that there is very little a GS can do that other things can’t do at least as well.

I like its use for this:
http://www.humus.name/index.php?page=3D&ID=87
Note how well it antialiased the alpha-tested objects, too.

Interesting.

I came to realize that actually, with GS passthrough or vertex_shader_layer/viewport_index, you don’t even need GS’s to render the same primitive to multiple layers. Though you do have to use regular instancing to do it, which means that simultaneously using instancing for its intended purpose becomes rather difficult.
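For completeness, a sketch of that instancing trick (names made up; it needs AMD_vertex_shader_layer or the later ARB_shader_viewport_layer_array):

```c
/* Layered rendering with no GS at all: the vertex shader writes gl_Layer from
 * gl_InstanceID, so drawing N instances replays the geometry into N layers. */
static const char *layered_vs_src =
    "#version 330 core\n"
    "#extension GL_AMD_vertex_shader_layer : require\n"
    "layout(location = 0) in vec3 position;\n"
    "uniform mat4 layerViewProj[6];\n"           /* hypothetical per-layer matrices */
    "void main() {\n"
    "    gl_Layer    = gl_InstanceID;\n"         /* one instance per layer */
    "    gl_Position = layerViewProj[gl_InstanceID] * vec4(position, 1.0);\n"
    "}\n";

/* Drawn with something like glDrawArraysInstanced(GL_TRIANGLES, 0, vertex_count, 6),
 * which is exactly why using instancing for its normal purpose at the same time
 * gets awkward. */
```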

Given this, the valid use cases for GS’s would seem to consist of just generating per-primitive parameters (as seen in the edge distance from Humus’s site).

[QUOTE=Alfonse Reinheart;1263862]The only use case I can come up with for GS’s, one that can’t be solved with either AMD_vertex_shader_layer/viewport_index (again, assuming it could work with a TES) or NV_geometry_shader_passthrough, is rendering geometry to cubemaps. That is, projecting each primitive onto the six faces of a cube.

The benefit of a GS here is GS instancing, which allows multiple instances of a GS to operate on the same input primitive. There might be a way to stick similar instancing functionality into the VS or TES, but that would really depend on the hardware.

No. Deprecation and removal does not work. And they’re not going to do it again.[/QUOTE]

Sure sparked an interesting discussion.
Humus’s use of Geometry Shader is very useful.

For the record, I wasn’t actually going to suggest removing geometry shaders.

Where can I find out what

DX11 emulation features: for easier porting of applications between OpenGL and Direct3D.
is about, exactly?

https://www.opengl.org/sdk/docs/man/html/glBlendFuncSeparate.xhtml states that the parameter “srcRGB” “Specifies how the red, green, and blue blending factors are computed.” However, I do think the correct wording should be “Specifies how the red, green, and blue source blending factors are computed.”
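That reading matches how the call is actually laid out: the first two parameters are the source and destination RGB factors, the last two the source and destination alpha factors. Purely as an illustration (a common straight-alpha setup, not taken from the manual page):

```c
#include <GL/glew.h>   /* or any GL loader; glBlendFuncSeparate is core since GL 1.4 */

/* The parameter under discussion (srcRGB) controls the *source* RGB factor only. */
static void setup_blending(void)
{
    glEnable(GL_BLEND);
    glBlendFuncSeparate(GL_SRC_ALPHA,            /* srcRGB:   source RGB factor        */
                        GL_ONE_MINUS_SRC_ALPHA,  /* dstRGB:   destination RGB factor   */
                        GL_ONE,                  /* srcAlpha: source alpha factor      */
                        GL_ONE_MINUS_SRC_ALPHA); /* dstAlpha: destination alpha factor */
}
```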

Please post all bugs in our bug tracker: https://www.khronos.org/bugzilla/

Hi, I’m new to the forum but have been coding for over thirty years; I started on BASIC and moved through MS-DOS and VGA etc., then on to Windows.

I have a couple of questions:

  1. With the ‘new name’ thread, will 4.5 be the last version of OpenGL?

  2. I currently code using C++ and I’m wading through OpenGL 1.x, which I understand is deprecated. Will support for the earlier OpenGL APIs ever be dropped from OpenGL?

[QUOTE=EdzUp01;1280040]
2) I currently code using C++ and I’m wading through OpenGL 1.x, which I understand is deprecated. Will support for the earlier OpenGL APIs ever be dropped from OpenGL?[/QUOTE]

The Core profile of OpenGL drops support for all the deprecated 1.x and 2.x features. Some vendors support the Compatibility profile, which allows you to use the deprecated features. If you don’t want any of the deprecated features, you need to request a core profile using glXCreateContextAttribsARB() or wglCreateContextAttribsARB(), with the CONTEXT_PROFILE_MASK_ARB attribute set to CONTEXT_CORE_PROFILE_BIT_ARB.
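On Windows that request looks roughly like this (a sketch only; the GLX path is analogous with glXCreateContextAttribsARB and the GLX_ spellings of the same tokens, and all error handling and extension checks are omitted):

```c
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* WGL_CONTEXT_* tokens and the function pointer typedef */

/* A temporary context is assumed to already be current, since wglGetProcAddress
 * only works once some context exists. */
static HGLRC create_core_context(HDC dc)
{
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 4,
        WGL_CONTEXT_MINOR_VERSION_ARB, 5,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };

    PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
        (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

    return wglCreateContextAttribsARB(dc, NULL, attribs);
}
```

If the driver can’t give you a 4.5 core context, the call returns NULL, so real code would fall back to a lower version or to the plain wglCreateContext path.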

So basically we also got to move to OpenGL 4+?

My laptop only supports 2.1 at present.

[QUOTE=EdzUp01;1280052]So basically we also got to move to OpenGL 4+?

My laptop only supports 2.1 at present.[/QUOTE]

Then use that. Nobody’s forcing you to use OpenGL 4.x, nor is anyone forcing you to use the core profile.

Remember: OpenGL is just a document, a specification. It’s not a library. Your implementation is a library. Whether it is being supported is up to whoever it is that writes your implementation.

[QUOTE=Alfonse Reinheart;1280053]Then use that. Nobody’s forcing you to use OpenGL 4.x, nor is anyone forcing you to use the core profile.

Remember: OpenGL is just a document, a specification. It’s not a library. Your implementation is a library. Whether it is being supported is up to whoever it is that writes your implementation.[/QUOTE]

Ah OK, it’s the ‘it’s deprecated by some vendors’ part that sort of rings alarm bells that some graphics drivers may have trouble running software written with old GL versions. If it works fine and won’t be killed off in the future, all the better :slight_smile:

[QUOTE=EdzUp01;1280052]So basically we also got to move to OpenGL 4+?

My laptop only supports 2.1 at present.[/QUOTE]
Maybe it’s time to buy a new laptop? Technology improves over time.