Simplify

Originally posted by Korval:
What if the material and effect used for any particular mesh region is constantly changing? The light positions are always in flux. The number of lights changes, so this will require, at least, swapping programs. Many of your state parameters are non-constant, so you’re going to have to update them every frame. And even the constant ones will change depending on changing shaders.
If you need a new effect, you have to change state, no matter what. Display lists (in the absence of shader objects) don’t make this any harder; on the contrary, they make it more efficient. To cut this short, I’m with the “hashin’n’caching JIT” crowd. The display lists act as the cache’s data containers.
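
A minimal sketch of what I mean (StateBlock, cache_lookup, cache_insert and apply_state are hypothetical helpers; the point is that a compiled state combination is built once and replayed via glCallList):

GLuint get_state_list(const StateBlock *sb)
{
    GLuint list = cache_lookup(sb);      /* hypothetical hash-table lookup */
    if (list == 0) {
        list = glGenLists(1);
        glNewList(list, GL_COMPILE);     /* compile this state combination once */
        apply_state(sb);                 /* plain glEnable/glBindTexture/etc. calls */
        glEndList();
        cache_insert(sb, list);          /* remember it for next time */
    }
    return list;                         /* per draw: glCallList(get_state_list(&sb)); */
}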

I will generally not try to cache state that is continuous in nature, such as matrices, texture environment constants (or ARB_fp program environment constants for that matter). That would be silly and I didn’t mean to advocate it.

Also, display lists are not known for being particularly fast when dealing with state. Some state works in hardware, and some doesn’t. You don’t know which is which, and setting the wrong ones could lead to a massive performance loss.
I don’t know what you mean. Some things will not be in hardware, right. But how can changing those states ‘directly’ be any more efficient than with display lists?

This is a potential hazard for geometry in display lists (as in “geometry in video memory”) on cards that don’t accelerate vertex shaders: the driver might require an additional system-memory copy of the geometry or, worse, might even need to pull it back out of graphics memory.

Pure state is a non-issue as far as I can see.

In addition, glslang has this somewhat annoying feature of actually having the program object store state information. So, effectively, there’s your display list.
I’ll have to read up on this again, I think. Is this similar to ARB_fp’s program local constants, or does it involve ‘real’ GL state?

Originally posted by zeckensack:
[quote]Originally posted by al_bob:
Some (most?) CAD apps use stippling.[/quote]
Why can’t they just use texturing and alpha test, like everyone else?

A few thoughts: stippling is guaranteed to be available on all implementations, can be used as a method of doing order-independent transparency (hacky though it is), and it performs MUCH better than alpha blending on some low-end hardware still out in the field.

Until we have a nice way of doing order-independent transparency with reasonable performance for models containing several million triangles per frame, stippling will still have uses for that purpose, though it is low on the quality scale.
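
For reference, the screen-door transparency mentioned above is nearly free to set up in fixed function; a minimal sketch using a 50% checkerboard mask (the pattern values are mine):

/* glPolygonStipple takes a 32x32-bit mask: 128 bytes, 4 bytes per row */
GLubyte checker[128];
int i;
for (i = 0; i < 128; ++i)
    checker[i] = ((i / 4) & 1) ? 0xAA : 0x55;   /* alternate rows -> checkerboard */
glPolygonStipple(checker);
glEnable(GL_POLYGON_STIPPLE);   /* ~50% coverage: no blending, no depth sorting */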

Dotted lines and such are used quite a bit in CAD, molecular visualization, etc.

Now, since we’re really talking OGL2, yeah, I could see getting rid of stippling as long as we have a way of implementing the same thing in fragment shaders via fragment kill, via texturing, or some other similar method.

Wow, I am surprised… it looks like “stippling” has some enthusiasts nowadays… but correct me if I’m wrong: can’t stippling be emulated using alpha? Imagine a dotted line… it can be simulated using a 1D texture with alpha, that is, order-independent transparency using only 1 bit of transparency… so what is the problem? Kill stippling!

Seriously, read the previous posts. The stipple operation happens in pixel coordinate space, not in texel coordinate space. The results are very different. Without recalculating the texture coordinates at each vertex based on the projected screen coordinates, you can’t even get close to the same results. You still can’t get the same results because the texture coordinate interpolation is perspective-correct.

You’d need both a vertex program and a non-trivial fragment program to do something that the hardware can do very, very trivially. I guess I just don’t understand what the perceived benefit is to removing this particular functionality.

Hello,

Automatic texture coordinate generation is very useful and can reduce the amount of bandwidth needed by not having to send the texture coordinates. Since fill rate is a huge problem, I don’t think we should consider dropping this but instead start really taking advantage of it.

Ben

The problem is this. Stippling is useful for one thing: cheap alpha. That’s the reason for its existence. Stipple itself isn’t a photorealistic effect. It isn’t even a terribly useful non-photorealistic “stylistic” effect.

We don’t need “cheap” alpha anymore; we have the real thing. We don’t need what stipple offers. And, as it has been pointed out, a fragment program can do stippling just fine. No need to keep around duplicate functionality.
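
For illustration, such a fragment program could look roughly like this in ARB_fragment_program terms (a sketch, not a drop-in replacement: it assumes the 16-bit pattern is stored in a 16-texel 1D alpha texture with GL_REPEAT wrapping on unit 0, and indexing by window-space x only approximates the real per-fragment pattern advance along the line; includes and error checking omitted):

static const char *stipple_fp =
    "!!ARBfp1.0\n"
    "TEMP coord, texel;\n"
    "MUL coord.x, fragment.position.x, 0.0625;\n"   /* window x / 16 */
    "TEX texel, coord, texture[0], 1D;\n"           /* fetch the pattern bit */
    "SUB texel.a, texel.a, 0.5;\n"
    "KIL texel.a;\n"                                /* kill the fragment if alpha < 0.5 */
    "MOV result.color, fragment.color;\n"
    "END\n";

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(stipple_fp), stipple_fp);
glEnable(GL_FRAGMENT_PROGRAM_ARB);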

Originally posted by idr:
Seriously, read the previous posts. The stipple operation happens in pixel coordinate space, not in texel coordinate space. The results are very different. Without recalculating the texture coordinates at each vertex based on the projected screen coordinates, you can’t even get close to the same results.
Exactly. Stippling is too complex on the vertex side of things. Stippling requires accumulation of an offset, which vertex programs can’t do. And they can’t do it for a reason:
(56) Should writes to program environment or local parameters during a
vertex program be supported?

  RESOLVED.  No.  Writes to program parameter registers from within a
  vertex program would require the execution of vertex programs to be
  serialized with respect to each other.  This would create a severe
  implementation penalty for pipelined or parallel vertex program
  execution implementations.

You still can’t get the same results because the texture coordinate interpolation is perspective-correct.
Now that’s much easier. Do an explicit perspective divide in the vertex program and perspective correction will be gone.
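
A sketch of that divide in ARB_vertex_program terms (other outputs elided; note that forcing w to 1 also changes how clipping behaves for vertices behind the eye, so this is illustration rather than a robust solution):

static const char *noperspective_vp =
    "!!ARBvp1.0\n"
    "TEMP p, rw;\n"
    "DP4 p.x, state.matrix.mvp.row[0], vertex.position;\n"
    "DP4 p.y, state.matrix.mvp.row[1], vertex.position;\n"
    "DP4 p.z, state.matrix.mvp.row[2], vertex.position;\n"
    "DP4 p.w, state.matrix.mvp.row[3], vertex.position;\n"
    "RCP rw.w, p.w;\n"
    "MUL p.xyz, p, rw.w;\n"     /* divide x, y, z by w ... */
    "MOV p.w, 1.0;\n"           /* ... and force w to 1 */
    "MOV result.position, p;\n"
    "MOV result.texcoord[0], vertex.texcoord[0];\n"   /* now interpolated linearly in screen space */
    "END\n";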

You’d need both a vertex program and a non-trivial fragment program to do something that the hardware can do very, very trivially.
Fixed-function transform hardware is a thing of the past anyway. And why am I hearing “fragment program” here? 1D decal texture plus alpha test. Even a Rage Pro can do it!
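
That path is plain OpenGL 1.x; a sketch (stipple_tex is assumed to have been created with glGenTextures, and the screen-space-derived texture coordinates still have to be computed per vertex, which is the hard part discussed above):

/* 16-texel 1D alpha texture: one texel per stipple bit, alpha 0 or 255 */
GLubyte pattern[16];
/* ... fill pattern[] from the 16 stipple bits ... */
glBindTexture(GL_TEXTURE_1D, stipple_tex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA, 16, 0, GL_ALPHA, GL_UNSIGNED_BYTE, pattern);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glEnable(GL_TEXTURE_1D);
glAlphaFunc(GL_GREATER, 0.5f);   /* pass only the "on" bits */
glEnable(GL_ALPHA_TEST);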

I guess I just don’t understand what the perceived benefit is to removing this particular functionality.
It’s an unnecessary burden. If you think it’s hard to emulate the functionality, you’re right. That’s exactly why you should feel the pain, instead of the implementors, if you want to use it.

It’s an unnecessary burden. If you think it’s hard to emulate the functionality, you’re right. That’s exactly why you should feel the pain, instead of the implementors, if you want to use it.
What kind of logic is that? I don’t hear driver developers complaining about stippling (or the fixed-function pipeline, for that matter). I only hear a few programmers who use GL for games complaining because it’s not elite enough.

Originally posted by Aaron:
What kind of logic is that?

Quite reasonable logic, at least as I understand it.

First argument:

  • NV/ATI/etc have a finite number of developers.
  • Each of those developers has a finite amount of time.
  • Time spent on stippling and other such “marginal” features is time that could have been spent implementing, debugging and tuning the truly fundamental and generic stuff, like uberbuffers and glslang.

Second argument:

  • There’s an increasing, and somewhat justified, concern that defections to D3D are partly motivated by a perception that GL has become too much of a mess of APIs and extensions.
  • Many of these features/extensions address very niche requirements that could be met by more generic APIs.
  • Other things being equal, simpler is better.
  • A thought-experiment: if feature X were not already in GL, would a proposal to add it be accepted? For stippling, almost certainly not.

Third argument:

  • No IHV is going to devote hardware to stippling.
  • Therefore, if stippling is implemented it’ll be on top of the programmable pipeline APIs.
  • If all IHVs are doing this, it’s duplicated effort.
  • Such efforts would be better invested in an (open source?) utility library than in the core spec.

Overall, I’d agree with the OP that at some point GL needs to simplify or die. I’ve always thought of the “ideal” GL as being the minimal subset of desired functionality that can/will only be implemented effectively by the hardware vendor. Once you start tacking things on “because somebody might find them useful” you’re on a slippery slope toward the endless requests for collision detection etc. that so often grace these pages.

The original 3Dlabs proposals for GL2 had a core or “pure” spec, and a compatibility wrapper that implemented the GL1.x spec in terms of that core. This looked to me like a perfect solution.

Just my 2 verts.

Originally posted by MikeC:
Quite reasonable logic, at least as I understand it.

First argument:

  • NV/ATI/etc have a finite number of developers.
  • Each of those developers has a finite amount of time.
  • Time spent on stippling and other such “marginal” features is time that could have been spent implementing, debugging and tuning the truly fundamental and generic stuff, like uberbuffers and glslang.

This logic is seriously flawed. There are many, many times more application developers than driver developers. Your suggestion implies that it is better to waste N times as many application developers’ time reinventing the wheel than to have driver developers provide a given feature.

Don’t get me wrong, I don’t think OpenGL should take the “kitchen sink” approach to feature inclusion by any means. Features like stippling can always be implemented with wrappers or “punt” code, as various people have mentioned, but the main reason for making them a requirement is so that application developers can count on their availability, and on their generating correct results in actual use. Simplifying OpenGL is a good idea, and stippling is probably only one of many items that could become contentious issues. The problem I see here is mainly one of viewpoint. Game developers have no need for many of the features of OpenGL other than drawing textured triangle meshes (grin). Games are not the only applications out there, however, and apps that need other features should not be ignored.

Forcing all developers to reimplement many of the existing OpenGL functions as programmable shaders is one way to go, but the hardware and drivers available to us right now can’t even emulate the old OpenGL pipeline fully, given the severe instruction limitations they have. I’ll be more likely to accept this viewpoint when it’s clear that these finite resources are large enough to guarantee that I can implement stippling etc. in a programmable shader, in addition to whatever else I need to do. The hardware resources on existing cards are so minimal that I’m not willing to accept claims that we can just do this with shaders, certainly not if you plan to do anything else of interest in programmable shaders besides stippling. I want a guarantee that I can do everything I’m doing now with the fixed-function pipeline, and ALSO still have enough resources to do new things with programmable shading. If we’re going to give up some of these legacy features of OpenGL 1.x, then we’d better get a guarantee that the minimum required programmable shader resources will be quite expansive, so that there’s no question these things can be implemented with shaders.

Following up on my own post…
One thing that would help address concerns about eliminating OpenGL 1.x core functionality in OpenGL 2.x would be an ARB-provided implementation of the complete OpenGL 1.x fixed-function pipeline in the form of a GLSL programmable shader. If such an implementation were made available while the OpenGL 2.x spec is being finalized, and the programmable shader resource constraints issue were fully addressed, then we’d be in a much better position when OpenGL 1.x core features are being axed. It should be a requirement for OpenGL 2.x that this ARB-provided 1.x emulator fit entirely within the minimum required resource limits for OpenGL 2.x programmable shaders, with some resources still to spare, so that one could do roughly what is possible now with the OpenGL 1.5 FF pipeline plus GLSL on existing hardware, maybe a bit more.

Originally posted by tachyon_john:
Your suggestion implies that it is better to waste N times as many application developers’ time reinventing the wheel

Maybe so, but that certainly wasn’t my intent. I was thinking of putting such compatibility features into a standardized, but non-core, utility library, like GLU. Core GL pretty much has to be reimplemented by every IHV. GLU, AFAICS, does not, and the fact that it currently is probably has more to do with history than anything else.

I know a Geforce 3 isn’t exactly a workstation card, but anyway. I’ve run a little benchmark and thought I’d share the results. Geforce 3 Ti200, Athlon XP2400+, PC2100 DDR-SDRAM.

I’ve used a GL_STATIC_DRAW VBO containing one vertex for every pixel in the viewport, 3 floats each. Per frame, I clear the color buffer and render a GL_LINE_STRIP with 640×480 vertices into my 640×480 viewport.

This takes 16ms, equivalent to a rather respectable transform rate of just below 20M verts/s. This is without vsync. I promise.
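
For reference, the setup was roughly this (a reconstruction; names are mine, using the GL 1.5 VBO entry points):

#define W 640
#define H 480
static GLfloat verts[W * H * 3];   /* one vertex per pixel, 3 floats each */
GLuint vbo;
/* ... fill verts[] ... */
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
glEnableClientState(GL_VERTEX_ARRAY);

/* per frame */
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_LINE_STRIP, 0, W * H);
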
I then added these two lines

glLineStipple(1, 0x9AFC);   /* repeat factor 1, pattern 1001 1010 1111 1100 */
glEnable(GL_LINE_STIPPLE);

… and now it takes 620ms. 500k verts/s. Hmmm.
Both tests produced the expected visual result btw.

While reading this, somewhere along the way I thought the only way to satisfy both groups might be splitting OpenGL into a basic “core” and having extended feature sets for “professional” (i.e. CAD etc.) use.

That sounded like gamer cards and drivers could concentrate on a streamlined version (read: faster, less complex and therefore hopefully less buggy) while pro cards could add the rest. But looking at how it is done today, I don’t think it would work. They would still work on the whole thing and only disable some features.

The idea of a core, and adding the rest by making it use that core, sounds better (drivers could still be streamlined).

But you already found the problem: what’s to go and what should stay? I used stipple pretty much once and have evaded the use of display lists so far. But would someone making heavy use of them like to know that now everything is done by a library in software, probably setting up hidden vertex buffers and programs?

At the same time, I think exactly that is the advantage of OpenGL. It’s much easier to learn: you start with immediate mode and hand-feeding vertices, then switch to lists, and at a later point can check out indexed vertex buffers. Also: if immediate mode were removed, what kind of annoying pain would it be to create a VB for every single quad we need somewhere (text rendering, GUI elements, etc.)?
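
That is, the convenience being defended here is e.g. a single textured GUI quad (x, y, w, h picked arbitrarily):

GLfloat x = 16.0f, y = 16.0f, w = 64.0f, h = 16.0f;
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();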

I guess the driver developers would be in a better position to see what should be removed to make drivers less of a mess, and what can stay in because it’s not causing much extra work.

Originally posted by tachyon_john:
stippling is guaranteed to be available on all implementations.

According to the spec. But not everyone follows the spec. Case in point: try line/polygon stipple on any Macintosh with a Radeon 9600/9700/9800. Line stipple is ignored. Polygon stipple results in no fragments drawn at all.

According to arb-secretary, line/polygon stipple are not part of the core ‘must pass’ GL conformance tests.

[This message has been edited by arekkusu (edited 02-17-2004).]

At the same time, I think exactly that is the advantage of OpenGL. It’s much easier to learn: you start with immediate mode and hand-feeding vertices, then switch to lists, and at a later point can check out indexed vertex buffers. Also: if immediate mode were removed, what kind of annoying pain would it be to create a VB for every single quad we need somewhere (text rendering, GUI elements, etc.)?
I just wanted to say that I think Jared makes some excellent points here.

Here’s a possible compromise on the stippling issue. Make stippling undefined (or define it to be ignored) if vertex or fragment programs are used. That way, drivers wouldn’t have to run vp or fp in software just because stippling was in use.

Originally posted by arekkusu:
According to the spec. But not everyone follows the spec. Case in point: try line/polygon stipple on any Macintosh with a Radeon 9600/9700/9800. Line stipple is ignored. Polygon stipple results in no fragments drawn at all.

According to arb-secretary, line/polygon stipple are not part of the core ‘must pass’ GL conformance tests.

That’s quite interesting.
I thought the conformance tests included everything, and I had assumed vendors are allowed to have some bugs. Of course, you can’t demand perfection.

It’s quite annoying if a program simply doesn’t work like that. They should run in software mode if they haven’t implemented the feature.

I have seen games running on the Mac, and one was making use of fog. There were plenty of graphical glitches, especially on distant objects.