OpenGL 3 Updates

I don’t have much to say about the arguments over alpha testing, texture formats, etc. (though I will say that I think glVertex3f() and crew need to go away; there’s no point in having them anymore), but I would like to give a big thank-you to the guys at Khronos for not leaving us in the dark any longer! Even if it has been delayed, it’s great to know that it’s only because you’re making it better!

[quote=“ector”]
Every major game out there for the last 5 years. The performance increase it offers over uncompressed textures is pretty dramatic, and as a bonus it consumes less memory. The slight quality loss is very often worth it.
[/quote]

If you want the performance increase and want to save graphics memory, you can use a generic compressed format and let the card handle the details. There is no need to explicitly require S3TC support.
Recent research has produced compression schemes that give better quality than S3TC without needing more memory (ETC2 is an example).
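A minimal sketch of that approach, assuming a bound 2D texture and image data from your own loader (the tokens are from ARB_texture_compression, core since OpenGL 1.3):

```c
#include <GL/gl.h>

/* Upload with a *generic* compressed internal format; the driver
 * picks the actual scheme (S3TC, or anything better it supports). */
void upload_generic_compressed(const void *pixels, int width, int height)
{
    GLint was_compressed = 0, chosen_format = 0;

    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* Ask the driver whether it compressed, and which format it chose. */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_COMPRESSED, &was_compressed);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_INTERNAL_FORMAT, &chosen_format);
}
```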

Philipp

S3TC is a strange requirement because, as was said previously, the format is patented. To me, the decision sounds more like politics getting in the way. Mesa3D will probably work around the issue by substituting a different compression scheme for S3TC behind the scenes.

As PkK said, the best solution would be to support compressed formats and let the driver choose the best implementation (S3TC, ETC2 etc), according to the hardware’s capabilities.

The problem with leaving double in, however, is that it makes it look like the hardware can do it, and right now, AFAIK, no hardware can (not even the G92-based chips, and probably not the new AMD ones either when they appear). That puts us a good year away from it being useful, and in the meantime it confuses matters for an API which is meant to be closer to how the hardware works.

Double will be back, I’m sure, just not for a year at least, during which time we are due at least two more updates to GL (Longs Peak Reloaded and Mt. Evans), which is plenty of time to add it back in.

Will EXT_framebuffer_object be core?

FBO-like functionality is at the core of rendering in GL3.

As PkK said, the best solution would be to support compressed formats and let the driver choose the best implementation (S3TC, ETC2 etc), according to the hardware’s capabilities.

That would be a fatal choice. It would mean you would have to ship your app with all textures uncompressed and compress them at load/install time.

Do you have any idea how long it takes to do quality S3TC compression? On our current app it would take hours. It is second only to shader pre-compiling (but that is another discussion).
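For contrast, uploading data that was compressed offline is trivial; a sketch, where the DXT5 blob and its size are assumed to come from a DDS-style file written by the asset pipeline (EXT_texture_compression_s3tc):

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* No compression happens at load time: the blocks were baked offline. */
void upload_precompressed(const void *blob, GLsizei blob_size,
                          int width, int height)
{
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                           width, height, 0, blob_size, blob);
}
```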

Also, each compression scheme has different trade-offs; you would have to specify things like “is this a color map?”, “is this a normal map?”, “is this a height map?” to enable the driver to select an appropriate compression format (see the sketch below).
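A hypothetical sketch of the kind of per-asset decision meant here; the enum and the mappings are illustrative, not from any real engine:

```c
#include <GL/gl.h>
#include <GL/glext.h>

typedef enum { ASSET_COLOR_MAP, ASSET_NORMAL_MAP, ASSET_HEIGHT_MAP } AssetKind;

/* Only the artist/engine knows this; a driver compressing
 * automatically would have to guess. */
GLenum pick_internal_format(AssetKind kind, int has_alpha)
{
    switch (kind) {
    case ASSET_COLOR_MAP:
        /* DXT1 for opaque color, DXT5 when the alpha channel matters. */
        return has_alpha ? GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
                         : GL_COMPRESSED_RGB_S3TC_DXT1_EXT;
    case ASSET_NORMAL_MAP:
        /* Block-compression artifacts distort normals, so keep these
         * uncompressed (or use a dedicated two-channel scheme). */
        return GL_RGBA8;
    case ASSET_HEIGHT_MAP:
        /* Height data usually wants full per-texel precision. */
        return GL_LUMINANCE16;
    }
    return GL_RGBA8;
}
```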

Good point.

I am not sure what you are trying to say. Decisions like selecting the correct format for normal maps, height maps, font textures, etc. are simply unavoidable.

Edit: Your name is breaking forum quotes :wink:

I like alpha-tests and already feared they might be removed. Good to know it will stay.

What I’d like to know more about is the “default state” as mentioned above. It said you can “render a simple polygon without specifying vertex-buffers…”. I would like to know more about what the default state will allow. E.g. is simple texturing (not multi-texturing) available too? (I doubt it.)

Also, I’d like to know everything about context creation (multisampling, adaptive antialiasing, etc.) and how the “drawable” interacts with the window system. Using FBOs all the time and then only “presenting” the result to the window system is what I would like to do in the future.
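Today that pattern looks roughly like this sketch (EXT_framebuffer_object plus EXT_framebuffer_blit; the FBO and its attachments are assumed to be set up elsewhere). How GL3 will expose it is exactly what I hope the newsletter explains:

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* All scene rendering went into "fbo"; copy the result to the
 * window-system drawable as the final "present" step. */
void present_fbo(GLuint fbo, int width, int height)
{
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, 0);
    glBlitFramebufferEXT(0, 0, width, height,
                         0, 0, width, height,
                         GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}
```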

And I’d like to know what general information one can query from the system: for example graphics memory, vendor, renderer and, of course, whether the extension system has been modified.
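What we can query today is basically just the identification strings; note there is no standard way to query graphics memory at all:

```c
#include <stdio.h>
#include <GL/gl.h>

void print_gl_info(void)
{
    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));
    /* The classic space-separated extension list; whether GL3 keeps
     * this mechanism is one of the open questions. */
    printf("Extensions: %s\n", (const char *)glGetString(GL_EXTENSIONS));
}
```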

A list of which current extensions will be “core” in 3.0 would be nice; in general, a “minimal requirements” listing.

For the PR department:
I think that for OpenGL 3 to get a bit more excitement outside of these forums, it would be great if there were a new logo (maybe the old one, but revamped). It should be available in several formats and at low to very high resolutions, for people to use on websites, in games (option menus…), etc.

A small video featuring an animated OpenGL logo would be very cool, so that game developers can play it at program startup (like the nVidia, ATI, etc. logos).

This would allow people to show off that they are using OpenGL 3, making it more visible to the public.

Jan.

This too is what interests me the most. I also think that framebuffer creation and handling should be passed to the GL, and only the result blitted onto the OS surface.

[quote=“Jan”]
For the PR department: I think that for OpenGL 3 to get a bit more excitement outside of these forums, it would be great if there were a new logo… A small video featuring an animated OpenGL logo would be very cool…
[/quote]

great idea! :smiley:

A small video featuring an animated OpenGL logo

Maybe we should start a contest for a GL3 static and animated logo in a separate thread, then?
Now let me think… should we present OpenGL as something bright and futuristic (bright future?) or something dark and powerful… :wink:

What i’d like to know more about is the “default state” as mentioned above.

I don’t think the whole “default state” thing exists to be usable. That is, you aren’t expected to actually use it in any real application. It is there to make sure that the context is viable from GL startup.

The only piece of default state that you might find useful is the default framebuffer object.

k_szczech: Yes, the idea of making it into a contest came to my mind too. It depends on the resources the ARB can dedicate to such a task. Though it might be difficult to find skillful artists among programmers…

Korval: I agree, it will certainly be that way. But I’d still like to hear the details from the ARB in the next pipeline newsletter.

Jan.

[quote]
I am not sure what you are trying to say. Decisions like selecting the correct format for normal maps, height maps, font textures, etc. are simply unavoidable.
[/quote]

I think what he wanted to say is that the compression algorithm may vary depending on data interpretation.

The DirectX way looks great on paper, but real D3D games resort to checking card/vendor IDs and using them to pick graphics settings from a prepackaged database. Hell, even Microsoft’s own games do it, so we can take that as a testament to how “future-proof” the DirectX way is.

[quote]
I think what he wanted to say is that the compression algorithm may vary depending on data interpretation.
[/quote]

Yes, that’s why decisions like this are unavoidable :slight_smile: At some point, you’ll have to decide which compression format to use (if any) for each asset. I don’t see how S3TC (as a baseline format) will affect that decision.

DirectX way looks great on paper

No, it doesn’t, as I pointed out in the next paragraph.

Further, I was making a joke :wink:

@Stephen A: S3TC basically specifies the decompression, not the compression algorithm. Compression quality depends on the compressor (and that may vary depending on the data type). I would gladly give a real-world example, but my experience with S3TC is rather limited. But imagine that the quality of R and G is more important to you than the quality of B. You can actually write a compression algorithm that works this way, but you can’t count on automatic driver compression being your best choice.
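As a sketch of that idea: an offline compressor could pick the candidate block colors that minimize a channel-weighted error like the one below. The weights are made up for illustration; a driver compressing automatically has no way to know them.

```c
/* Weighted squared error between two RGB texels; R and G count for
 * more than B, reflecting how this particular texture is used. */
static float weighted_error(const unsigned char a[3], const unsigned char b[3])
{
    static const float w[3] = { 1.0f, 1.0f, 0.25f }; /* r, g, b */
    float err = 0.0f;
    int i;
    for (i = 0; i < 3; ++i) {
        float d = (float)a[i] - (float)b[i];
        err += w[i] * d * d;
    }
    return err;
}
```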

OpenGL 3 will not have support for the GL_DOUBLE token

That’s not a problem, really. Double precision could be added by a future extension.

S3TC is a required texture compression format

I still prefer to pass raw RGBA data and let the driver compress with the best format available (given some hints). With things like CUDA, texture compression can be done very fast, so loading speed won’t suffer too much. One downside: the artists cannot preview the final texture quality… I think this is where S3TC can help.
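For what it’s worth, you can already nudge the driver’s choice a little with ARB_texture_compression’s hint; a sketch, assuming pixel data from your loader:

```c
#include <GL/gl.h>

/* Ask for quality over upload speed when the driver compresses. */
void upload_with_hint(const void *pixels, int width, int height)
{
    glHint(GL_TEXTURE_COMPRESSION_HINT, GL_NICEST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB,
                 width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
```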

16 bit floating point support is now a requirement for textures and renderbuffers. Supporting texture filtering and blending is still optional for these formats.

OK, but please add some function, like DX has, to know whether blending/filtering is supported for a specific texture format. I’m not sure whether OpenGL 3 is going to use a software fallback when something is not supported (I heard not, which is good; better to emit an error, and add a debug layer + emulation layer like DX has).
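The closest thing GL offers today is the proxy-texture mechanism, which only answers “is this format/size combination accepted at all?” and says nothing about filtering or blending, which is exactly the gap here. A sketch (GL_RGBA16F_ARB requires ARB_texture_float):

```c
#include <GL/gl.h>
#include <GL/glext.h>

/* Returns nonzero if the driver accepts this internal format/size.
 * Says nothing about whether filtering or blending work with it. */
int texture_format_supported(GLenum internal_format, int width, int height)
{
    GLint got_width = 0;
    glTexImage2D(GL_PROXY_TEXTURE_2D, 0, internal_format,
                 width, height, 0, GL_RGBA, GL_FLOAT, NULL);
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_2D, 0,
                             GL_TEXTURE_WIDTH, &got_width);
    return got_width != 0; /* width of 0 means the combo was rejected */
}

/* e.g. texture_format_supported(GL_RGBA16F_ARB, 1024, 1024) */
```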

I can’t wait to see the pipeline news!

Ok, but please add some function like DX has to know if blending/filtering is supported for a specific texture format.

Did you actually read the thread? Or anything that’s been written about GL 3.0? We’ve gone over this a lot, and it’s been confirmed by the ARB as to how this will work.

It’s not the same, but everything you can do with alpha test can be done with discard, and vice versa.
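To make the equivalence concrete, a sketch of both paths; the shader string is illustrative GLSL 1.10:

```c
#include <GL/gl.h>

/* Fixed-function path: keep fragments whose alpha exceeds 0.5. */
void enable_alpha_test(void)
{
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);
}

/* Shader path: discard plays the role of the alpha test. */
static const char *cutout_frag_src =
    "uniform sampler2D tex;\n"
    "void main() {\n"
    "    vec4 c = texture2D(tex, gl_TexCoord[0].xy);\n"
    "    if (c.a <= 0.5) discard; /* same cut as GL_GREATER, 0.5 */\n"
    "    gl_FragColor = c;\n"
    "}\n";
```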

What depends on the GPU? Fragment kills are well defined.

It’s still done in hardware, but it wouldn’t surprise me if it were removed in, say, DX11- or DX12-class hardware.

And in fact, you can already do this in Compressonator: you can specify the weighting between the channels. I’ve only needed that once, but boosting the importance of blue removed the banding in a blue gradient for one texture.