Things that need to be ported to the new object model ASAP

Some of these are probably already core in 3.0; others may be planned for the LP refresh or ME. Regardless, I'm listing all the extensions that I think should really be made core features.

[ul]
[li]FBOs: I believe this functionality is in 3.0, but I'm listing it anyway.[/li]
[li]Framebuffer blitting: A quick test shows that framebuffer blitting is nearly 5x faster than drawing a fullscreen quad with an FBO texture. NVIDIA supports the current extension on at least the 6600 series of cards, maybe even earlier ones. (See the code sketch just after this list.)[/li]
[li]Multisampled FBOs: Control over when AA is resolved, as well as the ability to change the AA level dynamically without destroying the context.[/li]
[li]Point Sprites: Useful for particle systems, even if they are made obsolete by geometry shaders. They're easier to use than writing a point-sprite geometry shader, too.[/li]
[li]Texture Rectangles: Integer coordinates are needed to make textures into arrays of data. That's more important now that there's no fixed-function transformation. Especially useful when using instancing (see below).[/li]
[li]Bindable uniforms: These appear to be part of the new object model. I'm listing them for completeness.[/li]
[li]Instancing: Even if the driver has to send all the instances to the GPU, avoiding a jump or ten into kernel mode will improve application performance. I believe this is slated for ME.[/li]
[li]Geometry shaders: Slated for ME, just listed for completeness.[/li]
[li]Texture Buffers: I'd actually like to see this generalized to data buffers. Useful for fancy effects and GPGPU. Probably going to be in ME or sooner (it exists in DX10).[/li]
[li]Anisotropic Filtering: Hardware has supported this for a long time - why isn't it core yet?[/li]
[li]Floating-point formats: Support for textures and buffer objects with full FP support: 16- and 32-bit 1-, 2-, 3-, and 4-component, shared exponent, etc.[/li]
[li]Un-clamped pipeline: This goes with FP textures - removing the clamping at different stages of the pipeline. Ideally, clamping could be controlled separately at each stage (load to GPU, end of VS, end of GS, end of FS/draw to buffer).[/li]
[li]Transform feedback: Makes GPU-based skeletal animation a lot easier. Also good for procedural generation, physics simulation, and other GPGPU stuff.[/li]
[/ul]
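For reference, the blit itself is only a few calls under EXT_framebuffer_blit. This is just a sketch: it assumes an extension loader such as GLEW, and that srcFBO and dstFBO are already complete framebuffer objects of the given size.

[code]
#include <GL/glew.h>  /* or any loader that exposes EXT_framebuffer_blit */

/* Copy the colour contents of srcFBO into dstFBO without drawing a
 * fullscreen quad. Both FBOs are assumed complete and width x height. */
static void blit_fbo(GLuint srcFBO, GLuint dstFBO, GLsizei width, GLsizei height)
{
    glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, srcFBO);
    glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, dstFBO);
    glBlitFramebufferEXT(0, 0, width, height,   /* source rectangle      */
                         0, 0, width, height,   /* destination rectangle */
                         GL_COLOR_BUFFER_BIT,   /* which buffers to copy */
                         GL_NEAREST);           /* must be NEAREST for depth/stencil */
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);  /* back to the window framebuffer */
}
[/code]

The same call is also what resolves a multisampled FBO down to a single-sample one under EXT_framebuffer_multisample, which ties into the multisampled-FBO item above.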
Why this list, you may ask? Well, these are the features not currently in GL2 core that I’ll be using in my engine (which is on hold until LP comes out - I’ll add new features as they get ported to the new object model). The other reason is that these are all features that games are likely to use, so getting them into the object model ASAP may help more developers make up their minds between DX and GL. It doesn’t matter that GL can run on every platform known to man if it doesn’t support the features that developers want.

I think the feedback system and point sprites are going to be eradicated in LP/ME.

Originally posted by santyhamer:
I think the feedback system and point sprites are going to be eradicated in LP/ME.
Transform Feedback is the ability to capture output from a vertex/geometry shader into a VBO (render to VBO).

http://opengl.org/registry/specs/NV/transform_feedback.txt

There was an older feature known as feedback mode (glRenderMode(GL_FEEDBACK)) that returned the transformed primitives to an application-supplied buffer instead of rasterizing them - not really used much nowadays, if at all. That old mode is gone from GL3. Transform feedback is tied to Mount Evans (post-GL3).
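For anyone who hasn't used the NV extension yet, the whole thing is only a handful of calls. This is just a sketch: the varying name "skinnedPos" is made up for the example, the vertex shader is assumed to actually write it (or to have had it forced active with glActiveVaryingNV before linking), and feedbackVBO is assumed to already be big enough for vertexCount vec4s.

[code]
#include <GL/glew.h>  /* or any loader that exposes NV_transform_feedback */

/* Run the vertex shader over vertexCount vertices and capture the varying
 * "skinnedPos" into feedbackVBO, without rasterizing anything. */
static void capture_skinned_positions(GLuint prog, GLuint feedbackVBO,
                                      GLsizei vertexCount)
{
    /* Tell GL which varying to record and how to pack it. */
    GLint loc = glGetVaryingLocationNV(prog, "skinnedPos");
    glTransformFeedbackVaryingsNV(prog, 1, &loc, GL_INTERLEAVED_ATTRIBS_NV);

    /* Bind the destination VBO to the transform-feedback binding point. */
    glBindBufferBaseNV(GL_TRANSFORM_FEEDBACK_BUFFER_NV, 0, feedbackVBO);

    glUseProgram(prog);
    glEnable(GL_RASTERIZER_DISCARD_NV);   /* only the captured vertices are wanted */
    glBeginTransformFeedbackNV(GL_POINTS);
    glDrawArrays(GL_POINTS, 0, vertexCount);
    glEndTransformFeedbackNV();
    glDisable(GL_RASTERIZER_DISCARD_NV);

    /* feedbackVBO can now be bound as a regular vertex attribute source
     * for the actual rendering passes. */
}
[/code]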

Originally posted by PaladinOfKaos:
[li]Point Sprites: Useful for particle systems, even if they are made obsolete by geometry shaders. They're easier to use than writing a point-sprite geometry shader, too.[/li]
Point sprites have their fair share of problems, like clipping, texturing, and accurate point sizes.
So yes, sure, it's a bit harder to write a small geometry shader, but it's definitely worth it.

Originally posted by zeoverlord:
[quote]Originally posted by PaladinOfKaos:
[li]Point Sprites: Useful for particle systems, even if they are made obsolete by geometry shaders. They're easier to use than writing a point-sprite geometry shader, too.[/li]
Point sprites have their fair share of problems, like clipping, texturing, and accurate point sizes.
So yes, sure, it's a bit harder to write a small geometry shader, but it's definitely worth it.
[/QUOTE]
I've never had problems with them myself… Of course, I've never had any problems with ATI's OGL implementations either, so I guess I'm just lucky when it comes to graphics stuff. If the issues with them are hard to fix, though, then dropping them is probably better.

If geometry shaders are supported, then point sprites should not be supported. This is very simple: never add an extra feature that can easily be done with the core API.

It's too late for threads like this anyway; the spec should already be finalized and just waiting to be published.

Originally posted by Zengar:

It's too late for threads like this anyway; the spec should already be finalized and just waiting to be published.

For GL3, yeah, but LP-reloaded and ME are still works-in-progress.

I’ll concede the point on point-sprites (pun not intended). All in all, it’s about a 10-line geometry shader to implement them, and that includes the #extension bits.
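Something along these lines, for the record (just a sketch under EXT_geometry_shader4; the halfSize uniform and the spriteCoord varying are names made up for the example, and the fragment shader that samples the sprite texture with spriteCoord isn't shown):

[code]
#include <GL/glew.h>  /* or any loader that exposes EXT_geometry_shader4 */

/* Geometry shader: expands each incoming point into a screen-aligned quad.
 * Scaling the offset by c.w keeps the sprite a constant size on screen
 * after the perspective divide. */
static const char *point_sprite_gs =
    "#version 120\n"
    "#extension GL_EXT_geometry_shader4 : enable\n"
    "uniform float halfSize;            // half-size of the sprite in NDC\n"
    "varying out vec2 spriteCoord;      // texture coordinate for the FS\n"
    "void corner(vec4 c, vec2 k) {\n"
    "    spriteCoord = k * 0.5 + 0.5;\n"
    "    gl_Position = c + vec4(k * halfSize * c.w, 0.0, 0.0);\n"
    "    EmitVertex();\n"
    "}\n"
    "void main() {\n"
    "    vec4 c = gl_PositionIn[0];\n"
    "    corner(c, vec2(-1.0, -1.0));\n"
    "    corner(c, vec2( 1.0, -1.0));\n"
    "    corner(c, vec2(-1.0,  1.0));\n"
    "    corner(c, vec2( 1.0,  1.0));\n"
    "    EndPrimitive();\n"
    "}\n";

/* Host-side: the geometry stage's input/output types and maximum emitted
 * vertex count must be set before the program object is linked. */
static void setup_point_sprite_program(GLuint prog)
{
    glProgramParameteriEXT(prog, GL_GEOMETRY_INPUT_TYPE_EXT, GL_POINTS);
    glProgramParameteriEXT(prog, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
    glProgramParameteriEXT(prog, GL_GEOMETRY_VERTICES_OUT_EXT, 4);
    /* ...attach the shaders and glLinkProgram(prog) after this... */
}
[/code]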

Originally posted by PaladinOfKaos:
All in all, it’s about a 10-line geometry shader to implement them, and that includes the #extension bits.
There’s also the instanced rendering path, which would involve using bindable uniforms or texture buffer objects to hold the per-sprite information.
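Roughly like this, assuming EXT_gpu_shader4 (for gl_InstanceID and buffer texture fetches), EXT_draw_instanced, and EXT_texture_buffer_object. Everything here is a sketch: the buffer layout, the spriteCenters name, and the helper function are invented for the example, and program/uniform/attribute setup and billboarding are left out.

[code]
#include <GL/glew.h>  /* loader exposing the three extensions above */

/* Vertex shader: fetch this instance's centre from a texture buffer and
 * offset the quad corner (gl_Vertex) by it. */
static const char *instanced_sprite_vs =
    "#version 120\n"
    "#extension GL_EXT_gpu_shader4 : enable\n"
    "uniform samplerBuffer spriteCenters;  // one RGBA32F texel per sprite\n"
    "void main() {\n"
    "    vec4 center = texelFetchBuffer(spriteCenters, gl_InstanceID);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix\n"
    "                * vec4(gl_Vertex.xyz + center.xyz, 1.0);\n"
    "}\n";

/* Per-frame: upload the centres, expose them as a buffer texture, then draw
 * one 4-vertex strip (a unit quad) per sprite in a single call. */
static void draw_sprites(GLuint spriteDataBO, GLuint spriteDataTex,
                         const GLfloat *centers, GLsizei numSprites)
{
    glBindBuffer(GL_TEXTURE_BUFFER_EXT, spriteDataBO);
    glBufferData(GL_TEXTURE_BUFFER_EXT, numSprites * 4 * sizeof(GLfloat),
                 centers, GL_STREAM_DRAW);

    glBindTexture(GL_TEXTURE_BUFFER_EXT, spriteDataTex);
    glTexBufferEXT(GL_TEXTURE_BUFFER_EXT, GL_RGBA32F_ARB, spriteDataBO);

    /* The currently bound vertex array is assumed to describe a unit quad
     * as a 4-vertex triangle strip. */
    glDrawArraysInstancedEXT(GL_TRIANGLE_STRIP, 0, 4, numSprites);
}
[/code]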

Even if the driver has to send all the instances to the GPU, avoiding a jump or ten into kernel mode will improve application performance. I believe this is slated for ME.
Switching to kernel mode is not a GL problem; GL drivers do most of their work in user space.
It was a D3D problem, which is why instancing appeared in D3D first.

How much benefit does it give to nVidia's GL driver?