GeForce GTX 280... what's in it for us programmers? :)

This is it: the GeForce GTX 280 is announced, the reviews are read, and the real excitement is: “what can we do with it, and how?” :slight_smile: I want OpenGL extensions! nVidia is always fast with these, so I guess they will come soon…

if any!

I mean, it seems that the only new feature is double-precision floating point… Not really exciting for real-time, but I’m sure it could be useful for some people… for scientists… for CUDA.

Moreover, it doesn’t seem that double-precision images are possible… so no rendering to a double-precision framebuffer. It seems that stream-out could be the only way to get double-precision results. I guess double-precision buffers could be available.
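
To be clear about what I mean by stream-out: something along the lines of transform feedback. Here is a rough sketch with today’s float path (GL_EXT_transform_feedback); `prog` and `numVerts` are placeholder names and the EXT entry points are assumed loaded (e.g. via GLEW). Presumably a double-precision path would look the same, just with a wider buffer:

```cpp
// Hedged sketch: capturing shader results into a buffer object ("stream-out")
// with GL_EXT_transform_feedback. Today the captured varying is float; the
// hope is that a double-precision variant would work the same way.
// Assumes a current GL context, loaded EXT entry points, a compiled program
// object 'prog' writing a vec4 varying "result", and a vertex count
// 'numVerts' (both hypothetical names).

const char* varyings[] = { "result" };
glTransformFeedbackVaryingsEXT(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS_EXT);
glLinkProgram(prog); // the varying list must be set before the final link

GLuint tfBuffer;
glGenBuffers(1, &tfBuffer);
glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, tfBuffer);
glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER_EXT,
             numVerts * 4 * sizeof(GLfloat), NULL, GL_STATIC_READ);
glBindBufferBaseEXT(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, tfBuffer);

glUseProgram(prog);
glEnable(GL_RASTERIZER_DISCARD_EXT);   // skip rasterization, keep only the captured data
glBeginTransformFeedbackEXT(GL_POINTS);
glDrawArrays(GL_POINTS, 0, numVerts);
glEndTransformFeedbackEXT();
glDisable(GL_RASTERIZER_DISCARD_EXT);
// The results can now be read back with glGetBufferSubData or by mapping the buffer.
```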

I hope there will be more, let’s see, but if anyone has an extension log from a GeForce GTX 280… please share it in this post ^_^.
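
For anyone willing to post one, here is a minimal sketch of how to dump the list, assuming a GL context is already current (created with GLUT, SDL or whatever you like):

```cpp
// Minimal sketch: print the extension string of the current context,
// one extension per line. Assumes a GL context is already current;
// only core GL 1.x calls are used.
#include <cstdio>
#include <string>
#include <GL/gl.h>

void dumpExtensions()
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!ext)
        return; // no current context, or an error occurred

    std::string list(ext);
    // The extension string is a single space-separated list.
    for (std::string::size_type pos = 0; pos < list.size(); )
    {
        std::string::size_type end = list.find(' ', pos);
        if (end == std::string::npos)
            end = list.size();
        std::printf("%s\n", list.substr(pos, end - pos).c_str());
        pos = end + 1;
    }
}
```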

That is undoubtedly the best part of extensions: getting a new card (and sometimes just a new driver) and watching that extension list scroll by for the first time.

I thought the GTX 280 didn’t support anything new over the G80 core? :eek:

EDIT: Ohh nice, doubles… does this mean we’ll see 64-bit integer textures too??? :smiley:

Great! Now, if they were actually able to implement 32-bit float textures that don’t kill your performance, I might see a point in adding 64-bit support.

Oh, and another thing: render-to-texture that INCLUDES early-Z and early-stencil tests in non-trivial use cases. Now THAT would make their hardware somewhat useful.

The whole GTX 280 thing reminds me of 3dfx: “hey, we don’t need to improve our chips, let’s just put MORE of them on one board!”

I doubt this new chip is any good. It’s just brute force, and that has never worked in this industry.

Jan.

Is it possible to do ANYTHING out of the ordinary that doesn’t kill optimizations such as early-Z etc.? :slight_smile: I hate having to design around that. Too much to remember. :frowning:

Technically speaking, I suspect the GTX 280 doesn’t just bring a new naming scheme and double-precision floating point; it might have gotten rid of the last fixed-function hardware and now does graphics in “software”. That might be why there is little mention of new extensions.
Any NVIDIAn can correct me if I’m way out of line here, but if that is true, it means that as fast as they can create new standards for new extensions, they can implement them, with the only restriction being total processing power.

“it might have gotten rid of the last fixed function hardware”

Do you mean blending operations and maybe filtering? Because those are the only parts I can see becoming programmable. I guess that on Radeon HD cards the blending is already programmable… the hardware is ready, but the extension isn’t.

Maybe the input assembler will become programmable (with DX11) for the tessellation unit, but I don’t think most of the ROPs will ever be (except blending).

I’ve just read in an article in a German gaming magazine (www.gamestar.de) that the GTX 280 partly supports DirectX 10.1. So there should be some new features, but I couldn’t find any more detailed information on that yet.

I think the G80 already partly supports DX10.1, because GL_EXT_draw_buffers2 is, I believe, a DX10.1 feature and it is already supported on the G80.

Actually, it doesn’t. From the spec:

While this extension does provide separate blend enables, it does not
provide separate blend functions or blend equations per color output.

I believe DX10.1 allows separate blend functions/equations per output. :slight_smile:
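
For reference, what EXT_draw_buffers2 does give you today is per-drawbuffer enables (and color masks), roughly like this, assuming the extension is present and an FBO with at least two color attachments is bound:

```cpp
// Sketch of EXT_draw_buffers2 usage: blending can be switched on/off per
// draw buffer, but the blend function/equation stays global.
glEnableIndexedEXT(GL_BLEND, 0);    // blending enabled for draw buffer 0
glDisableIndexedEXT(GL_BLEND, 1);   // blending disabled for draw buffer 1

// Still one function/equation shared by every enabled draw buffer:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);

// Per-drawbuffer color masks are also part of the extension:
glColorMaskIndexedEXT(1, GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);
```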

By “output”, does it mean component? I.e. red, green and blue: basically an independent alpha channel per color component instead of a single one for them all. And if so, does it actually work with textures? By that I mean whether the separate alphas can be specified in the texels, that is, working from textures, not on them.

Yeah, I’m curious now :slight_smile: That functionality could be used for some interesting effects (though not very common ones).

Well yes, but other little things as well, like the rasterizer or texture fetch. It’s not just the currently programmable (or possibly future programmable) things, it’s everything. The G80 line already removed most of it, but I do think this new hardware is just a central processing unit, a load of subprocessors and a fancy memory interface; if they wanted to, they could run Linux on it.
And a design like that has few limitations (in what you can do; speed is a whole other matter).

I’m fairly certain that it means per-MRT-output, not per-component. :slight_smile:

Aw, I really wanted that one :frowning: It shouldn’t be hard to design, though; it would be a matter of repeating some circuitry… No idea how that would impact costs, though. Shouldn’t be much >_> but who knows, especially with shaders in the way.

This is why we need to be given the current framebuffer color as an input into the fragment shader :slight_smile: or, well, maybe a blend shader stage. The possibilities would be endless… not to mention glBlend* would be deprecated. :smiley: (Of course, one could ping-pong between FBOs, but that’s icky :X)
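
For completeness, the icky ping-pong version looks roughly like this; fbo, tex, numPasses and drawFullScreenQuad() are placeholder names, assuming EXT_framebuffer_object and two pre-built color textures:

```cpp
// Hedged sketch of the FBO ping-pong workaround: sample the previous pass's
// color buffer as a texture while rendering into the other one, then swap.
// Assumes both FBOs (fbo[0], fbo[1]) with color textures (tex[0], tex[1])
// are already set up and a suitable shader is bound inside drawFullScreenQuad().
#include <utility> // std::swap

int src = 0, dst = 1;
for (int pass = 0; pass < numPasses; ++pass)
{
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo[dst]); // write here...
    glBindTexture(GL_TEXTURE_2D, tex[src]);             // ...read the last result here
    drawFullScreenQuad();   // the fragment shader does the "blend" itself
    std::swap(src, dst);
}
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window framebuffer
```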

Yes, but how would that work with multiple samples? A fragment is not a pixel…

A blend shader would be a good idea IMHO, though… the GPU could load the framebuffer pixels in advance and hide the latency completely… it wouldn’t need to be slow and would be really useful for stuff like post-processing effects (and maybe deferred rendering?)