Official nVidia ForceWare 52.16 and nothing about GLSL

Can someone with an ATI card report back whether your drivers have the same crazy features as the one I’ve mentioned before? It looks like a brutal hack: people with the same driver version say the functionality isn’t there.

yes, the stuff isn’t officially enabled. no clue why some people can see it, or what they did to enable it… possibly they use the Omega drivers?

It would be lovely to hear nVidia’s comment on this (GLSL, float cubemaps, etc.). Cass? Can I hope to enjoy this sort of stuff on my FX5600? However, I doubt that I will get an answer…

the answer is clear: glslang will be partially supported, as far as the hw is capable of it and as far as the driver developers are capable of writing a good compiler for it.

floating-point buffers in general are impossible on the gfFX; they only work in a very restricted way. i think this is the main reason ARB_superbuffers never really moves forward.

this “bug” of the gfFX prevents a lot of great solutions from being used in a platform-independent way… this is disappointing, as it would be THE feature i’d use most… and no gfFX user can run it.

Originally posted by davepermen:
floating-point buffers in general are impossible on the gfFX; they only work in a very restricted way.

Why? Please do not understand me falsch, I don’t disagree with you (I haven’t got any evidence on it), but I would like to know why. And please don’t say: “because FX sucks”.

because FX sucks…

no, but the hw support is JUST NOT THERE. there isn’t much you can do about it… they have texture_rectangles, which support float buffers, and that’s about it. don’t ask me why. i think it was a rather dumb decision not to plug in general support. i don’t see the technical issue and i guess i never will.
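roughly, that restriction looks like this in code (a sketch only, assuming the NV_float_buffer and NV_texture_rectangle tokens from a glext.h that has them; error checking omitted):

[code]
#include <GL/gl.h>
#include <GL/glext.h>

/* on NV3x, float internal formats are only accepted on the
   rectangle target, not on ordinary 2D textures */
void alloc_float_textures(int w, int h, const float *data)
{
    GLuint rect, tex2d;

    glGenTextures(1, &rect);
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, rect);
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_FLOAT_RGBA32_NV,
                 w, h, 0, GL_RGBA, GL_FLOAT, data);   /* accepted */

    glGenTextures(1, &tex2d);
    glBindTexture(GL_TEXTURE_2D, tex2d);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_FLOAT_RGBA32_NV,
                 w, h, 0, GL_RGBA, GL_FLOAT, data);   /* GL error */
}
[/code]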

i mean, float textures don’t have bilinear filtering anyway, so technically you could (for float luminance textures at least) just use an rgba texture to store the one float value, and sampling would be the same; only the data interpretation would change…

and that data interpretation they DO have in hw. so single-value float versions of ALL texture types would NOT have cost any more hw at all.
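as an illustration of that reinterpretation argument, here is a hypothetical CPU-side sketch (helper names are made up): one float, assumed to be in [0,1), stored as the four base-256 digits of an rgba8 texel — the same four bytes per texel as a 32-bit float texture. in a shader, the unpack would just be a dot product over the four channels.

[code]
#include <math.h>

/* pack one float, assumed to be in [0,1), into the 4 bytes of an
   RGBA8 texel as base-256 digits */
static void pack_unit_float(float v, unsigned char rgba[4])
{
    int i;
    for (i = 0; i < 4; ++i) {
        float d;
        v *= 256.0f;
        d = floorf(v);
        rgba[i] = (unsigned char)d;
        v -= d;
    }
}

/* exact inverse of the packing above, up to 32-bit precision */
static float unpack_unit_float(const unsigned char rgba[4])
{
    return (rgba[0] + (rgba[1] + (rgba[2] + rgba[3] / 256.0f)
                                / 256.0f) / 256.0f) / 256.0f;
}
[/code]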

the only reason i can see is that they completely underestimated the need for dx9 features… but… david kirk (dave?) stated: 16-bit floats are enough, as hollywood doesn’t use more…

first, kirk: yes, 16-bit floats are possibly enough, but hollywood is able to use them as textures, too, and use them A LOT! you forgot that in your hw.
second, kirk: it’s possibly enough for storing image data, because THAT’S where they use 16 bit. internally, they have used 32 bit for quite a while, and they know why: 16 bit is not usable for general-purpose math, not even shading (with a 10-bit mantissa, fp16 can only represent even integers above 2048, so error accumulates fast).

it’s very sad that the floating-point texture support on the FX is just not there. even if slow, it would be nice if it were simply there, so we could have a nice ARB_superbuffers for everyone…

My guess is that they intended float textures to be used only as an offscreen render target for multipass applications. In that case, mipmapping and texture wrapping aren’t needed. I agree it certainly is kinda silly, since there are many more uses for float textures than just offscreen render targets.
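For what it’s worth, a minimal sketch of that multipass pattern (assuming GL_TEXTURE_RECTANGLE_NV from glext.h; the offscreen buffer setup and shaders are omitted, and the function name is made up):

[code]
#include <GL/gl.h>
#include <GL/glext.h>

/* after drawing a pass, grab the result into a rectangle texture so
   the next pass can sample it -- no wrapping or mipmaps required */
void grab_pass_result(GLuint tex, int width, int height)
{
    glBindTexture(GL_TEXTURE_RECTANGLE_NV, tex);
    glCopyTexSubImage2D(GL_TEXTURE_RECTANGLE_NV, 0,
                        0, 0,   /* offset into the texture */
                        0, 0,   /* lower-left corner of the read buffer */
                        width, height);
}
[/code]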

yeah… and it’s always silly to add a feature for exactly one task…

they could just have thought:
“hm… float render target? interesting… hm… float textures? yeah… let’s just add float everything! i mean, the uses are there for sure!”
and damn, they would have been right with that guess… and i love my ati for just letting me upload all sorts of float textures, draw onto them, and all that.

that’s great: dropping the fixed range once and forever, completely.

(by the way: i’m wondering where the frequency/overclocking settings have gone from the property tab? does anyone know how to activate them?)

Originally posted by Zengar:
Please do not understand me falsch <snip>

German?

DAMN! You’ve got me!

As a matter of fact, a poor Ukrainian guy living in Germany. I should really do something about my English. You know, the conflict between German and English screws everything up.

2Zengar
ATI has had a draft implementation of GLSlang since Cat 3.6 or even earlier.


Originally posted by Zengar:
[b] DAMN! You’ve got me!

As a matter of fact, a poor Ukrainian guy living in Germany. I should really do something about my English. You know, the conflict between German and English screws everything up.[/b]

I know how it works. I’m currently taking a course in German at the university, and whenever I don’t know the German word I tend to use an English word, at least when I try to speak. Oh well, I’m making progress; I can say cool stuff like:
“My name is Humus. I went to school. That is not fun, but I have learned a lot of German.”

i think this is the main reason ARB_superbuffers never really moves forward.

Considering that the superbuffers spec will almost certainly not include floating-point buffer support intrinsically (maybe as an extension layered on top of it), I don’t see how one issue affects the other. Superbuffers really has little to do with floating-point buffers, and vice versa.

i don’t see the technical issue and i guess i never will.

That’s because you don’t understand hardware. I could think of a number of reasons why hardware would be built to make only texture rectangles work with float buffers.

Maybe on nVidia hardware, texture rectangles are something substantially more than just textures that are non-power-of-two and can’t be mipmapped. Certainly, render-to-texture performance bears this out: on nVidia cards, rendering to rectangle textures is much faster than rendering to non-rectangles.

Indeed, that particular case has meaning. If a rectangle texture is somehow particularly well suited to being used as a render target, then it stands to reason that the rectangle texture path is the first place to support float buffers. After all, one of the main reasons for using a float buffer is to use it as a render target (and, of course, later as a texture). If nVidia hardware is optimized to render to texture rectangles, then it makes sense that float textures are first implemented for texture rectangles. This particular texture-access path may simply be more flexible in nVidia’s hardware.

I agree it certainly is kinda silly, since there are many more uses for float textures than just offscreen render targets.

Not really. You have to budget your silicon. Do you want to simply not provide float buffers at all, or are you willing to spend a small quantity of silicon on a specialized case of float textures (one that covers more than 50% of the performance-relevant uses of float textures: both shadow mapping and float render targets)?

korval, of course i don’t understand hw, but then again, i DO understand that ati had NO problems implementing them everywhere, and, from software development, i know that you should always build independent working units.

thinking of THAT, i don’t see why they did it that way, or in what case it would’ve cost much more silicon. imho, it’s merely a wrong placement of the silicon. it’s just another sampling unit that addresses with a 4× larger stride. sampling that amount of data is no problem (bilinear filtering samples as much). sampling from another place is no problem (the hw is already able to sample from 8-bit, 16-bit, and 32-bit textures). sampling point-sampled float values is no problem. texture_rectangles prove this.

it is a ridiculous choice. but they made a lot of those in the nv30 design. it’s really not well-designed hw. they would have done better to drop the fixed-point part completely and make the floating-point support much tighter and much broader, textures included.

they wasted their silicon on the wrong things. that’s all.

oh, and good day humus. nice to see you writing German. this is a Swiss guy speaking. (and i refuse to capitalize nouns, hehe.)

Originally posted by davepermen:
oh, and good day humus. nice to see you writing German. this is a Swiss guy speaking. (and i refuse to capitalize nouns, hehe.)

Good day my Swiss friend, how’s it going? I have an exam in German on Thursday. I’m not looking forward to it. But I don’t think it is that hard.
Goodbye!

Whoohoo, my German rox… or not
OT …

Humus seems to be in a good mood… oh well, that’s not a bad thing

BTW, can someone tell me how to enable that health status tab in the driver panel (the one with the temperature indicator)? The hidden-features patch from Guru3D doesn’t really work, and I wasn’t able to find anything on the net…

Korval: I think that super_buffers will include some floating-point buffers by default, since it’s there to be used as both a vertex and a color buffer, and storing normals with only 8 bits per channel is not that good…

(speculation)
Maybe pixel_buffers is a sub-extension of superbuffers that only adds the color usage of the superbuffers, and therefore gives you contextless pbuffers and easier render-to-texture functionality, but not the whole superbuffers thing…

That would be a nice solution, I think: split the superbuffers extension into smaller ones, where VBO is still the vertex part and pixel buffers the pbuffer part.

/Mikael

Originally posted by Humus:
[b] Good day my Swiss friend, how’s it going? I have an exam in German on Thursday. I’m not looking forward to it. But I don’t think it is that hard.
Goodbye!

Whoohoo, my German rox… or not
OT … [/b]

That sounds super! Not perfect yet, but better than many colleagues I know! I wish you all the best and good luck with your test!!

yeah, German fun in here… sorry for being off-topic, btw

to be on-topic again:

(hehe)

Originally posted by Mazy:
[b]Korval: I think that super_buffers will include some floating-point buffers by default, since it’s there to be used as both a vertex and a color buffer, and storing normals with only 8 bits per channel is not that good…

(speculation)
Maybe pixel_buffers is a sub-extension of superbuffers that only adds the color usage of the superbuffers, and therefore gives you contextless pbuffers and easier render-to-texture functionality, but not the whole superbuffers thing…

That would be a nice solution, I think: split the superbuffers extension into smaller ones, where VBO is still the vertex part and pixel buffers the pbuffer part.

/Mikael[/b]

yeah, superbuffers by default would allow creating all sorts of float buffers. and they should then be bindable as float textures, float vertex buffers, etc. this will not really be possible on the gfFX, so full superbuffers support will not be possible on it.

this could be the reason why superbuffers aren’t there yet, as nvidia is normally fast at adding new extensions.

and yes, i think arb_pixel_buffer_object will be the subpart that gives us at least part of the fun of superbuffers.

i just hope arb_pixel_buffer_object will allow float buffers, too.
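if it did, something like this render-to-vertex-array path would open up (pure speculation, sketched with the ARB_vertex_buffer_object entry points plus the pixel-pack-buffer tokens as they would later appear in glext.h; the function name is made up):

[code]
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>

/* speculative: read rendered float pixels into a buffer object, then
   reinterpret that same buffer as vertex data -- no CPU round trip */
void pixels_to_vertices(GLuint buf, int w, int h)
{
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, buf);
    glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB,
                    w * h * 4 * sizeof(float), NULL, GL_STREAM_COPY_ARB);
    /* with a pack buffer bound, the pointer argument is an offset */
    glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, 0);
    glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

    /* same buffer, now treated as a float4 vertex array */
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, buf);
    glVertexPointer(4, GL_FLOAT, 0, 0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_POINTS, 0, w * h);
}
[/code]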