Does ATI support the hardware noise function?

Hi again,
Sorry to ask another question about noise, but I was wondering how well the built-in noise function in the OpenGL Shading Language is supported on ATI cards.
NVIDIA was nice enough to provide documentation describing what they support in OpenGL 2.0:
http://developer.nvidia.com/page/opengl.html
http://download.nvidia.com/developer/Papers/2005/OpenGL_2.0/NVIDIA_OpenGL_2.0_Support.pdf

6.1.1. Noise Functions Always Return Zero
The GLSL standard library contains several noise functions of differing dimensions:
noise1, noise2, noise3, and noise4.
NVIDIA’s implementation of these functions (currently) always returns zero results.

Sadly I could not find anything similar on ATI’s website:
http://ati.amd.com/developer/index.html

I am just curious whether they support it or not, so I can get a better understanding of how widely it may or may not be used.

Again thanks for your time.

As far as I know, ATI doesn't either; I'm not sure which thread I saw that in.
Well, my card doesn't, at least. :slight_smile:

“Link successful. The GLSL vertex shader will run in software due to the GLSL fragment shader running in software. The GLSL fragment shader will run in software - unsupported language element used.”

I find it annoying that there doesn’t seem to be an official response from ATI that says which cards do and do not support it.

Until now I never knew how the system would respond to my using an unsupported noise function. It's quite different from NVIDIA's approach of just returning 0 and keeping quiet about it.
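I suppose the best I could do at run time is pull the link log after linking and scan it for "software". A rough sketch of what I mean (the exact wording is surely driver-specific, so this is fragile by nature, and the GL 2.0 entry points are loaded however you normally load them):

/* Sketch: after glLinkProgram, read the info log and look for the
 * "will run in software" warning.  The wording matches the ATI
 * message quoted above but is driver-specific, so treat a match as
 * a hint rather than a guarantee. */
#include <GL/gl.h>
#include <string.h>

int runs_in_software(GLuint program)
{
    GLint len = 0;
    char  log[4096] = "";

    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &len);
    if (len > 0)
        glGetProgramInfoLog(program, sizeof(log), NULL, log);

    return strstr(log, "software") != NULL;
}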

Are you able to tell me which graphics card gave that error?

Would it be possible to contact ATI about this matter?

Why not generate the noise yourself, like everyone else? :slight_smile:

You were given a nice option in this thread…
http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=230017#Post230017

In addition to Perlin there’s also a simplex method floating about in these here forums as I recall. Pretty sweet.

Say, anyone have a nice wavelet method for the GPU? I tinkered with the one from a Pixar paper on the CPU but haven’t really followed thru on it…
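For anyone who just wants something to bake into a texture in the meantime, plain value noise on the CPU does the trick. A rough sketch (not Perlin or simplex; the hash is the usual integer-noise trick and the constants are arbitrary):

/* Quick-and-dirty 2D value noise on the CPU -- just a sketch, good
 * enough to bake into a noise texture. */
#include <math.h>

static float hash2(int x, int y)
{
    int n = x + y * 57;
    n = (n << 13) ^ n;
    /* classic integer noise: pseudo-random value in roughly [-1, 1] */
    return 1.0f - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff)
                  / 1073741824.0f;
}

static float lerpf(float a, float b, float t) { return a + t * (b - a); }

float value_noise2(float x, float y)
{
    int   ix = (int)floorf(x), iy = (int)floorf(y);
    float fx = x - ix,         fy = y - iy;

    /* smooth the fractional part to hide the lattice */
    float u = fx * fx * (3.0f - 2.0f * fx);
    float v = fy * fy * (3.0f - 2.0f * fy);

    float a = hash2(ix,     iy);
    float b = hash2(ix + 1, iy);
    float c = hash2(ix,     iy + 1);
    float d = hash2(ix + 1, iy + 1);

    /* bilinear blend of the four corner values */
    return lerpf(lerpf(a, b, u), lerpf(c, d, u), v);
}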

My hunch is that hardware noise is still too expensive to implement robustly; otherwise we'd surely have it by now, as useful as it is.

That is probably part of the reason.
Another problem with noise is that it can vary a lot between implementations. You may also need to tweak the seed, the frequency distribution, the periodicity, …
For example, RenderMan's noise() can differ between implementations, whereas most people want Pixar's RenderMan noise™.

Most artists don't want different rendering across video cards; some even complain about very subtle differences in AA or aniso filtering…

Why not generate the noise yourself, like everyone else? :slight_smile:

This is no longer about looking for a workaround. I am thankful for the solution I was given in my previous thread, and I have a noise function written in Java that I am just about to use to build the noise texture.

It's more about how a company can fail to implement a function, for whatever reason, and then fail to tell their developers anything about it. If all of a sudden they released a working function in their latest cards that was somehow 10 times faster than what I have at the moment, I would like to know.

Granted, I am new to the graphics scene, but what if NVIDIA got the function working at their end? I would then have nothing that says it won't work if I use an ATI card. I really doubt that owning all of ATI's product range is a reasonable testing solution for my level of development.

Sorry for the rant, but I will be interested in what other people have to say.

Edit: Thanks for the input, ZbuffeR; you replied a minute before me, but that's an interesting reason as to why it must be so hard to develop a noise function.

No ATI card supports it in hardware.

No ATI card supports it in hardware.

Thanks, I will take that advice on board, but my real annoyance is: why can't ATI say that?

I presume you know that because you are familiar with ATI’s hardware and have followed their developments with GLSL.

Granted, I am new to the graphics scene, but what if NVIDIA got the function working at their end? I would then have nothing that says it won't work if I use an ATI card. I really doubt that owning all of ATI's product range is a reasonable testing solution for my level of development.

Well, even if it were written down somewhere how would you determine this at runtime?

I think when noise is finally implemented there will be an extension of some kind exposed or a version number… something you can check for. Otherwise this would obviously be a pickle for everyone.
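Presumably it would end up being the usual extension-string check. Something like this sketch, with a made-up extension name standing in for whatever would actually get exposed:

/* "GL_XX_hardware_noise" is purely a placeholder -- no such extension
 * exists.  This is just the usual (naive) substring check against the
 * extension string. */
#include <GL/gl.h>
#include <string.h>

int hardware_noise_available(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_XX_hardware_noise") != NULL;
}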

I used to work at ATI. Just left a couple of months ago.

I used to work at ATI. Just left a couple of months ago.

Wow, now I understand the reason behind your guru status. :slight_smile: So would you then class your statement:

No ATI card supports it in hardware.
as an unofficial yet very accurate response from ATI?

Well, even if it were written down somewhere how would you determine this at runtime?

If I knew which graphics cards supported the noise function I want to use, could I not use glGetString() with GL_VENDOR, GL_RENDERER and GL_VERSION to get the information required to decide which shaders to load?
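Something along these lines is what I had in mind. Only a sketch: the renderer name and shader file names are invented, since the list of cards that really support it is exactly the information I can't find.

/* Sketch of the idea: choose a shader path from the renderer string.
 * "SomeVerifiedGPU", "noise_hw.frag" and "noise_texture.frag" are
 * placeholders for illustration. */
#include <GL/gl.h>
#include <string.h>

const char *pick_fragment_shader(void)
{
    const char *renderer = (const char *)glGetString(GL_RENDERER);

    /* hypothetical "known good" check: only use the noise() shader on
     * cards we have actually verified ourselves */
    if (renderer && strstr(renderer, "SomeVerifiedGPU"))
        return "noise_hw.frag";

    /* everyone else gets the precomputed-texture fallback */
    return "noise_texture.frag";
}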

Although I am curious: is this a one-off case that I might excuse ATI for, or are there other functions that don't work on ATI or possibly NVIDIA hardware that they have decided not to document?

It just makes it hard for people who don't have access to an ATI card to test their work on, and who have no idea whether a specific OpenGL 2.0 function will fail on a card that claims to support OpenGL 2.0.

could I not use glGetString() with GL_VENDOR, GL_RENDERER and GL_VERSION to get the information required to decide which shaders to load?

That is a very fragile way of doing things. Vendors have an annoying tendency to change these with new drivers, even without new hardware.

Yeah, parsing these strings is, ugh, not pretty. Mesa, for example, has a version string different from everyone else's: "1.4 (2.1 Mesa 7.0.1)". It does support OpenGL 2.1, but a naive parser will get 1.4 back. (I think the first release of Doom3 broke due to this…)
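Something like this survives the Mesa case, though I wouldn't swear it covers every driver out there:

/* Rough sketch of a version parse that copes with Mesa's
 * "1.4 (2.1 Mesa 7.0.1)" style string: take the leading major.minor,
 * but if the string mentions Mesa, prefer the version inside the
 * parentheses. */
#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

void gl_version(int *major, int *minor)
{
    const char *ver = (const char *)glGetString(GL_VERSION);

    *major = *minor = 0;
    if (!ver)
        return;

    sscanf(ver, "%d.%d", major, minor);

    if (strstr(ver, "Mesa")) {
        const char *paren = strchr(ver, '(');
        if (paren)
            sscanf(paren + 1, "%d.%d", major, minor);
    }
}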

Unfortunately, I don’t think there is a cross-platform way to retrieve information regarding the video card. What I do is make a guess using the vendor/renderer/version strings and hope for the best.

The version string issue came up recently in the suggestions forum… there was a good chuckle over how easy it is to get this wrong.

It was available on 3Dlabs hardware. They also had their own RenderMonkey-style package for download that showed off dynamic loops in the fragment shader. From the discussion I've seen here, they had pretty advanced GPUs.
The Perlin noise function is too slow for gaming, which is why there are simpler tricks, so there isn't a strong need for this function on commodity GPUs.

V-man, 3Dlabs had quite advanced GPUs feature-wise, but they were not that great in the performance department. Even with their hardware noise available, they advised you to favor a precalculated texture instead if possible.

Interesting about the 3Dlabs stuff, V-man. I never played with any of their hardware, but it seems pretty sweet. I've used their shader validator tool quite a bit though.

Well, I was never an official spokesperson for ATI, but I’d say that is an accurate description of ATI hardware currently on the market.

I believe noise actually works on ATI (I haven't tested it recently though), but you just get software rendering.

Really? This seems pretty stupid (what does the 1.4 refer to, anyway?), and I'm not entirely sure that would even be conformant. The specification is a bit fuzzy and says "a version", but it's pretty obvious that the intent is for the OpenGL version to be returned.

According to the specification it’s also OK to use GL_VENDOR and GL_RENDERER to identify hardware:

Because the GL does not include queries for the performance characteristics of an implementation, some applications are written to recognize known platforms and modify their GL usage based on known performance characteristics of these platforms. Strings GL_VENDOR and GL_RENDERER together uniquely specify a platform. They do not change from release to release and should be used by platform-recognition algorithms.

In practice I don't think this has always been true, and it's worth noting that it only works for current hardware, not future hardware. There is, for instance, no guarantee that future ATI hardware will report "ATI Technologies Inc." as the vendor; it'll probably be "AMD something" at some point.

I'd use a precalculated texture anyway, even if the performance were good, simply because of mipmapping, which solves many hard aliasing problems you'd get with a plain noise() function. I suppose noise() could be useful in the vertex shader, though.
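Baking the texture is only a few lines anyway. A sketch, reusing the little CPU value noise posted earlier in the thread (any noise source would do):

/* Sketch: bake noise into a 256x256 luminance texture and let GLU
 * build the mipmap chain; sample it with trilinear filtering. */
#include <GL/gl.h>
#include <GL/glu.h>

/* the CPU value noise from the sketch earlier in the thread */
extern float value_noise2(float x, float y);

GLuint make_noise_texture(void)
{
    enum { N = 256 };
    static unsigned char pixels[N * N];
    GLuint tex;
    int x, y;

    for (y = 0; y < N; ++y)
        for (x = 0; x < N; ++x) {
            float n = value_noise2(x * 0.0625f, y * 0.0625f); /* ~[-1,1] */
            pixels[y * N + x] = (unsigned char)((n * 0.5f + 0.5f) * 255.0f);
        }

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_LUMINANCE, N, N,
                      GL_LUMINANCE, GL_UNSIGNED_BYTE, pixels);
    return tex;
}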