noise() in GLSL?

I know that noise() isn't supported in GLSL on Nvidia/ATI cards, but when, if ever, will it be? As far as I can tell, noise() still isn't working on my 8800 GTX card.

My guess is that true hardware noise will require expensive specialized silicon (perhaps in the next generation).

Shader noise is very doable, especially now with all that added uniform space in the G80. Probably a good bit quicker to boot.

Check this PDF, page 7:

Noise Functions Always Return Zero
The GLSL standard library contains several noise functions of differing dimensions: noise1, noise2, noise3, and noise4.
NVIDIA’s implementation of these functions (currently) always returns zero results.

It seems that NVIDIA hasn't implemented this yet.

AFAIK there is something patent-related involved … 3DLabs maybe?

Maybe the Wildcat graphics cards implement it in their hardware core.

I know that 3DLabs hardware has noise() support, just not consumer hardware… Maybe Quadro and FireGL have it?

In general I would suggest just using texture lookups for noise. Not only because it will run in hardware, but even if hardware supported noise, there would be plenty of cases where a noise texture lookup would be preferable, simply because you can mipmap it and thus avoid many of the aliasing problems you'd see with a pure noise call.
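A minimal sketch of that approach, assuming you've uploaded a small tileable noise texture with mipmaps (the uniform and varying names here are illustrative, not standard):

```glsl
// Fragment shader: replace noise() with a lookup into a tileable
// noise texture. Mipmapping band-limits the result automatically
// as the surface recedes, which a raw noise() call cannot do.
uniform sampler2D noiseTex;   // e.g. a 256x256 mipmapped noise texture
varying vec2 texCoord;

void main()
{
    // Scaling the coordinates picks the noise frequency; the texture
    // wraps, so values outside [0,1] tile seamlessly.
    float n = texture2D(noiseTex, texCoord * 8.0).r;
    gl_FragColor = vec4(vec3(n), 1.0);
}
```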

Antialiasing of procedural textures is a bit tricky, but perfectly doable. Off-line rendering has been doing it for ages. The key is to compute the partial derivatives Dx() and Dy() of the texture coordinates. There is a chapter on it in good RenderMan books, and it is featured in the annual Siggraph course on RenderMan SL.
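In GLSL the analogue of RenderMan's derivative trick is the built-in dFdx()/dFdy()/fwidth() functions. A rough sketch of the idea, fading a hard procedural pattern toward its average as its detail approaches one cycle per pixel (the pattern and constants here are just for illustration):

```glsl
// Fade a procedural stripe pattern out as it becomes sub-pixel,
// using screen-space derivatives of the texture coordinates.
varying vec2 texCoord;

float pattern(vec2 p) { return step(0.5, fract(p.x)); } // hard stripes

void main()
{
    float freq = 20.0;
    vec2 p = texCoord * freq;
    // fwidth() estimates how much p.x changes per pixel; once that
    // approaches 1.0, the pattern would alias badly.
    float filterWidth = fwidth(p.x);
    float fade = clamp(filterWidth, 0.0, 1.0);
    // Blend toward the pattern's average value (0.5) instead of aliasing.
    float v = mix(pattern(p), 0.5, fade);
    gl_FragColor = vec4(vec3(v), 1.0);
}
```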

What do you want to use noise() for?
I’m struggling to find a use for it.

Usage of noise ? Any kind of small scale procedural detail. Use as random dirt, or as alpha mask between 2 or more textures.
For an example, go to the image right below “fine bump noise” here, and compare it to the one above:

Sometimes it can be worthwhile to avoid using (and designing!) a texture.
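The alpha-mask use mentioned above could look something like this (sampler names are placeholders; here the noise comes from a texture rather than noise()):

```glsl
// Blend between two tiling surface textures using a noise value
// as the mask, instead of authoring a dedicated mask texture.
uniform sampler2D grassTex;
uniform sampler2D dirtTex;
uniform sampler2D noiseTex;
varying vec2 texCoord;

void main()
{
    float mask = texture2D(noiseTex, texCoord * 4.0).r;
    // Sharpen the mask so the transition reads as patches, not fog.
    mask = smoothstep(0.4, 0.6, mask);
    gl_FragColor = mix(texture2D(grassTex, texCoord),
                       texture2D(dirtTex, texCoord),
                       mask);
}
```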

No, I've no problem understanding the use of noise itself in asset creation, just not unnecessary asset creation and re-creation 60 times a second for millions of pixels. You wouldn't, for example, render a 3-million-poly leaf into a texture every frame if the leaf never changed; you'd use a leaf texture. Just leave it in the asset creation stage.

Just leave it in the asset creation stage.

Unless you can find something better to do with the space reclaimed from a 2048x2048 texture.

There are two purposes in using procedural textures. One is to provide uber-resolution textures that will work at any/all resolutions. The second is to save memory for assets that can’t be procedurally generated.

just not unnecessary asset creation and re-creation 60 times a second for millions of pixels

That's what they do in RenderMan and mental ray, so it's not entirely unheard of. You've got to remember that any texture asset is going to use a fair amount of memory. Noise and turbulence functions can help add detail or grain to a texture, or even generate a complete texture on the fly if you are limited in the amount of memory available. It might not be something you use all the time, but it has plenty of other uses in vertex and fragment shaders (for example, adding randomness to a particle system…).
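The particle-system case doesn't even need real noise; a cheap hash of a per-particle attribute is enough. A sketch in legacy-style GLSL (the attribute name and constants are illustrative, and the one-liner hash is the common fract/sin trick, not a built-in):

```glsl
// Vertex shader: per-particle pseudo-randomness without noise(),
// from a hash of a per-particle id attribute.
attribute float particleId;
uniform float time;

// Widely-used cheap hash; returns a value in [0,1).
float hash(float n) { return fract(sin(n) * 43758.5453); }

void main()
{
    // Give each particle its own phase and speed.
    float phase = hash(particleId);
    float speed = 0.5 + hash(particleId + 17.0);
    vec4 pos = gl_Vertex;
    pos.y += sin(time * speed + phase * 6.2831) * 0.1;
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
```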

use a texture.
I’ve got 2 gigs in this card - memory’s a non-issue.

You have a 2GB video card? That seems unlikely, considering I don't think nVidia or ATI even make those.

Even if that's true, most people don't. The median is still 128MB. And there's still the better visual quality you get from a texture of virtually infinite resolution compared to a mere 2048x2048, or even 4096x4096.

Oh, and let's not forget the performance benefit. The fewer textures you use, the less on-card bandwidth you're using, which can then be spent on other things, like vertex transfer or fetches from other textures.

That's not to say it's always a win. But dismissing an effective tool for visual quality and performance out of hand, just because you don't like it or can't be bothered, is rather silly.

oops, forget I mentioned my card.
So what screen resolution are you going to be using to display this 4096^2 (or infinite resolution) texture?

I don’t know if you have heard about it, but there is a thing called “zoom”. It has something to do with viewing something up close :wink:

Using multiple texture detail levels and dynamically swapping solves the memory problem at the cost of bandwidth. Using procedural textures solves the memory and bandwidth problem at the cost of GPU cycles.

In the end, it’s all a tradeoff for visual quality. When you have spare memory and/or bandwidth, you use more or higher resolution textures, when you have spare GPU cycles, you use procedural methods.

what are these spare gpu cycles you speak of?

I can see uses of noise() functions, but knackered has a point. In many cases just using a texture is more than good enough, or even preferable to using noise() functions. If you use noise(), you’ll have to deal with aliasing. Not that it’s impossible, but it’s not free either. Texture lookups are relatively cheap. Mipmapping essentially comes for free, or even speeds things up in addition to solving aliasing problems.

And another problem is that the noise() functions are loosely defined. So the results probably won’t match between different IHVs.

Yes, using a noise texture instead of a noise function is probably better in most cases.

My argument was more general, about procedural methods on the GPU in contrast to pre-generating everything into a texture. I have to admit that I do pre-generate all procedural stuff on the CPU myself, but that’s just because I have loads of spare texture memory available. If I was GPU memory limited, I would have to do it on the GPU, using some noise (either in a texture or from the noise() function).