Cube Map EMBM

Won,

Actually the “Blinn Bump Mapping” demo is a port to DX8 of an OpenGL demo. We’ll have that stuff on the web site soon.

I have no idea why the DX8 guys are calling it “Blinn Bump Mapping” - in the OpenGL group we call it “True Reflective Bump Mapping”.

Dave –

Why would I care about “theoretical capabilities” of the APIs? I care about how effectively the API exposes what is available in hardware. What’s wrong with this? For example, compare the DX8 pixel shaders with 1050’s texture shaders (yes, this driver will be available when the hardware comes out, when it is actually useful to me). AFAIK, texture shaders aren’t a “complete” implementation of DX8 pixel shaders, but they also implement some features not available in pixel shaders. However, texture shaders will be fully accelerated but pixel shaders might not.

Great. Ray-traced radiosity. Uh huh. Is there something about Starcraft3D you’d like to tell us?

Cass – I think it’s called Blinn Bump Mapping (for DX8) because the guy who invented “true” bump mapping is named Blinn. And he works for Microsoft now.

Anyway… while you’re putting the true bump mapping demo up for OGL, how about the elevation map demo, too? That seems like a cool one to play with.

Actually, texture shaders should be a superset of the DX8 pixel shader texture lookup functionality.

  • Matt

I don’t have anything to do with Starcraft3D anymore, so I cut the ray-traced radiosity out and changed it to simple vertex-lit DX8.

No, but in fact, at some point we will have to change to ray tracing, because there is no REAL way to do reflections, refractions, volumetric objects, transparency, etc… (OK, the GF3 looks like she can do it anyway, but she can’t (sorry, it).)

I don’t like DX8 in some ways… you’re never sure what is hardware and what is software, and with the pixel shader it changes from driver to driver… The texture_shader is made by the GF3 itself; it is an API extension built out of the GF3 architecture… The pixel shader of DX8 is made by Microsoft, and now NVIDIA/ATI etc. have to “construct” a GPU around it… That’s somehow not logical… But anyway… have a nice day… I got what I wanted… I stopped the illegal releasing of unfinished NVIDIA drivers (looks like I did).

Won,

I checked on the “Blinn Bump Reflection” name, and the guy who originally used that terminology (a long time ago) said it was in deference to Blinn, but not due to the specific technique. True Reflective Bump Mapping is the term we’ll be using to describe this technique.

I’m all for giving Blinn credit for the amazing things he’s come up with in graphics, but you could argue that most forms of bump and reflection mapping could have some sort of Blinn name.

Thanks -
Cass

Which begs the question: what exactly do you mean by “true” reflective bump mapping? Let me enumerate the real-time techniques that I know of, as I understand them:

  1. Emboss. Weird hack, barely worth mentioning except that it is a “Least Common Denominator” solution

  2. Dot3 Bump Mapping. Basically, per-pixel lighting with a normal-map. No environment maps.

  3. Bump Environment Mapping. An environment map lookup where the actual lookup is directly dependent (the texture coords are offset) on a tangent space dUdV perturbation map.

  4. Blinn (True?) Bump Mapping. An environment map lookup where the actual lookup is indirectly dependent (the normal, which is further used to compute the texture lookup, is perturbed) on a tangent space dUdV perturbation map.

Is this correct? Should I be calling them “perturbation maps?”
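
To make sure I have (3) and (4) straight, here’s a minimal CPU-side sketch of the two lookups as I understand them (plain C; sample_2d / sample_cube / sample_dudv are hypothetical stand-ins for the hardware fetches, not any real API):

[code]
#include <math.h>

typedef struct { float x, y, z; } vec3;

static vec3  vsub(vec3 a, vec3 b)    { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static vec3  vscale(vec3 a, float s) { vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }
static float vdot(vec3 a, vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Hypothetical texture fetches -- stand-ins for whatever the hardware does. */
extern vec3 sample_2d(const void *tex, float u, float v);
extern vec3 sample_cube(const void *cube, vec3 dir);
extern void sample_dudv(const void *tex, float u, float v, float *du, float *dv);

/* (3) Bump environment mapping: the dUdV map directly offsets the texture
 * coordinates of the environment-map lookup (through a 2x2 matrix M).      */
vec3 bump_env_map(const void *envmap, const void *dudv_map,
                  float u, float v, const float M[4])
{
    float du, dv;
    sample_dudv(dudv_map, u, v, &du, &dv);
    return sample_2d(envmap,
                     u + M[0] * du + M[1] * dv,
                     v + M[2] * du + M[3] * dv);
}

/* (4) "True" reflective bump mapping: the per-pixel normal perturbs the
 * reflection vector, and that vector indexes a cube map.                   */
vec3 reflective_bump_map(const void *cubemap, const void *normal_map,
                         float u, float v, vec3 eye_to_pixel)
{
    vec3 n = sample_2d(normal_map, u, v);          /* per-pixel normal      */
    n = vscale(n, 1.0f / sqrtf(vdot(n, n)));       /* renormalize per pixel */
    /* R = E - 2(N.E)N, with E pointing from the eye toward the surface.    */
    vec3 r = vsub(eye_to_pixel, vscale(n, 2.0f * vdot(n, eye_to_pixel)));
    return sample_cube(cubemap, r);
}
[/code]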

-Won

The best reflection we can currently provide is something I have always called cubic environment bump mapping, which is:

calculating the reflection vector per pixel, based on a per-pixel calculated (and normalized) normal, and using this reflection vector per pixel as the lookup into a cubemap…

This is done in this demo:
http://www.nvidia.com/Marketing/developer/devrel.nsf/pages/18DFDEA7C06BD6738825694B000806A2

There is another technique, which needs the texture_rectangle I think, and looks like this:
http://www.sharkyextreme.com/inc/d_scree…review/dino.jpg

Great, no? But there is no information openly available from NVIDIA on how to do this, unless you work at a big company. I don’t, so I can’t help you here…

But, for doing stuff:

Per-pixel diffuse lighting:
per-pixel normal dot per-pixel light vector

Per-pixel specular lighting:

(per-pixel reflected eye->point vector dot per-pixel light vector)^factor

(This power is not supported by the register combiners, so you have to do a simple dot product * dot product (== ^2), or some approximation with something linear, for example…)

Per-pixel reflection:

the per-pixel reflected vector (same as in specular lighting) used as the coordinates/lookup into a cubemap, to reflect the cubemap…

Real per-pixel reflection:
not possible without ray tracing, or rendering the whole scene with a camera (dir = reflected vector, pos = point position) for every point… so use ray tracing, it’s faster there.

New type of per-pixel reflection (like the dino image):
render the scene mirrored in the plane of the sea, copy it onto a texture_rectangle, then render the sea with this texture_rectangle as the texture, using the screen coordinates of the flat water at every vertex (which isn’t at the same place now), or do some other displacement… I’m still thinking about what could be used for this… but I think it’s something like that…

That’s what I know currently…
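
In plain C, the diffuse/specular/reflection math above comes out roughly like this (a sketch only; cube_lookup() is a hypothetical stand-in for the cubemap fetch, and the specular power is approximated by repeated squaring since the combiners have no pow):

[code]
#include <math.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3  scale3(vec3 a, float s) { vec3 r = { a.x * s, a.y * s, a.z * s }; return r; }
static vec3  sub3(vec3 a, vec3 b)    { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static vec3  normalize3(vec3 a)      { return scale3(a, 1.0f / sqrtf(dot3(a, a))); }
static float clamp01(float x)        { return x < 0.0f ? 0.0f : (x > 1.0f ? 1.0f : x); }

/* Hypothetical cube-map fetch -- stands in for the hardware lookup. */
extern vec3 cube_lookup(const void *cubemap, vec3 dir);

/* Per-pixel shading as described above: N is the per-pixel normal, L the
 * light vector, E the eye->point vector, all in the same space.           */
void shade_pixel(vec3 N, vec3 L, vec3 E, const void *cubemap,
                 float *diffuse, float *specular, vec3 *reflection)
{
    N = normalize3(N);
    L = normalize3(L);
    E = normalize3(E);

    /* Reflect the eye->point vector about the normal: R = E - 2(N.E)N.    */
    vec3 R = sub3(E, scale3(N, 2.0f * dot3(N, E)));

    /* Diffuse: N . L, clamped.                                            */
    *diffuse = clamp01(dot3(N, L));

    /* Specular: (R . L)^factor.  The combiners cannot raise to a power,
     * so approximate by repeated squaring -- each squaring doubles the
     * exponent; two of them give ^4.                                      */
    float s = clamp01(dot3(R, L));
    s = s * s;
    s = s * s;
    *specular = s;

    /* Reflection: use R as the cube-map lookup direction.                 */
    *reflection = cube_lookup(cubemap, R);
}
[/code]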

Won,

True Reflective Bump Mapping would have been called EMBM, except that name was already taken by DX6. To avoid confusion, we don’t use that name; in GeForce3 OpenGL, that DX6-style functionality is called “offset texturing”.

For True Reflective Bump Mapping, we compute a reflection vector per-pixel based on the per-pixel normal (from the normal map) and the eye vector (interpolated per-vertex). That reflection vector is then looked up in a cubemap.

As davepermen points out, you can’t get self-reflection on an object, but you can get object inter-reflection if you compute cubemaps for each object.

There is also another new way to do bump mapping on GeForce3 that is described in the technical documentation that should be up soon.

Thanks -
Cass
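
For reference, the GeForce3 OpenGL path for what Cass describes is the NV_texture_shader “dot product reflect cube map” setup. A rough sketch, with the enum names recalled from the extension spec (double-check them there) and normal_map_tex / env_cube_tex as placeholder texture objects:

[code]
/* Stage 0 fetches the normal map; stages 1-3 take texture coordinates that
 * carry the rows of the tangent-to-cube-space matrix (eye vector packed into
 * q for the EYE_FROM_QS variant); stage 3 reflects and does the cube lookup. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, normal_map_tex);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_TEXTURE_2D);

glActiveTextureARB(GL_TEXTURE1_ARB);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

glActiveTextureARB(GL_TEXTURE2_ARB);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV, GL_DOT_PRODUCT_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

glActiveTextureARB(GL_TEXTURE3_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, env_cube_tex);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_SHADER_OPERATION_NV,
          GL_DOT_PRODUCT_REFLECT_CUBE_MAP_EYE_FROM_QS_NV);
glTexEnvi(GL_TEXTURE_SHADER_NV, GL_PREVIOUS_TEXTURE_INPUT_NV, GL_TEXTURE0_ARB);

glEnable(GL_TEXTURE_SHADER_NV);
[/code]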

Even computing cubemaps for every object is not correct reflection… (OK, it’s very close, and when the objects are not colliding all the time, meaning especially when you do a space game where the objects are really far apart, it’s precise enough…)

I can’t wait for the technical description of the new technique, which is what’s used in the dino image, I suppose?

The dino reflected in the water uses offset texturing. The water ripples are from a DSDT texture perturbing a reflection image that is computed by a dynamic copy-to-texture pass of the reflected view.
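
In plain OpenGL terms, the reflected-view pass that feeds that copy-to-texture looks roughly like this (a sketch only; water_h, reflection_tex, tex_w / tex_h and draw_scene() are assumed names, and the DSDT/offset-texture stage that actually perturbs the final lookup is omitted):

[code]
/* Render the scene mirrored about the water plane y = water_h, then grab the
 * result into a texture for the water pass to perturb.  A user clip plane at
 * the water level would normally be enabled here as well.                    */
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(0.0f, water_h, 0.0f);      /* move the plane to y = 0            */
glScalef(1.0f, -1.0f, 1.0f);            /* mirror the world about that plane  */
glTranslatef(0.0f, -water_h, 0.0f);
glFrontFace(GL_CW);                     /* mirroring flips the winding        */
draw_scene();                           /* everything except the water        */
glFrontFace(GL_CCW);
glPopMatrix();

/* Copy the framebuffer into the reflection texture.                          */
glBindTexture(GL_TEXTURE_2D, reflection_tex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, tex_w, tex_h);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* Then draw the real scene, and finally the water surface textured with
 * reflection_tex, letting the ripple map offset the lookup per pixel.        */
[/code]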

So in that dinosaur educational title from Crytek, the reflecting ponds are actually done somewhat like the standard reflect scene over plane and render technique? The reflected scene is just drawn into a texture and then perturbed according to a normal map texture?

Sounds like an interesting technique. But wouldn’t it basically be limited to planar geometry, unlike cube mapping?


Speaking of cube mapping and inter-object reflection, I had a thought as to how to do relatively realistic multi-bounce reflections with minimal processing power.

The traditional way to do multi-bounce reflection with cube maps is to recursively calculate the cube maps for each object. The cube maps calculated for every object are then used to calculate the cube maps for every other object, to get multiple bounces in reflections. Every time the cube maps are recalculated, the reflections go one bounce deeper. This is an exponential algorithm, so it is very slow.

Instead of doing this, why not calculate only one level? Except that when calculating the cube map for an object, you render the other objects into the cube map using their cube maps from the last frame of animation. That way, over multiple frames of animation, the reflections would get deeper and deeper.

There would be a slight time lag, but not very much. Assuming 60 frames per second, the cube maps would be about 6 reflections deep in 1 tenth of a second, and nobody is going to notice the accuracy of 6 bounce reflections.
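
In rough pseudo-C the per-frame update would be something like this (Object, render_cube_face() and swap_cube_maps() are just made-up names for the idea):

[code]
typedef struct Object Object;

/* Draw every object except `target` into one face of target's NEW cube map,
 * texturing each of them with the cube map it got LAST frame.               */
extern void render_cube_face(Object *target, int face, Object *all[], int count);
/* Promote an object's newly rendered cube map to be next frame's "previous". */
extern void swap_cube_maps(Object *obj);

void update_reflections(Object *objects[], int count)
{
    /* One cube-map rebuild per object per frame: reflections get one bounce
     * deeper every frame instead of needing an exponential recursive rebuild. */
    for (int i = 0; i < count; ++i)
        for (int face = 0; face < 6; ++face)
            render_cube_face(objects[i], face, objects, count);

    for (int i = 0; i < count; ++i)
        swap_cube_maps(objects[i]);
}
[/code]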

Anyway, what do you think?

j

j,

Yes, the “distorted reflective plane” approach pretty much only works well for flat or nearly flat surfaces. This is one thing that you didn’t hear much about when people were hyping DX6 EMBM, but once you start trying to do the math with only DSDT dependent texture and a single per-object 2x2 matrix, it becomes painfully clear.
Still, this approach is the easiest way to capture the reflection of objects that are near or penetrating the water’s surface. Cube maps are not so useful in this regard.

Regarding your idea on multiple object inter-reflection, I bet that would work quite well. Of course, self-reflection is a whole nother story.

Thanks -
Cass

Originally posted by davepermen:
[b]Even computing cubemaps for every object is not correct reflection… (OK, it’s very close, and when the objects are not colliding all the time, meaning especially when you do a space game where the objects are really far apart, it’s precise enough…)

I can’t wait for the technical description of the new technique, which is what’s used in the dino image, I suppose?[/b]

So what if it’s not correct? If it looks good, what does it matter? RenderMan doesn’t use ray tracing and it can generate very realistic scenes.

It’s not correct, that’s all I said… When NVIDIA guys say it IS correct, then they are wrong, and I think that should be said…

Second, doing it really correctly is sometimes faster than doing something nearly correctly… I mean, try to get a modern GeForce3 demo running at 320x240 on a CPU in software, fully optimized, sure; now try it with ray tracing… you can get a correct scene, per pixel… and it’s faster on CPUs (I know of software ray tracers which run at 320x240 even with plain C code, no MMX/3DNow! or whatever assembler built in, and they run at around 25 fps…).

I like the nearly perfect cubemap, yes… but I would prefer a correct one if I could have it…

But I think it is currently much too big a step to throw out the rasterizer, so we put the vertex_program and texture_shader together and play with the wrong technology for the next 10 years, and no one has real problems with it… but by then the CPU will be fast enough and cheap enough to do it without:

Two 1 GHz Athlons in parallel… one as CPU, one as GPU… that’s fast enough, and in a few years this will be possible, I think…

But the cubemap itself is the best thing anyone has created… for per-pixel operations it’s just not really possible without one…

Originally posted by davepermen:
[b]Second, doing it really correctly is sometimes faster than doing something nearly correctly… I mean, try to get a modern GeForce3 demo running at 320x240 on a CPU in software, fully optimized, sure; now try it with ray tracing… you can get a correct scene, per pixel… and it’s faster on CPUs (I know of software ray tracers which run at 320x240 even with plain C code, no MMX/3DNow! or whatever assembler built in, and they run at around 25 fps…).[/b]

So you are saying that a real-time raytracing engine in software is faster than a software implementation of the GeForce3 would be?

I think that is possible, but will the raytracer look as good as the GeForce3 emulator? Probably not. Per-Pixel bump mapping effects wouldn’t be available unless you did the calculations for them every time a ray intersected with an object. And I can assure you that this would be MUCH slower than the GeForce3 emulator.

Sort of like comparing apples to oranges. The implementation is completely different.

Not to mention that “real-time raytracers” aren’t exactly correct. Raytracing itself is an approximation of what occurs in real life.

So if you are looking for “correct” simulation of an environment, going outside would be the best implementation around - it runs at infinite FPS with almost infinite detail.

j

This expands the topic a bit, but I know hardware ray-tracing boards already exist; I saw them at SIGGRAPH.
Anyway, they don’t render OpenGL and need a specific interface to the 3D software.

Yeah, sure, everything exists, even a texture3D-only renderer (a raycaster in this case).

There are a lot of boards out there… but most of the time they are not cheap enough (no, I don’t currently have $2000+ for such a board), and so no one has them…

We have to wait for the first “buyable” one, and then we have to make sure everyone buys it, and then we can start developing for them… then it will rock really hard… but currently we are sitting on stupid rasterizing graphics “emulators”… By the way, is this going off topic? And? What’s the problem… the whole time (when I’m posting), it goes off topic.

But I think this thread has nice info on current texture effects and how good each one is… and that’s in fact what it should be.

I’m now buying a PDA from Compaq (the one with 350 MHz)… to start developing things there… no damn API with extensions which aren’t supported because someone doesn’t want to give them out for free… but no floating-point unit either… Now let’s see how much I can get working there… By then I think the GF3 will be out to buy and I can start doing things with the texture shaders…

A question: how good is it to use 32-bit logarithmic “floats”?

Meaning: store the base-2 logarithm instead of the value itself… that gives a big range… and is fast for multiplying (just add the values)… Let’s see what I can do with that for replacing the floating-point unit in the PDA. (Fixed point is not good for dot products… because a 32-bit value can store up to 4294967296, but for a multiplication you can only go up to about 65000 * 65000 to reach that maximum… and then a sign means just -32000 to 32000… and then fractional precision down to 1/256 gives 32000/256 as the max… not much… oh, and divide by 3 for a dp3, divide by 4 for a dp4…)
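
For example, a multiply in such a log2 format is just an integer add of the stored logs; a rough C sketch (the 16.16 layout and helper names are made up, zero is ignored, and addition still needs a conversion or a lookup table, which is the real cost of the format):

[code]
#include <stdint.h>
#include <math.h>

/* A made-up 32-bit "logarithmic float": the base-2 logarithm of the magnitude
 * stored as signed 16.16 fixed point, with the sign kept separately.         */
typedef struct { int32_t log2_16_16; int sign; } logfloat;

static logfloat lf_from_float(float x)
{
    logfloat r;
    r.sign = (x < 0.0f) ? -1 : 1;
    r.log2_16_16 = (int32_t)(log2f(fabsf(x)) * 65536.0f);   /* x == 0 not handled */
    return r;
}

static float lf_to_float(logfloat a)
{
    return (float)a.sign * exp2f((float)a.log2_16_16 / 65536.0f);
}

/* Multiplication (and division) is just adding (subtracting) the logs --
 * the appeal for a CPU or PDA with no floating-point unit.                   */
static logfloat lf_mul(logfloat a, logfloat b)
{
    logfloat r;
    r.sign = a.sign * b.sign;
    r.log2_16_16 = a.log2_16_16 + b.log2_16_16;
    return r;
}
[/code]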

(I know, nothing to do with EMBM… but anyway… I just talk a lot.)

By the way, the dino image (link a few posts above…) shows the power of EMBM well, I think… I mean, it looks great, doesn’t it?

But what idea is behind this image?
http://www.firetoads.com/gf3/feb25/Everglade_Rush_v10_022.jpg

Is this just EMBM, too?

I’m not sure why you are so heavily advocating real-time ray tracing and other ‘real’ ways of doing graphics in real time. What would you need it for? Certainly not video games. Graphics is only secondary in creating games, in my opinion.

Don’t get me wrong, though. Of course we should certainly continue to strive for wonderful graphics advancements, but new stuff will only become available when it makes sense business-wise.

funk.

This thread talks about cube map EMBM… In fact, we don’t need anything but simple colored triangles… But hey, did you like The Matrix? Why? You don’t need special effects… read a book and you get the same story…

We don’t even need a PC, and no GPU at all… but we have them, and here we talk about how good they are, and that’s my opinion… Ray tracing and that kind of stuff is much more correct than triangle rasterizer systems… Whether you need it or not is not the question; I’m just saying what is correct and what is faked…

Yes, but it doesn’t matter if you can’t do it in real time.

Graphics is always about approximating reality so that you can get the best quality possible given performance constraints.

If you want “correctness” you write a photon tracer; even a ray tracer doesn’t handle many effects very well. Good luck.

  • Matt