Can you do phong shading without a vp or fp?

I just learned what Phong shading means. So can you do it without a vp or fp? I wanna learn as much as I can with vanilla OpenGL before I jump into vertex or fragment programs. That way I’ll have a better understanding of the whole thing.

BTW, I finally got two lights to render a scene properly with DOT3 bumpmapping. This is why I ask about Phong: if I have a very large triangle, the shading doesn’t get calculated properly.

OpenGL by default uses Gouraud shading. The only way you can do Phong shading is by doing per-fragment lighting, with your interpolated vectors correctly normalized in the fragment pipeline.

The simplest form of per-fragment diffuse lighting requires two things.

An interpolated normal across the surface, and an interpolated surface-to-light vector across the surface.

If you set up two normalization cube maps, put your normals into the texture coordinates of the first unit and your surface-to-light vector into the second, then you just need the DOT3 extension to dot the normalized vectors coming out of the cube maps, and then modulate the result by your material color/base map in some way.
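Roughly like this, off the top of my head (untested; GL_COMBINE_ARB comes from GL_ARB_texture_env_combine, which the DOT3 extension is layered on, and normCubeMap is whatever handle your normalization cube map lives in):

/* unit 0: normalize the per-vertex normal passed in as a 3D texcoord */
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normCubeMap);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_TEXTURE);

/* unit 1: normalize the light vector and dot it with the previous stage */
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, normCubeMap);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB_ARB, GL_DOT3_RGB_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB_ARB, GL_PREVIOUS_ARB);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB_ARB, GL_TEXTURE);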

You’ll need to set up these vectors on the CPU without vertex programming, though…

So you just need
GL_ARB_texture_cube_map and GL_ARB_texture_env_dot3.

I think that’s all correct…

Nutty

You also need texture_env_crossbar, or some other such extension (like register combiners) that allows you to access more than just the current texture unit and the previous output, to be able to combine the output of normalize0 and normalize1 in the third stage. As you’re using register combiners, that doesn’t seem like a problem.

Oh, and you need more than two texture environments to be able to do this right; especially if you also want a diffuse modulate in there. Thus, the GeForce2 is out (although the Radeon might be convinced to do it).

Also, unless you write a vertex program (GF3 and up, Radeon 8500 and up) you will have to send in the vectors to interpolate using software transform; GL won’t do this for you (unless you can use TexGen in NORMAL_MAP and REFLECTION_MAP mode for some specific solution).

As soon as you want to involve skinned meshes, just give up on the fixed pipe and learn to write vertex programs – life is too short to do that well in software unless you’re getting paid for it :)

I have all the extensions set up for register combiners and cube maps.

An interpolated normal across the surface, and an interpolated surface-to-light vector across the surface.

That would be my normal map and tangent-space matrices, right?

I just started looking into cube maps. So far I understand that they’re used for semi-accurate, fast reflection simulation. So then what is a normalization cube map?

Do either of you have a demo using this technique? This is just my opinion, and no offense to NVidia, but their code samples are saturated with extra code that obscures the code you’re looking for. I like Nutty’s code. Everything you need to know is right there, in order, in the main loop.

Nutty, I did look at your DOT3 demo. Your sample code pointed me in the right direction. It even showed me how you combined the ambient, bump map, diffuse and specular terms. The steps make sense to me now.

jwatte, I only have a GeForce 2. I actually don’t want to upgrade just yet. I like developing on older hardware because it pushes me to do things efficiently, both the old way and the new way.

I REALLY want to learn vps and fps. One day. Every time I look at samples using vps and fps it becomes clearer and clearer.

You can’t really do anything tangent-space on two texture units, because you need three register combiners just to do a single tangent-space transform (it’s three dot3 operations).

However, just doing plain Phong doesn’t require tangent-space transforms. If Gouraud looks like this:

out1 =
    interpolate(
        diffuse( v1.N, v1.L ),
        diffuse( v2.N, v2.L ),
        diffuse( v3.N, v3.L ) ) +
    interpolate(
        specular( reflect( v1.N, v1.E ), v1.L ),
        specular( reflect( v2.N, v2.E ), v2.L ),
        specular( reflect( v3.N, v3.E ), v3.L ) );

then there are two “vector-based” interpolators, one of which is “Phong” (I forget which):

out2 =
    diffuse(
        interpolate( v1.N, v2.N, v3.N ),
        interpolate( v1.L, v2.L, v3.L ) ) +
    specular(
        interpolate(
            reflect( v1.N, v1.E ),
            reflect( v2.N, v2.E ),
            reflect( v3.N, v3.E ) ),
        interpolate( v1.L, v2.L, v3.L ) );

out3 =
    diffuse(
        interpolate( v1.N, v2.N, v3.N ),
        interpolate( v1.L, v2.L, v3.L ) ) +
    specular(
        reflect(
            interpolate( v1.N, v2.N, v3.N ),
            interpolate( v1.E, v2.E, v3.E ) ),
        interpolate( v1.L, v2.L, v3.L ) );

Assuming for a moment that you have an infinitely distant light, all you need to interpolate is N and reflect( N, E ). Each of those can be calculated in software and sent down as texture coordinates in the range (0,1). These, in turn, are looked up in a cube map which “normalizes” the value to a unit-length RGB triple in (-128,127) space. You can bind the same cube map on both texture units.
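Building the normalization cube map itself is just a loop per face. Roughly, for the +X face (my own untested helper; the other five faces differ only in how (s,t) maps to the direction, and each result gets uploaded with glTexImage2D on GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB and its siblings):

#include <math.h>

/* Fill the +X face of a normalization cube map: each texel stores the
 * normalized direction through that texel, range-compressed so that
 * 0..255 maps to -1..+1 (the "expand" convention the combiners undo). */
void fillPosXFace(unsigned char *rgb, int size)
{
    int s, t;
    for (t = 0; t < size; ++t) {
        for (s = 0; s < size; ++s) {
            float x = 1.0f;
            float y = -(2.0f * (t + 0.5f) / size - 1.0f);
            float z = -(2.0f * (s + 0.5f) / size - 1.0f);
            float len = (float)sqrt(x * x + y * y + z * z);
            unsigned char *p = rgb + (t * size + s) * 3;
            p[0] = (unsigned char)(255.0f * (0.5f * x / len + 0.5f));
            p[1] = (unsigned char)(255.0f * (0.5f * y / len + 0.5f));
            p[2] = (unsigned char)(255.0f * (0.5f * z / len + 0.5f));
        }
    }
}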

Then your output looks like:

out2 =
dot_clamp( TEX0, L ) +
raise( dot_clamp( TEX1, L ), power ) ;

If you’re using register combiners, and use the final combiner to raise the specular value, you can probably get an exponent of 8 (IIRC). Especially if you cheat on the “power” part and use biasing and clamping to add as much linear fall-off as you can :)
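For the diffuse half of out2, the combiner setup is short. A rough, untested sketch of just dot_clamp( TEX0, L ) in NV_register_combiners terms, with L supplied as constant color 0 (the specular raise in the final combiner is left out):

/* light direction stored range-compressed into [0,1]; GL_EXPAND_NORMAL_NV
 * maps it back to [-1,1] inside the combiner */
GLfloat Lbiased[4] = { 0.5f + 0.5f * 0.577f,
                       0.5f + 0.5f * 0.577f,
                       0.5f + 0.5f * 0.577f, 0.0f };

glEnable(GL_REGISTER_COMBINERS_NV);
glCombinerParameteriNV(GL_NUM_GENERAL_COMBINERS_NV, 1);
glCombinerParameterfvNV(GL_CONSTANT_COLOR0_NV, Lbiased);

/* combiner 0, RGB portion: spare0 = expand(tex0) . expand(const0) */
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_A_NV,
                  GL_TEXTURE0_ARB, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerInputNV(GL_COMBINER0_NV, GL_RGB, GL_VARIABLE_B_NV,
                  GL_CONSTANT_COLOR0_NV, GL_EXPAND_NORMAL_NV, GL_RGB);
glCombinerOutputNV(GL_COMBINER0_NV, GL_RGB,
                   GL_SPARE0_NV, GL_DISCARD_NV, GL_DISCARD_NV,
                   GL_NONE, GL_NONE, GL_TRUE, GL_FALSE, GL_FALSE);

/* final combiner: pass spare0 through (A*B with B forced to one) */
glFinalCombinerInputNV(GL_VARIABLE_A_NV, GL_SPARE0_NV,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_B_NV, GL_ZERO,
                       GL_UNSIGNED_INVERT_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_C_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);
glFinalCombinerInputNV(GL_VARIABLE_D_NV, GL_ZERO,
                       GL_UNSIGNED_IDENTITY_NV, GL_RGB);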

Hmm, come to think of it, you can probably use NORMAL_MAP and REFLECTION_MAP texgen to actually give you the right texture coordinates, without having to calculate them in software. You’d still need the normalization, but this is a huge improvement in speed :)
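Something like this ought to do it (a sketch, untested; both enums come from GL_ARB_texture_cube_map, and the generated vectors come out in eye space):

/* unit 0: eye-space normal as the texcoord (feeds the TEX0 = N lookup) */
glActiveTextureARB(GL_TEXTURE0_ARB);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP_ARB);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP_ARB);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_NORMAL_MAP_ARB);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

/* unit 1: reflect( N, E ) as the texcoord (feeds the TEX1 lookup) */
glActiveTextureARB(GL_TEXTURE1_ARB);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);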

If you want to get a diffuse color map in there, you need to do it in two passes; one to add in diffuse * colormap, and one to add in specular. At that point, you can raise specular to a higher power using the freed-up combiner, and/or use a gloss map (yay!). GF2 doesn’t allow dependent texture reads, so you can’t use a look-up texture for the specular function, though. (You can do awesome anisotropic things with that. :)

Also note that I prefer to view specular as expressed through reflection (because that’s what it is) rather than as expressed with half-angle, but they are equivalent (modulo some n2 power of what you get out of the first dot product, I think). You can re-write in terms of half-angle just fine. I forget whether doing so GAINS you power, or LOSES power, compared to reflection.

Here’s a question, the answer to which I’m too busy to look up right now: which of out2 and out3 is “Phong” shading? And then, what’s the other one called? Or are they actually interchangeable?

Oh, and you need more than two texture environments to be able to do this right; especially if you also want a diffuse modulate in there. Thus, the GeForce2 is out (although the Radeon might be convinced to do it).

Why can’t he just multi-pass the base map into the lighting values stored in the frame buffer?

Obviously specular needs to be done afterwards, so it equates to 3 passes for diffuse, then base, then specular.
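Roughly like this (a sketch; the draw calls are placeholders for rendering the same geometry with the relevant textures and state bound):

glDepthFunc(GL_LEQUAL);            /* later passes hit the same depth values */

/* pass 1: write the diffuse lighting term */
glDisable(GL_BLEND);
drawGeometryDiffuseLighting();     /* hypothetical helper */

/* pass 2: framebuffer *= base map */
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ZERO);
drawGeometryBaseMap();             /* hypothetical helper */

/* pass 3: framebuffer += specular */
glBlendFunc(GL_ONE, GL_ONE);
drawGeometrySpecular();            /* hypothetical helper */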

Is that sample in Cg, jwatte? I wasn’t aware that you could interpolate anything other than RGB values. That method would seem to be slower than using a normal map…or am I misunderstanding the program?

I still want to do this in software…and the cube map normalization still makes no sense to me.

I still feel like I’m jumping too big of a gap to get where I want to go.

Here’s a list of things that I don’t know, to give you an idea as to where to start teaching me:

  1. Vertex Programs of any language
  2. Fragment Programs of any language
  3. Register Combiners
  4. Generating specular highlights without washing out the whole texture
  5. I know how to set up a cubemap but have not used one yet
  6. Shadow volumes
  7. Stencil
  8. Edges? Or silhouettes?
  9. BSP trees
  10. Beam trees
  11. Something about occlusion

Things I do know:

  1. Half Vector
  2. Light point and how it calculates color intensity
  3. Light point and view point and how to generate a half vector to use for specularity
  4. What modulation means
  5. How glBlendFunc works (thx Nutty)
  6. Multipass rendering
  7. Multitexturing
  8. Starting to get the hang of glTexEnv
  9. How to use glTexGen

Learning this stuff out of order is really hard for me. Come to my house and teach me so I can tell you what I know and what I don’t. You might be surprised at what I don’t know…I know I surprised Nutty once.

[This message has been edited by WhatEver (edited 11-16-2002).]

Get yerself to opengl.nutty.org!
The simplest vertex programming demos on the web, I guarantee! :P

Nutty

I have looked into some of your demos…but they frighten me. They’re still so beyond me.

Actually, when I’m ready, I’ll be reading the whole section on the topic of shaders. I mostly learn by reading others’ questions and responses…I rarely ask those questions myself. To be honest, I check your site on a daily basis, Nutty, I just don’t post much myself yet. I don’t want to expose how dense I am yet…er, or have I already done that here.

Can you do phong shading without a vp or fp?

For rigid objects, yes. If your polygons stretch, skew, or otherwise alter the normals, then no. You can do this by using bump mapping: pre-compute the per-pixel (more like per-texel) normal interpolation across each polygon and store the results in a texture. However, you also need to decide how detailed (what size) your new texture will be.

JONSKI, I alter my mesh in software, and only the joints that are altered are updated with the vertex, tangent-space and normal info. So by the time OpenGL receives the mesh, it is a rigid body.

By rigid body I mean something like a teapot or sphere. Ideally, it has round edges but no movement or deformation. If any of the vertices move, then its normal and the pre-computed normal map also need to be recomputed. In this case, you might as well use the vp and fp in your graphics hardware, because it handles the job more efficiently than the CPU and main memory can. The advantage of pre-computing a smooth normal map is that you don’t need to do it every frame, so it saves you some graphics processing at the cost of a little texture memory and accuracy. There is a trade-off between accuracy and texture size, so if your rigid object is large and you need to get close to it, then to preserve accuracy you should make the smooth normal map larger than you would if the object were small and farther away.

Now, this is just a THEORY of mine. Someone may very well chime in and say, “This is not possible because…” I plan on putting my code where my mouth is, but right now I’m swamped with work. Does anyone else think this method of normal smoothing is possible?
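Something like this is what I have in mind, for a single triangle whose vertices map to the texture corners (untested; a real mesh would need a UV atlas and per-triangle rasterization in UV space):

#include <math.h>

/* Bake smoothly interpolated normals for one triangle into an RGB texture,
 * assuming its vertices map to (0,0), (1,0) and (0,1) in texture space.
 * Normals are packed the usual way: 0..255 maps to -1..+1. */
void bakeTriangleNormals(const float n0[3], const float n1[3],
                         const float n2[3], unsigned char *rgb, int size)
{
    int s, t, i;
    for (t = 0; t < size; ++t) {
        for (s = 0; s < size; ++s) {
            float u = (s + 0.5f) / size;
            float v = (t + 0.5f) / size;
            float n[3], len;
            unsigned char *p = rgb + (t * size + s) * 3;
            if (u + v > 1.0f)
                continue;                    /* texel lies outside the triangle */
            for (i = 0; i < 3; ++i)
                n[i] = (1.0f - u - v) * n0[i] + u * n1[i] + v * n2[i];
            len = (float)sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
            for (i = 0; i < 3; ++i)
                p[i] = (unsigned char)(255.0f * (0.5f * n[i] / len + 0.5f));
        }
    }
}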

-Jon

[This message has been edited by JONSKI (edited 11-17-2002).]

Is that sample in Cg, jwatte?

No, it’s pseudo-code, describing the various approaches to rasterizing a triangle. The interpolation actually happens in hardware, because the hardware will interpolate texture coordinates and RGB values between vertices.

The idea behind sending in a vector as a texture coordinate (3D) is that you can use that texture coordinate as a look-up into a normalization cube map, which then gives you back an RGB triple suitable for register combiners, or whatever. Also, if your interpolation source needs are suitably trivial, you can make glTexGen() do the initial coordinate generation for you. The trick is just to ignore the fact that the spec calls them “texture coordinates” and just view them as numbers that get interpolated for you :)
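In immediate-mode terms, feeding the vectors down looks something like this (a sketch; normal, vertex and lightPos are whatever your mesh and light store, and the normalization cube map bound on each unit re-normalizes the vector per fragment):

int i;
glBegin(GL_TRIANGLES);
for (i = 0; i < numVerts; ++i) {
    /* unit 0: per-vertex normal */
    glMultiTexCoord3fARB(GL_TEXTURE0_ARB,
                         normal[i][0], normal[i][1], normal[i][2]);
    /* unit 1: unnormalized surface-to-light vector */
    glMultiTexCoord3fARB(GL_TEXTURE1_ARB,
                         lightPos[0] - vertex[i][0],
                         lightPos[1] - vertex[i][1],
                         lightPos[2] - vertex[i][2]);
    glVertex3fv(vertex[i]);
}
glEnd();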

Towards the end, I suggest some set-ups that may use fixed-function hardware to get this going; assuming you can fully utilize the register combiners, it seems that you could do this in two passes on a GF2: diffuse * base; specular * gloss.
Might be hard to get the ambient in there in front of the base modulate, though…

Thanks for all your help guys. I’m delving into Cg so I can get a feel for how OpenGL processes vertex and fragment data. Maybe then I will understand register combiners. For now, it’s still too advanced for me :/.