Shading in OpenGL

I’m also having problems with shaders. They make my Quake 3 level look like Mario Kart. Wthhhhhhhh.

If that was a request for help, I’ll comment that I’m nowhere near qualified to answer questions on shaders, especially since I’m having problems of my own.

Nonetheless, did you already create a topic for your problem?

Hahaha, we suck lol.
I’m just jumping around my level. I have some bumpy collision too lol.
Gonna work on importing a model for now.

I apologize. I completely missed your response! True, I should know how it works considering I’m attempting to implement them (I’ll admit, I was trying to avoid that).

The varying variables ecPosition and ScreenPos are actually used in a different fragment shader. They aren’t used in the current one, though, so they can safely be removed from this vertex shader.

I am not familiar with the refract function call; is there a GLSL API reference? I have read on another forum that the refract function is not well supported on ATI cards. Since I have an ATI card, it’s a good opportunity to find out.

Edit: I found the manual. I’ll try rewriting the code in my own terms.
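For reference, here is what the spec’s definition of refract comes out to in my own transcription (a sketch; I and N are assumed to already be normalized):

```glsl
// Hand-rolled equivalent of GLSL's built-in refract.
// I: incident direction, N: surface normal, both unit length.
// eta: ratio of the indices of refraction (outside / inside).
vec3 my_refract(vec3 I, vec3 N, float eta)
{
    float d = dot(N, I);
    float k = 1.0 - eta * eta * (1.0 - d * d);
    if (k < 0.0)
        return vec3(0.0);  // total internal reflection
    return eta * I - (eta * d + sqrt(k)) * N;
}
```

If the built-in misbehaves on a given card, swapping this in is a quick way to rule the driver in or out.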

Edit:

Vertex shader:


varying vec3 Pos;
varying vec3 N;

void main() {

   Pos = vec3(gl_Vertex);
   N = normalize(gl_NormalMatrix * gl_Normal);
   gl_TexCoord[0] = 0.6 * vec4(gl_Normal,1);
   gl_Position = ftransform();

}

Fragment shader:


varying vec3 Pos;
varying vec3 N;

uniform sampler2D Texture;
uniform samplerCube Environment;
uniform float refraction_index;
uniform vec3 camera;

void main() {

   vec3 nView = normalize(camera - Pos);
   vec3 nN = 0.5 * (N + 2.0*(texture2D(Texture, gl_TexCoord[0].xy).rgb - 0.5));
   vec3 refract_ray = refract(nView, nN, refraction_index);
   gl_FragColor = textureCube(Environment, refract_ray);
}

Result?

That can’t be right.

Oh, I see.

The incident vector points towards the object, and is thus position - camera. Secondly, the refract function takes eta: the ratio of the outside index of refraction to the inside one.
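In other words, the two fixes in my fragment shader come down to this (assuming refraction_index already holds the outside/inside ratio):

```glsl
// The incident ray travels from the camera towards the surface point,
// so it is Pos - camera, not camera - Pos.
vec3 nView = normalize(Pos - camera);
// eta = n_outside / n_inside, e.g. 1.0 / 1.5 going from air into glass.
vec3 refract_ray = refract(nView, nN, refraction_index);
```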

Works~

Now I just have to figure out:

  1. Why doesn’t the modelview matrix × vertex produce the view vector?
  2. Why does returning to the fixed pipeline not allow regular texture mapping?
  1. Why doesn’t the modelview matrix × vertex produce the view vector?

It does. However:

vec3 nView = normalize(camera - Pos);

The “camera” and “Pos” are in two different spaces. Pos is derived directly from gl_Vertex, which is in model space (the space positions are in before being multiplied by the modelview matrix). “camera” is in whatever space you put it in; I’m guessing that is not model space.
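One way to get them into the same space is to do everything in eye space, where the camera sits at the origin. A sketch of your vertex shader with that change:

```glsl
varying vec3 Pos;   // now an eye-space position
varying vec3 N;

void main()
{
    // gl_ModelViewMatrix maps model space -> eye space, so Pos now
    // lives in the same space as the camera (the origin in eye space).
    Pos = vec3(gl_ModelViewMatrix * gl_Vertex);
    N = normalize(gl_NormalMatrix * gl_Normal);
    gl_TexCoord[0] = 0.6 * vec4(gl_Normal, 1.0);
    gl_Position = ftransform();
}
```

The camera uniform then drops out of the fragment shader entirely: the incident vector is just normalize(Pos), since the ray runs from the origin to the surface point. (The refracted ray then comes out in eye space as well, so the cube-map lookup direction has to be interpreted accordingly.)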

Also, what’s with the multiplication of the normal by 0.6? What purpose does that serve?

Ironically, it worked out only because there were no transformations on the sphere. But, as you said, it isn’t correct: as soon as I move the sphere (or apply any transformation), the image stays the same.

I believed that one of the dependencies of the view vector is the camera position. In that case, as I rotate around the sphere, the image should change. It did not, which led me to suspect I was setting something up incorrectly. Since I could not figure out the “proper” setup, I took a shortcut. Was I wrong?

No idea. Took it out, and it’s still the same.