I have a problem with OpenGL’s coordinate transformation (translation, rotation, scaling) I cannot solve myself.

I am maintaining an OpenGL port of an old 3D shooter game (Descent) and have implemented translucent shield spheres for the robots and players. Currently the entire shield lights up when it is hit. Now I want it to light up only around the hit point. So I wrote a little shader program that takes the hit point and uses the distance of each fragment's (interpolated) vertex position to the hit point to dim the corresponding pixel:

```
// vertex shader
varying vec3 vertPos;

void main ()
{
    gl_TexCoord [0] = gl_MultiTexCoord0;
    gl_Position = ftransform ();
    gl_FrontColor = gl_Color;
    // vertex position in eye space, passed on to the fragment shader
    vertPos = vec3 (gl_ModelViewMatrix * gl_Vertex);
}
```

```
// fragment shader
uniform sampler2D sphereTex;
uniform vec3 vHit;
uniform float fMaxDist;
varying vec3 vertPos;

void main ()
{
    // dim the pixel with its distance from the hit point
    float scale = 1.0 - clamp (length (vertPos - vHit) / fMaxDist, 0.0, 1.0);
    gl_FragColor = texture2D (sphereTex, gl_TexCoord [0].xy) * gl_Color * scale;
}
```

Now my program can do the coordinate transformation either in software or via OpenGL. When the sphere coordinates are transformed in software ahead of rendering (slow, with gl_ModelViewMatrix set to identity), everything works as intended — in that case the vertex shader uses `vertPos = vec3 (gl_Vertex)`, with no multiplication by gl_ModelViewMatrix.

When I have OpenGL transform the coordinates (fast) and render the shield sphere, the sphere looks all right: it is in the proper place and has the proper orientation (it is textured, so I can see that), but the vertex-to-hit-point comparison doesn't work. I should add that I transform the hit point in software before passing it to the shader, to avoid multiplying it by gl_ModelViewMatrix for every fragment (but even if I skip the software transformation and multiply by gl_ModelViewMatrix in the shader instead, it still doesn't work).
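For reference, this is roughly what the CPU-side transform of the hit point has to do so that it ends up in the same (eye) space as `vertPos`. This is only a sketch, and `transformPoint` is a hypothetical helper; the matrix is assumed to be the column-major array returned by `glGetFloatv (GL_MODELVIEW_MATRIX, m)`. If the CPU-side multiplication accidentally treats the matrix as row-major, `vHit` and `vertPos` land in different spaces, which would produce exactly the symptom described above.

```c
#include <math.h>

/* Transform a point (w = 1) by an OpenGL-style column-major 4x4 matrix,
 * i.e. element (row, col) is stored at m [col * 4 + row], as returned by
 * glGetFloatv (GL_MODELVIEW_MATRIX, m). */
static void transformPoint (const float m [16], const float p [3], float out [3])
{
    for (int row = 0; row < 3; row++)
        out [row] = m [0 * 4 + row] * p [0]
                  + m [1 * 4 + row] * p [1]
                  + m [2 * 4 + row] * p [2]
                  + m [3 * 4 + row];   /* w = 1 picks up the translation column */
}
```

With a pure translation matrix, for example, the point should simply be shifted by the values in the fourth column.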

I have no clue what I am doing wrong, since rendering the sphere with OpenGL transformation works fine — some misconception about gl_ModelViewMatrix on my part? What am I overlooking? Can someone please enlighten me?