I have a textured sphere centered at the middle of the viewport. It is always centered there, but it can be rotated, and I can zoom in by changing the fov. Next to it, I have the same texture painted on a flat (2D) quad. What I am trying to figure out is: no matter what rotation or zoom level the sphere is at, how can I highlight, on the 2D quad, the part of the texture that is visible on the sphere?
Sometimes the whole sphere is shown and sometimes not. The middle parts of the sphere aren’t too bad, but the poles are what’s giving me trouble.
This may be a little confusing, so I attached a picture of what I’m trying to do. Say the red box on the sphere is the viewport, so only part of the sphere is being seen; the rect on the left is the texture, and the red box on that rect highlights what is visible in the sphere’s viewport.
Thanks for the reply, but I’m not really having trouble texturing the sphere. What I’m looking for is how to determine what part of the texture is visible, given the rotation and zoom level of the sphere.
Why use a projection change (cf. “zoom in by changing the fov”) instead of model transformations like glRotate, glTranslate, or glScale?
Are you sure that the projection of the visible part of the texture onto the sphere is a rectangle? (I’ve never tested or verified this, but I think it is more of an oval shape than a rectangle. Then again, perhaps it really is a rectangle.)
I use the fov to zoom because when I used glTranslate or glScale and went in too close, I ended up inside the sphere, which I didn’t want.
The visible part of the texture is definitely not a rectangle; I just threw that image together quickly in MS Paint. I’d just like to highlight a general area of what is visible, so a rectangle would do in regions that don’t include either pole.
If you don’t want to go inside the sphere, you can add a constraint so that the viewpoint can never get too close to it.
Perhaps using/mixing the inverses of the texture, modelview, projection, and viewport matrices could help?
Or store each fragment’s (u,v) value into a pbuffer and read it back afterwards to find the min/max (u,v) values (i.e. the corners of the part of the texture that is used)?
Better: directly compare/store the min/max texture coordinates (u0,v0)/(u1,v1) against the current fragment’s (u,v) in the fragment shader, and convert them at the end into the (x,y) position and (width,height) size of the highlight rectangle.
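That final conversion step could look like the following sketch (my own hypothetical helper, assuming normalized (u,v) in [0,1] and a texture quad drawn with v=0 at the bottom of the image, as in OpenGL):

```c
/* Convert a min/max texture-coordinate box (u0,v0)-(u1,v1) into a
   pixel rectangle on a quad of tex_w x tex_h pixels. */
static void uv_box_to_pixels(double u0, double v0, double u1, double v1,
                             int tex_w, int tex_h,
                             int *x, int *y, int *w, int *h)
{
    *x = (int)(u0 * tex_w);
    *y = (int)(v0 * tex_h);
    *w = (int)((u1 - u0) * tex_w);
    *h = (int)((v1 - v0) * tex_h);
}
```

For example, a (0.25, 0.25)–(0.75, 0.5) box on a 400x200 texture becomes the rectangle at (100, 50) with size 200x50.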
That was my strategy at first, if I’m reading your post correctly. The poles, though, present problems with this technique, as does the seam where the left and right edges of the texture meet on the sphere.