Not sure whether to tag this basic or advanced, but I am wondering if I am missing a fundamental part of cube mapping OR if I have something subtly wrong in my code.
First: I have cube mapping working and it all looks great. It was actually really easy to implement, and the results look great with a typical environment map.
Here is the crux of my confusion, or else I have something subtly wrong: the cube map faces come out mirror-imaged in the final render. Meaning, if my negative-Z image contains the text 'negz', the output reads (I can't type it exactly) 'zgen' with each letter backwards. NegX is on my left, posX on my right, etc., so the textures are on the correct faces. It appears as if we are looking out from the origin, from the center of the cube, through the plane that carries the image. That also seems PERFECTLY logical; of course it works that way. And if this is how it's supposed to work, then cool, all is well. With maps of outdoor scenes, or space, or anything with no 'directionality', it looks great.

OK, so what if I have a cube map of an interior (just making this up), say a bar, with a sign on it with writing, address numbers, etc.? It would all be mirrored. Additionally, things that are supposed to be on the left are on the right, and so on.
So, is this just how it works? Do I correct for this in the images themselves: reverse everything horizontally and line things up by putting the correct, mirrored image on the correct face?
I would guess there is really no 'one way to do this'. If it lines up for the intended purpose then all is well, right? This is more of an 'is this really how it's supposed to be?' question. Do folks just compensate in their base images, OR do I have something subtly wrong somewhere, a projection mix-up, etc.? If I negate the X in the vertex shader, things become readable, BUT the +X and -X textures swap. Again, no big deal, just swap the textures. So: am I missing something, do I have something wrong, or am I just hitting a bump in the way cube mapping works, and that's just how it is?
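For reference, the mirrored look falls directly out of the face-selection math the OpenGL spec defines for cube-map lookups (section 8.13 in the GL 4.x spec). Here is a minimal Python sketch of just the -Z row of the spec's table, showing that a ray toward the viewer's right (facing -Z, +X is to the right in a right-handed system) lands on the left half of the negz image, which is exactly the mirroring described above:

```python
# Sketch of OpenGL's cube-map lookup (spec table 8.19), -Z face only.
# Returns conventional (s, t) texture coordinates in [0, 1].

def cube_lookup_neg_z(rx, ry, rz):
    """Map a direction whose major axis is -Z onto the -Z face."""
    assert rz < 0 and abs(rz) >= abs(rx) and abs(rz) >= abs(ry)
    ma = abs(rz)
    sc, tc = -rx, -ry              # per the spec's -Z row
    s = 0.5 * (sc / ma + 1.0)
    t = 0.5 * (tc / ma + 1.0)
    return s, t

# Looking straight down -Z hits the face center:
print(cube_lookup_neg_z(0.0, 0.0, -1.0))   # (0.5, 0.5)

# A ray toward the viewer's RIGHT (+X when facing -Z, right-handed)
# lands on the LEFT half of the image (s < 0.5): the face reads mirrored.
print(cube_lookup_neg_z(0.5, 0.0, -1.0))   # (0.25, 0.5)
```

So a face image authored the usual way (left-to-right) does read backwards when viewed from inside the cube; whether you compensate in the images or accept it depends on the content.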
Question: does no one have any thoughts? Is it not clear what I am asking? Maybe no one knows? I typically ask when I am at an impasse or need a deep retrospective. I've asked one other question in this forum to the same silence. I know I may be asking hard/obscure stuff, but I figured there is someone with the insight I am lacking. I hope not to get banned for this bump; just trying to gauge how I can use this forum for help. I've got one more question in the pipeline but am not sure if I should ask it.
The orientation of cube map faces isn’t particularly logical, but you’re stuck with it (FWIW, it’s copied from RenderMan).
Note that the faces have the correct “sense” for a right-handed coordinate system; they’re rotated but not flipped. They might seem flipped if you’re using an image format where the first row of image data is the top row; OpenGL conventionally has positive Y going up, not down.
You probably want to construct the image files so that you can pass a pointer to the loaded data straight to glTexImage2D without needing to rearrange it. The coordinate system should be whatever is convenient for the application (e.g. you can have Z vertical with X-Y horizontal, or Y vertical with X-Z horizontal); whatever lets you use spatial vectors as 3D texture coordinates without additional transformations.
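If the faces look vertically flipped because the file format stores the top row first, one option is to reverse the rows once at load time instead of patching the shader. A minimal sketch, assuming tightly packed RGB8 data (the function name and layout are illustrative, not from any particular loader):

```python
def flip_rows(pixels: bytes, width: int, height: int, bpp: int = 3) -> bytes:
    """Reverse the row order of tightly packed pixel data
    (e.g. top-row-first file data -> bottom-row-first GL convention)."""
    stride = width * bpp
    rows = [pixels[i * stride:(i + 1) * stride] for i in range(height)]
    return b"".join(reversed(rows))

# 2x2 RGB image: row 0 = red, green; row 1 = blue, white.
img = bytes([255, 0, 0,  0, 255, 0,
             0, 0, 255,  255, 255, 255])
flipped = flip_rows(img, 2, 2)
# The blue/white row now comes first; the buffer can be handed to
# glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, ...) as-is.
```

Better still, as noted above, is to bake the desired orientation into the image files themselves so no per-load shuffling is needed.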
Thanks. Yeah, I'd read that before, and I expected this was the answer. I'll go with: the cube map is projected onto the outside of the cube, the viewer is inside, adjust the images as needed.
The other question I was going to post I happened to find answered in the post below. It has to do with the discard operation: helper invocations and calls to dFdx / dFdy can be left in odd states. Moving the discard conditional down a few instructions in the suspect shader, after the derivative calls, and the artifacts from discard are gone. Another really odd (lower-level) problem I seem to run across. Just wanted to post here in case anyone else is seeing artifacts on the edges of an alpha transition while using a discard operation. Totally off topic from the original post, but another really odd behavior.
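For anyone hitting the same artifacts: derivatives (implicit mip selection as well as explicit dFdx/dFdy) are undefined in non-uniform control flow, so sampling or taking derivatives after a conditional discard can misbehave at alpha edges. The reordering described above looks roughly like this (a hypothetical fragment-shader fragment; the variable names are made up):

```
// Problematic order: derivatives taken after a non-uniform discard.
//   if (texel.a < 0.5) discard;
//   vec2 dx = dFdx(uv);   // quad neighbors may already have discarded,
//   vec2 dy = dFdy(uv);   // so these derivatives are unreliable

// Safer order: take the derivatives first, discard afterwards.
vec4 texel = texture(baseMap, uv);
vec2 dx = dFdx(uv);
vec2 dy = dFdy(uv);
if (texel.a < 0.5)
    discard;
// ... use dx/dy (e.g. with textureGrad) from here on ...
```

This is a sketch of the workaround, not a guaranteed fix on every driver; the robust pattern is simply to hoist all derivative-dependent work above any discard.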