Cubemap distortion

Hi Everyone,

I’m implementing a simple skybox system. The idea is to let the user specify either an image-based skybox or a procedural one (defined by an equator color and pole colors).

I thought it could be as simple as drawing a cube centered at the origin in camera space, oriented according to the camera’s orientation in world space, and using the world-space coordinates to sample a cubemap (or to interpolate between the pole and equator colors).

I thought there was no need for any kind of perspective, so I am providing camera-space coordinates to the GPU along with a rotation matrix.
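Roughly, the setup looks like this (a sketch in NumPy; the function and variable names are just illustrative, not my actual code):

```python
import numpy as np

def skybox_sample_dirs(cube_verts_cam, cam_to_world_rot):
    """Rotate camera-space cube vertices into world space to get the
    directions used to sample the cubemap (or to interpolate colors).
    cube_verts_cam: (N, 3) array of cube vertices in camera space.
    cam_to_world_rot: 3x3 rotation matrix (camera -> world)."""
    # Row vectors, so multiply by the transpose of the rotation matrix.
    return cube_verts_cam @ cam_to_world_rot.T
```

The magnitude of the result doesn’t matter for cubemap sampling, only the direction.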

The results I get are clearly not ideal: the skybox looks quite boxy, while ideally I wanted something more like a dome.

[Image: visualization of the interpolated y coordinate, sin(y*t).]

[Image: the same test with a cubemap.]

Is there something wrong with the method itself? Is there a way to sample a cubemap by drawing a cube and still achieve the illusion of a dome?

I’m not sure why you’d think that.

Forget rasterization and think ray tracing. You’re tracing rays from the eyepoint through the center of each screen pixel (on the near clip plane) into the environment. What does each ray “see” in the environment?

These rays are not parallel (that would be an orthographic projection); they’re distributed radially, rooted at the eyepoint (perspective).

If your cubemap rendering method (generation and application) doesn’t show the same position in the environment that the ray tracing method would observe, then the former is incorrect.
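The radial distribution is easy to see if you compute the per-pixel ray directions directly. A sketch, assuming a symmetric perspective frustum with GL-style conventions (-Z forward, pixel centers at half-integer coordinates); the names are illustrative:

```python
import math
import numpy as np

def pixel_ray_dir(px, py, width, height, fov_y_deg=60.0):
    """Camera-space direction of the ray through pixel center (px, py),
    for a symmetric perspective frustum with -Z as the view direction."""
    aspect = width / height
    half_h = math.tan(math.radians(fov_y_deg) / 2.0)  # half-height at z = -1
    # NDC coordinates of the pixel center, in [-1, 1]; y flipped so +y is up.
    ndc_x = (px + 0.5) / width * 2.0 - 1.0
    ndc_y = 1.0 - (py + 0.5) / height * 2.0
    d = np.array([ndc_x * half_h * aspect, ndc_y * half_h, -1.0])
    return d / np.linalg.norm(d)
```

The center pixel’s ray points straight down -Z, and rays through different pixels diverge, which is exactly why a flat cube face sampled with non-perspective directions looks boxy.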


Cubemaps in OpenGL and Vulkan have their own coordinate system, rarely mentioned in the samples and tutorials I’ve seen around the web, which differs from the world-space conventions of both APIs. See Appendix A of the KTX specification for a description.
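For reference, the face-selection rule works like this: the largest-magnitude component of the direction picks the face, and the remaining two components become s/t via a per-face table (table 8.19 in the GL spec; KTX Appendix A describes the resulting orientation). A sketch of that table in Python, not production code:

```python
def cubemap_face_st(rx, ry, rz):
    """Map a direction (rx, ry, rz) to (face, s, t) per the OpenGL
    cube map face-selection rules. Faces: '+X','-X','+Y','-Y','+Z','-Z'."""
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:
        face, sc, tc, ma = ('+X', -rz, -ry, ax) if rx > 0 else ('-X', rz, -ry, ax)
    elif ay >= az:
        face, sc, tc, ma = ('+Y', rx, rz, ay) if ry > 0 else ('-Y', rx, -rz, ay)
    else:
        face, sc, tc, ma = ('+Z', rx, -ry, az) if rz > 0 else ('-Z', -rx, -ry, az)
    # Map sc/tc from [-ma, ma] into [0, 1] face coordinates.
    s = 0.5 * (sc / ma + 1.0)
    t = 0.5 * (tc / ma + 1.0)
    return face, s, t
```

Note that t grows downward within each face (the -ry terms), which is one of the ways this coordinate system trips people up.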
