Just a curiosity question about rasterization

Why are we drawing to a 0,0 to 1,1 square exactly instead of a point?

Seems to me, if we were collapsing 3D coords to a point, we could overlay any type of shape over the point, or intercept the 3D data drawn toward that point with any sort of plane, etc., and use that to record data in a more flexible format.

No problems with the language or anything, just a curiosity that cropped up while dealing with rendering to buffers. The square buffer is very confining when we could be mapping to all sorts of fun objects like spheres; how cool would that be for recording light data, etc.? (Yes, I realize it’s easy to map sphere data to a rectangle, and I have done it; it just seems like things would be more malleable with a flexible raster stage.)

It would limit the need for perspective calculations from a 3D environment as well, and allow us to dictate where data is mapped in any sort of buffer in a more reliable fashion.

Is this because, ultimately, we’re rendering to a screen?

Why are we drawing to a 0,0 to 1,1 square exactly instead of a point?

When are “we” doing this, exactly? I don’t recall the part of the rendering pipeline that draws to squares. Are you talking about GL_POINTS rendering?

This sounds like a response to some other thread or discussion, but you haven’t provided any context for this discussion.

Well, the whole point of the vertex shader is to move vertex coords into the range 0.0 to 1.0 for the raster stage, correct? Into clip-space coords?

Clip-space is [-W,W], not [0,1]. It is a 4-dimensional homogeneous coordinate system. Even NDC space, after the division by the clip-space W, is [-1,1].
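
To make that concrete, here is a minimal vertex-shader sketch (the uniform name is just for illustration): the shader only has to land x, y, and z inside [-w, w], and the hardware does the divide that produces NDC in [-1, 1].

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;

uniform mat4 uMVP; // assumed name: combined model-view-projection matrix

void main()
{
    // gl_Position is a 4D homogeneous clip-space coordinate. Anything with
    // x, y, or z outside [-w, w] gets clipped; the hardware then divides by w,
    // producing normalized device coordinates in [-1, 1].
    gl_Position = uMVP * vec4(aPosition, 1.0);
}
```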

So what exactly are you talking about?

Sorry, my error; the default range is -W to W, which comes out to -1 to 1 in NDC for both the x and y axes.

Here’s what I was looking at. I was trying to transform 3D coords into polar coords laid over a 180-degree view, with the basic formulas

x' = cos(arctan(y/x)) / z
y' = sin(arctan(y/x)) / z

which sort of work until you factor in that default clipping clips against a square instead of a circle (with some jiggering you can still get it to clip correctly). I was trying to map 360 degrees as well, but that has huge clipping problems when you wrap past the 360-degree mark in relation to the clip-space coords.
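
In shader terms it was roughly something like this sketch (the uniform name is made up, depth handling is left out, and atan(y, x) is used to dodge the divide-by-zero in arctan(y/x)):

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;

uniform mat4 uModelView; // assumed name: transform into eye space

void main()
{
    vec3 p = vec3(uModelView * vec4(aPosition, 1.0));

    // Polar mapping from above:
    //   x' = cos(arctan(y/x)) / z,  y' = sin(arctan(y/x)) / z
    // (sign conventions for z and depth output are ignored here)
    float phi    = atan(p.y, p.x);
    vec2  mapped = vec2(cos(phi), sin(phi)) / p.z;

    // w = 1 so the fixed-function divide leaves the values alone.
    // Note the hardware still clips against the [-w, w] square,
    // not against the circle this mapping actually fills.
    gl_Position = vec4(mapped, 0.0, 1.0);
}
```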

The basic result is a fish-eye-lens rendering of 180 degrees of a scene into a circle overlaid onto a 2D texture. I was trying to figure out if it was possible to render onto a 1D texture instead, with the incremental value representing different areas on a sphere (for instance, coords 0-4 could be the pixel values at 45, 135, 225, and 350 degrees around the focal point, x radians away from the focal point).

I’m using a geosphere for my model, and, with everything being 3D, I just found it odd that we are always collapsing things to a [-w, w] rectangle instead of to a point with some type of data-reference model overlaid over it. If you look over light models, you see a lot of use of cube texture maps etc. because of this limitation, instead of, say, a geodesic (which is still just x degrees of view overlaid onto a 2D data model).

Cube maps are a good example where the lookup value is a vector instead of a 2D coord. However, the actual model is a cube because you can render each face easily to a 2D plane. Wouldn’t it be nice if you could just take a 3D scene and map it to a geodesic sphere instead of a plane? You wouldn’t even need clipping.
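
For reference, the lookup I mean is just a direction vector handed straight to the sampler, something like this fragment-shader sketch (names made up):

```glsl
#version 330 core

in vec3 vDirection;          // direction passed down from the vertex shader
out vec4 fragColor;

uniform samplerCube uEnvMap; // assumed sampler name

void main()
{
    // The key is a 3D direction, not a 2D coordinate; the hardware picks
    // the cube face and the texel within it.
    fragColor = texture(uEnvMap, normalize(vDirection));
}
```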

Then there’s the additional problem that the line from (1, 1) to (1, -1) would be represented as a curve in a polar coordinate system, yet the way primitive rendering works, it would just draw a straight line from (0.707, 0.707) to (0.707, -0.707). I was playing around with some math and noted that you could get the same result as the current rendering method’s square [w, -w] clip space by overlaying a rectangle for data reference (where a point is x, y, z) at a point between the 3D coords of the objects and the point of reference (0, 0, 0).

x' = cos(arctan(y/x)) / z
y' = sin(arctan(y/x)) / z

I’m not 100% sure, but I’m fairly certain that this transform does not result in lines remaining… lines. Even in homogeneous clip space, objects that are linear remain linear.
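
A quick check with the formula from the earlier post, taking the segment from (1, 1, 1) to (1, -1, 1) at z = 1:

```
(1,  1, 1)  ->  (cos 45°,  sin 45°)   ≈ (0.707,  0.707)
(1, -1, 1)  ->  (cos -45°, sin -45°)  ≈ (0.707, -0.707)
(1,  0, 1)  ->  (cos 0°,   sin 0°)    = (1.000,  0.000)    // true image of the segment's midpoint
linear interpolation of the two mapped endpoints = (0.707, 0.000)   // what the rasterizer draws
```

So the mapping bows the edge outward, but the rasterizer will still draw the straight chord between the two mapped endpoints.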

If your destination space has triangles whose interior angles don’t sum to 180 degrees, then we’re talking about a bent triangle. But it’s not possible to scan-convert a bent triangle (not without tessellation, and even then you’re just rendering smaller triangles), so I’m not sure how this would work.

I’m using a geosphere for my model, and, with everything being 3D, I just found it odd that we are always collapsing things to a [-w, w] rectangle instead of to a point with some type of data-reference model overlaid over it.

At its core, OpenGL is a triangle rasterizer. That’s all. It’s a very, very fancy one, but it’s still just drawing triangles. These triangles are provided in homogeneous coordinates, and they are transformed from a 4D coordinate system back into a linear 3D one by the internal hardware.

Furthermore, even if the hardware could rasterize to such a surface, you’d still need to specify a transform from geodesic space back to a space appropriate for doing depth comparisons, lighting (light-direction vectors are no longer straight in geodesic space), and producing an image. So I don’t see how this would help you.

If you look over light models, you see a lot of use of cube texture maps etc. because of this limitation, instead of, say, a geodesic.

What light models are you talking about?

Furthermore, how would this change anything? In general, cube maps used in lighting are used to specify the light coming from a specific direction. Cubemaps are used because they are far better than 2D textures for sampling directional values. They have no degeneracies the way all sphere-to-2D mappings do.
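
For example, the classic lat-long mapping from a direction to a 2D coordinate looks something like this (a sketch; the function name is made up), and its seam and poles are exactly the degeneracies in question:

```glsl
// Sketch: direction -> equirectangular (lat-long) texture coordinate.
vec2 dirToLatLong(vec3 d)
{
    d = normalize(d);
    float u = atan(d.z, d.x) / (2.0 * 3.14159265) + 0.5; // seam where the angle wraps
    float v = asin(d.y)      /  3.14159265        + 0.5; // the poles collapse to entire rows
    return vec2(u, v);
}

// A cube map needs none of this; the lookup is just texture(someCube, d).
```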

Whether you’re rendering a Euclidean triangle, or a triangle-on-a-sphere, you still need to answer the question, “how much light comes from direction X?” And no amount of geodesic rendering is going to make using a 2D texture a good choice for answering that question.

Browsing through the thread, I have the impression that Red_Riot should just render to a cube map (or a hemicube).
Then it depends on how the result is expected to be handled later.

Yeah Zbuffer, that’s sort of the conclusion I’m coming to. I’m working on dynamic lighting, and I’ve been trying to draw up models that work with the OpenGL coordinate system.

Ideally I’d like a model made of the equal triangles that compose a geodesic sphere. This would let me store data in one buffer, do one render of every object within the degrees of view I specify, and use a simple vector reference to any of that data on any part of the sphere.

Idealism and practicality are two different things, however. Working with various models, it always comes down to how clipping works and the [w, -w] clip-space constraint. I’ve been knocking out options and perspective calculations, and I’m pretty much down to a cube map as the only viable option for 360-degree dynamic lighting. I’ve played around with models ranging from 180 to 360 degrees of recorded view data mapped to a 2D buffer, and there’s always a critical catch.
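
If I do go the cube-map route, the lookup side should be simple enough; something like this fragment-shader sketch (names made up, assuming the cube map stores linear distance from the light):

```glsl
#version 330 core

in vec3 vWorldPos;                 // interpolated world-space position
out vec4 fragColor;

uniform vec3        uLightPos;     // assumed: light position in world space
uniform samplerCube uLightDistMap; // assumed: distance to nearest occluder, per direction

void main()
{
    vec3  toFrag  = vWorldPos - uLightPos;
    float nearest = texture(uLightDistMap, toFrag).r;
    float current = length(toFrag);

    // In shadow if something closer to the light was recorded in this direction.
    // 0.05 is an ad-hoc bias to fight acne; tune as needed.
    float lit = (current - 0.05 > nearest) ? 0.0 : 1.0;

    fragColor = vec4(vec3(lit), 1.0);
}
```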

Still looking at different options for wide-angle dynamic lighting.

[quote=Alfonse Reinheart]
At its core, OpenGL is a triangle rasterizer. That’s all. It’s a very, very fancy one, but it’s still just drawing triangles. These triangles are provided in homogeneous coordinates, and they are transformed from a 4D coordinate system back into a linear 3D one by the internal hardware.
[/quote]

When you reference 4D coordinates, you’re talking about quaternion mathematics, yes? Axis-angle rotation?

No quaternions here, just homogeneous coordinates: a 3D world needs four components, x, y, z, w.

http://en.wikipedia.org/wiki/Homogeneous_coordinates
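
For a concrete (made-up) example, take a clip-space vertex that survives clipping:

```
(x, y, z, w) = (0.5, -1.0, 1.5, 2.0)                 each of x, y, z is within [-w, w]
divide by w:   (x/w, y/w, z/w) = (0.25, -0.5, 0.75)  NDC, each within [-1, 1]
```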

Ah, that makes sense, thus the 4 coords for clip space, and 3 (obviously) in the fragment shader…
