I am kind of wondering how this works in OpenGL.
I am creating a terrain. For every step in my height map, I scale the X, Y, Z vertex when I draw it: by 1024 in X and Z, and by 256 in Y. Yet I seem to get the same results when I scale X and Z by 16 and Y by 4. Shouldn't the big polys be huge on the screen? It seems like OpenGL automatically scales everything to the same size so I can see it. Either that, or I am doing something wrong. Is there a better way to scale the triangles? I am drawing in triangle strips: I run through a loop over my height map and, for each vertex, multiply it by the scale and then draw it.
OpenGL does not change your vertices in any way beyond what you have specified in your matrix setup, so I would guess you are doing something wrong. If you could post a code snippet of the part where it goes wrong, it would be easier to help you out.
I'm not definite on this, but I think what you are seeing is that it is the relative proportions of your values, not their absolute magnitudes, that determine how the scene looks on screen.
You initially mentioned 1024 and 256, and 1024 = 256 * 4. Then you mentioned 16 and 4, and 16 = 4 * 4. In both cases the scales have the same 4:1 ratio, even though the absolute values are very different.
From what I have seen of OpenGL, in a perspective (3D) view the apparent size of an object is not necessarily constant. My wording may not be exact, so an example may explain it better.
E.g. if you had a single cube drawn in a scene with its borders set at -1 to +1 in height, width and depth, it would fill the whole view. Now if you adjust the program so the size is -5 to +5 instead of -1 to +1, and the viewpoint scales along with it, the visual result will be the same. Only changing how the object is displayed relative to the camera (I think) will affect what you see. This is from memory, so best to try it yourself to be sure.
If, instead of changing 1024/256 to 16/4, you changed 1024/256 to 256/4, you would get a different effect, because now your ratio is not 4:1 but 64:1.
Hope that helps