> I’ve been dealing with normalised coordinates from the get-go. Either way, the maths wasn’t what I needed help with; you’ve helped me understand a bit better how to initialise the things I need to test my maths. As far as matrices go, I’ll give them a try AFTER I get my maths working, since I’m certain it can be translated to GPU code — for example the calculation of the view cone
If you abandon the ordinary OpenGL concepts, no one has a clue what “the view cone” is.
The one I can imagine emanates from my eyes along a direction toward a near plane perpendicular to that direction … and continues on to the parallel far plane. A 3D thing that needs a coordinate system with understandable coordinates to be expressed in.
It’s usual to refer to “the world” as the dimensioned space you intend to act and draw (models) in. The world is your playground where you’re free to fantasize, but sooner or later you’ll need to fix your view cone in this space. Do you think that you are free to ‘fantasize’ in NDC space? How will you describe that world to me, in an understandable way, if not in reference to a normal coordinate system? It will be difficult for you to figure out simple geometry to draw as models if you have to express them in NDC.
> can be done in software mode
You mean on the CPU side, with GPU code being the OpenGL-specific code?
> (a cheap operation, I expect, given there needs to be code handling the “player” anyway) then the values can be passed on to the shaders, and the shaders then use those variables to decide if it’s worth drawing the object in the first place.
This sounds like clipping. OpenGL is to some degree automated to deal with this.
If you have screen values such as

int sc_width;
int sc_height;

the plain conversion of a coordinate

vec2 my_coords;

to NDC is

vec2 ndc_coords = vec2(my_coords.x/sc_width, my_coords.y/sc_height);
If you want your 2D world to have other dimensions than screen width/height, you factor in a scale. This is ideally what the viewport transformation does (specialists may object, I’m not one). The third coordinate, .z, follows an axis perpendicular to your screen, positive toward you, if you want 3D.
Now you can work in ordinary space, and, as the viewport transformation does automatically, you’ll need to normalize before sending coords to the shader.
There is a ‘default’ camera sitting at (0,0,0) looking in the -z direction. This may be why you only manage to see the ‘upper right’ part of your geometry in the lower left part of your screen.
This suggests that you need to do a further transform.
vec2 ndc_coords = vec2( 2.0*(my_coords.x/sc_width) - 1.0 , 2.0*(my_coords.y/sc_height) - 1.0 );

This maps [0, sc_width] × [0, sc_height] onto the full [-1, 1] NDC range, centred on the camera.
You draw here on the xy-plane at z = 0. You may have to set z = -1.0 in your coords to prevent automatic clipping. This is an orthographic projection. Layering could be achieved by drawing the top last.
I’m not going to follow you further than here, unless you involve the proper viewport transformation, use sensible coordinates, and have simple geometry appear on screen where expected.