Impossible to load a model (Windows 10 64-bit, NVIDIA GeForce GTX 1650)

Hi

Using OpenGL I am trying to load a 3D model (glTF) generated in Blender. Sadly, I don't see anything on screen despite trying several cameras, but with the same code I am able to visualize another 3D model (the Stanford bunny).
How do you usually troubleshoot that kind of issue?
I can easily access the matrices (rotation, translation, scale, model) and also the vertices.
Can you help?

Best regards

Also, can you help me understand the "real" coordinates, I mean the ones you get once you've applied the cameraMatrix multiplication? Let's say I loaded a model and printed these coordinates, then moved the camera back and printed the new coordinates. I expected x and y to shrink, but what happened is that x and y weren't modified; only z increased. Why is that? FYI, I am new to OpenGL but not to programming.

Best regards

Using the name "cameraMatrix" can cause confusion (it could be the matrix you use to move a camera object in the scene). I take it you mean the model-view matrix.
The OpenGL coordinate system has its origin in the lower-left corner, with y pointing up, x pointing right, and z positive into the screen. So what you experience seems to conform properly, except for the z direction … depending on how you visualize it. If z increases, the point has been moved forward, into the screen.
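
For what it's worth, here is a minimal sketch with GLM (camera and projection values made up for illustration) that reproduces the observation from the first post: moving the camera straight back leaves clip-space x and y untouched and only grows z and w; the shrink only appears once you divide by w.

```cpp
// Minimal sketch with GLM (camera/projection values made up for illustration).
// Moving the camera straight back leaves clip-space x and y untouched and only
// grows z and w; x and y only shrink after the division by w (the NDC step).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <cstdio>

int main()
{
    glm::mat4 proj = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
    glm::vec4 p(1.0f, 1.0f, 0.0f, 1.0f);    // a world-space vertex off the view axis

    float cameraZ[] = {3.0f, 6.0f};          // camera close, then moved back
    for (float camZ : cameraZ) {
        glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, camZ),
                                     glm::vec3(0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
        glm::vec4 clip = proj * view * p;             // what the vertex shader outputs
        glm::vec3 ndc  = glm::vec3(clip) / clip.w;    // normalized device coordinates
        std::printf("camZ=%.0f  clip=(%.2f, %.2f, %.2f, %.2f)  ndc=(%.2f, %.2f, %.2f)\n",
                    camZ, clip.x, clip.y, clip.z, clip.w, ndc.x, ndc.y, ndc.z);
    }
    return 0;
}
```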

Hi

Thanks for the answer! The camera matrix I am talking about is the view multiplied by the projection. The idea behind the reasoning is: let's perform the same calculations as the vertex shader and see what I get, to try to understand why the final product doesn't appear. So I did view * projection * translation * (-rotation) * scale * matrixModel (copied and pasted exactly from the vertex shader code), expecting to map that to the flat 2D screen coordinates. If I get it right, that multiplication gives us the OpenGL coordinate system you just described?
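
As a rough sketch of that idea (hypothetical names, not the actual code, and written with the projection on the left, which the next replies come back to), the CPU-side replication with GLM could look like this:

```cpp
#include <glm/glm.hpp>

// Hypothetical helper: redo the vertex-shader math on the CPU so the
// intermediate values can be printed. Matrix names are illustrative only.
glm::vec3 toNdc(const glm::mat4& projection, const glm::mat4& view,
                const glm::mat4& model, const glm::vec3& vertexPosition)
{
    glm::vec4 clip = projection * view * model * glm::vec4(vertexPosition, 1.0f); // what gl_Position receives
    return glm::vec3(clip) / clip.w;   // NDC: each component in [-1, 1] when the vertex is visible
}
```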

Best regards

In the meantime I've seen this explanation: [image: mat]
Probably I am not multiplying by the projection matrix, or it's the identity matrix, or something like that; I'll look into that. I am new to OpenGL by the way, I'm still trying to figure things out :wink:

best regards

So I have computed the normalized coordinates (division by w to get screen coordinates) and found out x was out of range (~ -2.5), so I modified the camera and now x is in range, but still nothing appears. So I guess the next step is to compute the output of the fragment shader, unless somebody has a better idea.
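
One way to make that check systematic (a hypothetical helper, not the thread's code) is to count how many vertices land inside the clip volume for the current matrices; zero suggests a camera placement or scale problem, non-zero points more towards the draw call or shaders:

```cpp
#include <glm/glm.hpp>
#include <vector>
#include <cstdio>

// Hypothetical debugging helper: count how many vertices end up inside the
// clip volume for a given MVP matrix.
void countVisible(const std::vector<glm::vec3>& positions, const glm::mat4& mvp)
{
    std::size_t inside = 0;
    for (const glm::vec3& p : positions) {
        glm::vec4 c = mvp * glm::vec4(p, 1.0f);
        // A vertex is inside the frustum when w > 0 and -w <= x, y, z <= w.
        if (c.w > 0.0f &&
            -c.w <= c.x && c.x <= c.w &&
            -c.w <= c.y && c.y <= c.w &&
            -c.w <= c.z && c.z <= c.w)
            ++inside;
    }
    std::printf("%zu of %zu vertices inside the view frustum\n", inside, positions.size());
}
```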

Best regards

hi mz,
The order is not what I would expect (projection * view * model). Making the view matrix is a bit of a hassle, as there are two ways of approaching the setup: either you work directly in view coordinates(?) or you work in world coordinates … I can't say that I 'get it', but I've got my setup working in world coordinates. This is where the camera matrix has position and direction in world coordinates, and the view-model matrix 'uses' this matrix (its position and direction) as input.
The standard answer to you is to spend time on tutorials (google "learn opengl"; there are very worthy ones). Be aware of new vs. old OpenGL, and of the profile and version of OpenGL you use.
I have not really managed to make my setup conform to what I would expect from those resources, so I'll leave out my account of it. I use the GLM math library for the matrix stuff.
If I recall right, I built a dynamic camera in world/screen coordinates. That way I could look around for the model if it did not present itself where I would expect …
Maybe you can change a single parameter to test whether the model is behind the camera?
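
As an illustration of the world-coordinate setup described above (names and values are made up), with GLM the view and projection can be built like this:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Sketch of a world-coordinate camera setup with GLM (names and values made up).
// The view matrix is just the inverse of the camera's placement in the world;
// glm::lookAt builds it directly from position / target / up.
glm::mat4 makeViewProjection(const glm::vec3& cameraPos, const glm::vec3& target, float aspect)
{
    glm::mat4 view       = glm::lookAt(cameraPos, target, glm::vec3(0.0f, 1.0f, 0.0f));
    glm::mat4 projection = glm::perspective(glm::radians(60.0f), aspect, 0.1f, 1000.0f);
    return projection * view;   // projection stays left-most; the model matrix goes on the right
}
```

Changing cameraPos or the target one parameter at a time is then an easy way to check whether the model is simply behind the camera.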

The projection should always be the left-most transformation.

Yes, you are right. It's a copy-and-paste mishap; it's correct in the code. For the moment I am trying to make sense of the finished product (the result of the multiplication of all the matrices, i.e. what the vertex shader should output) by feeding it to OpenCV, which I am a little more familiar with, and seeing what happens. To recap the initial problem: with some code I retrieved from a very good tutorial I am able to load the Stanford bunny, but not another glTF generated by Blender, despite having moved the camera and looked everywhere. To take the fragment shader out of the equation I hard-coded it to always output red, and still nothing appears, but with that setting the bunny does appear in red. I'll tell you how it goes.
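
For reference, a fragment shader hard-coded to output red is only a few lines; a sketch, assuming a GLSL 3.30 core context (adjust the version directive to your own setup):

```cpp
// Fragment shader hard-coded to output red, to rule out the fragment stage while
// debugging. Assumes a GLSL 3.30 core context; adjust the #version line to your setup.
const char* debugFragmentShader = R"(
#version 330 core
out vec4 fragColor;
void main()
{
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);   // opaque red, regardless of inputs
}
)";
```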

Best regards

So if I hard-code red in the fragment shader I get this: [image: bunny]

And then if I compute the coordinates (proj * view * …) and plot them in OpenCV, after some mandatory remapping of the coordinates (min->0, max->width; min->0, max->height), I get this:

So there is the bunny, but I was sort of expecting to see it from the same angle. FYI it's upside down; the bit sticking down is one of the ears.
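
A hypothetical sketch of that kind of plot (image size and names are made up; it assumes the NDC positions have already been computed as above):

```cpp
#include <opencv2/opencv.hpp>
#include <glm/glm.hpp>
#include <vector>
#include <algorithm>

// Hypothetical sketch of the OpenCV plot described above: remap the CPU-computed
// NDC positions so that min -> 0 and max -> width (resp. height), then draw one
// dot per vertex.
void plotNdc(const std::vector<glm::vec3>& ndc, int width = 800, int height = 600)
{
    if (ndc.empty()) return;
    float minX = ndc[0].x, maxX = ndc[0].x, minY = ndc[0].y, maxY = ndc[0].y;
    for (const glm::vec3& p : ndc) {
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    if (maxX == minX || maxY == minY) return;   // degenerate spread, nothing to plot

    cv::Mat image = cv::Mat::zeros(height, width, CV_8UC3);
    for (const glm::vec3& p : ndc) {
        int px = static_cast<int>((p.x - minX) / (maxX - minX) * (width  - 1));
        int py = static_cast<int>((p.y - minY) / (maxY - minY) * (height - 1));
        cv::circle(image, cv::Point(px, py), 1, cv::Scalar(255, 255, 255), -1);
    }
    cv::imshow("projected vertices", image);
    cv::waitKey(0);
}
```

Keep in mind that image rows grow downwards while NDC y grows upwards, so this kind of plot can easily come out flipped relative to the OpenGL window.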

Best regards

At last I think I am getting somewhere. If I apply the same process to the model I can't load, I get this:


Probably it can't be properly joined together because of some missing vertices in the middle.
It's supposed to be a body shape, actually. So I'll check the vertices.

Best regards

Ha! I didn't pay attention the first time, but for some reason I just tried to open the same model in Windows with a simple click, and this time I noticed this error:

It turns out the parsing code from the tutorial is way too simplistic and misses most of the bits, so after I beef it up it should be OK.
So I guess that's it, thanks everyone.

Best regards!

Hi

Just to update: I realized the 3D loader was only handling the first primitive of the mesh, which is why I was missing most of the vertices. The code still needs a few modifications here and there, but after changing that, if I do pretty much the same work as the vertex shader in OpenCV, as shown before, computing what it should compute, I get this:

which confirms I'm on the right path; it should work in OpenGL. For those familiar with it, it's a model generated using the MB-Lab plugin in Blender. That said, I don't know if OpenGL has some kind of debug mode where it prints stuff instead of being silent.
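
The thread doesn't say which glTF loader the tutorial uses; as an illustration, with the common tinygltf single-header library the per-primitive structure a complete parser has to walk looks like this:

```cpp
#include "tiny_gltf.h"   // assuming the tinygltf single-header loader; the thread doesn't name its loader
#include <cstdio>

// Sketch: a glTF mesh can contain several primitives, each with its own attribute
// accessors and its own index accessor, so a parser has to walk all of them
// instead of stopping at primitives[0].
void dumpMeshLayout(const tinygltf::Model& model)
{
    for (const tinygltf::Mesh& mesh : model.meshes) {
        std::printf("mesh '%s': %zu primitive(s)\n", mesh.name.c_str(), mesh.primitives.size());
        for (const tinygltf::Primitive& prim : mesh.primitives) {
            std::size_t vertexCount = 0;
            auto pos = prim.attributes.find("POSITION");
            if (pos != prim.attributes.end())
                vertexCount = model.accessors[pos->second].count;
            std::size_t indexCount = prim.indices >= 0 ? model.accessors[prim.indices].count : 0;
            std::printf("  primitive: %zu vertices, %zu indices\n", vertexCount, indexCount);
        }
    }
}
```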

Best regards

There is debug output. That notifies you of errors without having to instrument your code with glGetError calls, and can also report issues with portability, performance, deprecated features, and undefined behaviour.

It won’t (and can’t) tell you about simple mistakes like not loading all of the data, pointing the camera the wrong way, or other “that may have been what I said but it isn’t what I meant” issues, though.
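
A minimal setup sketch (it needs an OpenGL 4.3+ context or the KHR_debug extension, plus headers or a loader such as glad or GLEW that expose the GL_DEBUG_OUTPUT symbols; messages are only guaranteed when a debug context was requested at window creation):

```cpp
#include <glad/glad.h>   // or any loader/header set that exposes GL 4.3 / KHR_debug symbols
#include <cstdio>

// Minimal debug-output setup: print every message the driver reports.
static void APIENTRY onGlMessage(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const void* userParam)
{
    std::fprintf(stderr, "GL debug [type 0x%x, severity 0x%x]: %s\n", type, severity, message);
}

void enableGlDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // report the message on the call that caused it
    glDebugMessageCallback(onGlMessage, nullptr);
}
```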

OK, I see. I'll stick with this simple debugging method for the moment; it allows me to know what each vertex is doing. I am still an OpenGL rookie, I started 3 weeks ago ;-). So after a few modifications I finally see something: [image: model]

Best regards.

I had an idea of what was going on, and seeing this

confirmed it. At some point I had ~100,000 vertices, which is the whole thing, and now I just have 20,000. I literally have no idea how that's possible; looking into it.

Best regards

Actually I made two mistakes. First, while thinking I was doing (min->0, max->width; min->0, max->height), what I was actually doing was (max->0, min->width; max->0, min->height), which explains why the bunny in OpenCV is upside down relative to the bunny in OpenGL. Second, for the 3D model I'm trying to load, some way-too-quick code modifications meant I was looping on the vertex count instead of the index count (sketched below), which explains the vertex loss. With these two modifications, what I see on screen is still not what I expect, but it's much more understandable, or less (un)understandable :wink:, depending on how you see it. So at this point I have this: [image: og]
And the exact plot of the coords is in the next message.
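
For reference, a hypothetical sketch of that index-count fix (not the thread's code): when expanding indexed triangles for plotting, the loop has to run over the index list, not the vertex list.

```cpp
#include <vector>
#include <glm/glm.hpp>

// Hypothetical illustration of the "vertex size vs index size" mistake: expand
// indexed triangles by walking the index list, not the vertex list.
std::vector<glm::vec3> expandTriangles(const std::vector<glm::vec3>& vertices,
                                       const std::vector<unsigned int>& indices)
{
    std::vector<glm::vec3> out;
    out.reserve(indices.size());
    for (unsigned int i : indices)   // indices.size() iterations, not vertices.size()
        out.push_back(vertices[i]);
    return out;
}
```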

this:


I am certainly very close, but I reckon I am starting to run out of ideas, because everything fed to OpenGL seems correct. I tried glEnable(GL_DEBUG_OUTPUT), but GL_DEBUG_OUTPUT doesn't seem to exist.
It's not a fragment shader issue, because if I modify it to systematically output red for each vertex, the weird shape is still there. I guess I need to load the vertices a packet at a time (one per primitive) and not all in one go, or else they won't be joined properly. I'll try that.

Best regards!

Finally! After that last modification: [image: ok]
It confirms the data for each primitive has to be loaded separately, because I was appending all the data into one vector. Thanks!
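
A hedged sketch of that per-primitive approach (illustrative names, positions only): one buffer set and one draw call per primitive, since each primitive's indices start again at zero for its own vertex list.

```cpp
#include <glad/glad.h>
#include <vector>

// Sketch: one VAO/VBO/EBO and one draw call per glTF primitive.
struct GpuPrimitive {
    GLuint vao = 0, vbo = 0, ebo = 0;
    GLsizei indexCount = 0;
};

GpuPrimitive upload(const std::vector<float>& positions,          // x, y, z per vertex
                    const std::vector<unsigned int>& indices)
{
    GpuPrimitive prim;
    prim.indexCount = static_cast<GLsizei>(indices.size());

    glGenVertexArrays(1, &prim.vao);
    glGenBuffers(1, &prim.vbo);
    glGenBuffers(1, &prim.ebo);

    glBindVertexArray(prim.vao);
    glBindBuffer(GL_ARRAY_BUFFER, prim.vbo);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float), positions.data(), GL_STATIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, prim.ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int), indices.data(), GL_STATIC_DRAW);

    glEnableVertexAttribArray(0);                                    // attribute 0 = position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
    glBindVertexArray(0);
    return prim;
}

void draw(const GpuPrimitive& prim)
{
    glBindVertexArray(prim.vao);
    glDrawElements(GL_TRIANGLES, prim.indexCount, GL_UNSIGNED_INT, nullptr);
}
```

The alternative, if everything really has to live in one vector, is to offset each primitive's indices by the number of vertices already appended before it.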

Best regards