Hello, I don't know if it's called "per-mesh occlusion"; it's just the name that came to mind.
I would like to know how I can occlude, or avoid rendering, the parts of a mesh that aren't actually visible. Backface culling helps, but it isn't enough: I have very high-poly models, and it seems wasteful to send that huge amount of data every frame! I know this kind of occlusion/culling must be possible, but I have never seen it in action; I mean no examples whatsoever, I couldn't find anything related (perhaps I searched for the wrong terms). Anyway, is there any example of this? How can I do it? What would be an efficient idea behind such a method/system?
Thank you very much for your help!
Check out the ARB_occlusion_query extension. It might be what you are looking for.
Thanks! However, it says it's for multi-object scenes, and each of my meshes is a single object (all one mesh). They are very detailed, and I know that keeping them in a single mesh isn't a great idea, but that's how I want them. I thought it would be possible to render only the parts that are visible and not even send the data that won't be seen; after all, that would be a waste of processing time!
Since these meshes are meant to be viewed quite closely and have a minimum of 60,000 to 300,000 tris each, I'd like to cull whatever isn't visible. I hope that makes sense.
The reason for such high poly counts is that the models are scanned from real life. Even after optimization they are very heavy, and all that data must be retained while still being fast to visualize. Sometimes the triangle count is even higher than that, or there are even two meshes on screen at once. That's why I'm looking for an occlusion or culling method that would hopefully speed things up.
Do you send the geometry more than once?
Are you trying to cut down on geometry, or just trying for a speed increase? (sounds like the former, from your posts).
Occlusion query basically tells you how many pixels would get drawn by the draw calls issued between the begin-query and end-query calls.
The number of meshes or number of objects doesn't actually matter, so you could (in theory) use it to find out if one portion of a mesh occludes another.
But you’d still need to send all the geometry, at least once…
In order to avoid sending geometry, you need to do some kind of culling in your program, prior to drawing.
This is basically hidden surface determination, or shadowing, and such methods are generally either approximate or slow when done in software, but it might be worth looking them up…
Well, I've been having trouble with this, and with animated meshes too.
My problem is that I have too much data, and I can't get rid of it by optimizing the meshes: they are pure raw scan data that must be kept as-is. I can't optimize them any further; they're at the point where, if I optimize a little more, the shapes start to differ from the real-world object…
Can I ask what the objects are?
And what are the typical viewing situations, i.e. are they always in close-up, etc.?
How complex are they, i.e. how many times might the same object cover the same pixel?
As far as I know, all the occlusion mechanisms are applied during the rendering pipeline, which means the model has already been sent to the graphics card. So I think that using occlusion queries alone will not solve your problem.
If you don't want to send non-visible polygons to the graphics card, you will have to discard them "by hand", e.g. by comparing each polygon's normal vector with the vector from the camera to that polygon, but this can become more expensive than simply sending all the polygons to the renderer.
Another option is to use display lists or vertex buffers (see NeHe's tutorials) to cache that data and avoid sending that huge amount of vertices across the graphics bus every frame.
Also, you can organize the model into several sub-models and send only the ones that are currently visible (for example, if your model is a humanoid and you are looking at its face, you can discard all the non-visible sub-models, like the legs and arms, etc.).