I have a technical question for the experts in here:
I am developing an OpenGL application for the iPhone using OpenGL ES 1.x.
When I render a large object on the device it becomes noticeably slower, and that seems quite normal to me.
But I would have expected an increase in framerate when the object is zoomed far in, since the majority of the vertices are then outside the viewport’s bounds.
I also tried glEnable(GL_SCISSOR_TEST) with glScissor(0, 0, 320, 480), but the result is always the same.
This is really driving me crazy!
Why are the triangles that aren’t shown still computed (or why does the speed remain the same while drawing a smaller portion of the object)?
Is there a way to avoid this and thereby increase the framerate?
In most 3D applications - whether under OpenGL, OpenGL ES, WebGL or Direct3D - there are four principal places where speed bottlenecks can happen:
(1) The CPU - you have to calculate which meshes you’re going to draw and set them up for rendering. This tends to be more or less a fixed cost per mesh.
(2) The transmission link between the CPU and GPU - a cost that depends on the number of vertices you send multiplied by the number of per-vertex attributes, plus the cost of updating any textures and shaders you change during that frame.
(3) The GPU’s vertex processor - the per-vertex cost of transforming/lighting your vertex data. Without shaders, the cost roughly depends on the number of vertices times the number of lights you have turned on; with shaders, it roughly depends on the number of vertices times the complexity of your vertex shader.
(4) The GPU’s pixel/fragment processor - the per-pixel cost for pixels that pass clipping. The cost roughly depends on the number of pixels you draw onto the screen times the number of textures you use and/or the complexity of your fragment shader.
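A rough back-of-the-envelope model makes those four costs concrete. The struct, function name and weights below are illustrative placeholders I made up for this sketch, not measured iPhone numbers:

```c
/* Back-of-the-envelope model of the four bottleneck costs above.
 * All weights are illustrative placeholders, not measured numbers. */
typedef struct {
    double per_mesh_cpu;    /* (1) CPU setup cost per mesh             */
    double per_vertex_bus;  /* (2) CPU-to-GPU transfer cost per vertex */
    double per_vertex_gpu;  /* (3) transform/lighting cost per vertex  */
    double per_pixel;       /* (4) fill cost per pixel drawn           */
} CostWeights;

double frame_cost(CostWeights w, int meshes, long vertices, long pixels)
{
    return w.per_mesh_cpu * meshes
         + (w.per_vertex_bus + w.per_vertex_gpu) * vertices
         + w.per_pixel * pixels;
}
```

Zooming in changes only the `pixels` argument - the mesh and vertex counts stay fixed - so only term (4) of the total grows.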
If your application is running slower than you’d hoped - then you need to establish which of those four things is the problem.
Pixel processing time is easy to understand - reduce the size of the window you’re rendering to (keeping everything else the same). If your program goes faster in rough proportion to the area of the window (height x width) - then pixel processing is the bottleneck.
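That proportionality check can be written down directly. This helper and its 0.75 “roughly proportional” threshold are arbitrary illustrative choices, not part of any GL API:

```c
/* Decide, from frame times measured at two window sizes, whether the
 * app looks fill-rate (pixel-processing) limited: if frame time
 * scales roughly in proportion to window area, it is.
 * The 0.75 factor is an arbitrary "roughly proportional" cutoff. */
int looks_fill_limited(double area_big, double time_big,
                       double area_small, double time_small)
{
    double area_ratio = area_big / area_small;  /* e.g. 4.0 for a half-size window */
    double time_ratio = time_big / time_small;
    return time_ratio > 0.75 * area_ratio;
}
```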
Once you have eliminated that, note that CPU time generally doesn’t depend on the number of vertices you draw. So, just as a test, keep rendering to a tiny window (to more or less eliminate pixel-processing costs) and deliberately halve the number of triangles in each mesh. If your application’s performance increases by roughly a factor of two, then you were clearly not limited by the CPU’s per-mesh costs. If your performance hardly changes, then you’re probably drawing too many objects or doing too much per-mesh work on the CPU, and you need to improve your code somehow.
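Here is a sketch of how to interpret that triangle-halving experiment; the function name, return codes and thresholds are invented for illustration:

```c
/* Interpret the triangle-halving experiment: compare frame time with
 * the full mesh against frame time with half the triangles.
 * Returns 2 if per-vertex work - costs (2)/(3) - dominates,
 *         1 if per-mesh CPU work - cost (1) - dominates,
 *         0 if the result is in between (mixed).
 * The 1.6 / 1.2 cutoffs are arbitrary illustrative thresholds. */
int diagnose_halving(double time_full_mesh, double time_half_mesh)
{
    double speedup = time_full_mesh / time_half_mesh;
    if (speedup > 1.6) return 2;  /* halving triangles nearly halved time */
    if (speedup < 1.2) return 1;  /* time barely moved */
    return 0;
}
```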
Figuring out whether the transmission costs (2) or the GPU’s vertex-processing costs (3) are your problem is tricky - and since both depend mostly on the number of vertices, you probably don’t need to.
In your case, when you zoom into an object, (1), (2) and (3) aren’t changing - you’re drawing the same number of objects and the same number of vertices. Admittedly, triangles that fall entirely off the edge of the screen are discarded - but they still have to be sent to the GPU and transformed in the vertex processor - so zooming in doesn’t reduce (1), (2) or (3). (This is also why your glScissor test made no difference: the scissor test rejects fragments only after rasterization, and a 320x480 scissor box covers the whole screen anyway.) However, when you’re zoomed in close, (4) becomes a much bigger cost because the object is much larger…it’s covering far more pixels.
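It may help to spell out how fast cost (4) grows: zooming in by a factor k scales the object’s on-screen width and height by k each, so its covered pixel count grows by roughly k squared, until the object overflows the screen. A trivial sketch with invented names:

```c
/* Approximate on-screen pixel coverage after zooming in by `zoom`:
 * linear size scales by zoom, so area scales by zoom * zoom.
 * Coverage is capped by the screen area once the object overflows
 * the viewport - hence the screen_pixels parameter. */
long covered_pixels(long base_pixels, double zoom, long screen_pixels)
{
    double area = base_pixels * zoom * zoom;
    return area > screen_pixels ? screen_pixels : (long)area;
}
```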
That is the explanation for what’s going on here.
There are a lot of sneaky tricks you can do on high-end GPUs to help with this situation - sadly, I’m not familiar enough with the iPhone’s GPU to know what would work best there.
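One general technique that helps on any GPU, though, is coarse CPU-side culling: test each mesh’s bounding volume against the view frustum and skip the draw call entirely when it lies outside, which removes costs (1)-(3) for that mesh. A minimal sketch - the plane representation and names are illustrative, not any particular engine’s API:

```c
/* A plane in the form ax + by + cz + d = 0, with the normal (a,b,c)
 * pointing into the visible half-space. */
typedef struct { float a, b, c, d; } Plane;
typedef struct { float x, y, z, radius; } BoundingSphere;

/* Returns 1 if the sphere is at least partly on the visible side of
 * all six frustum planes; 0 if it lies entirely outside one of them,
 * in which case the whole mesh can be skipped before drawing. */
int sphere_in_frustum(const Plane planes[6], BoundingSphere s)
{
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].a * s.x + planes[i].b * s.y
                   + planes[i].c * s.z + planes[i].d;
        if (dist < -s.radius)
            return 0;  /* completely behind this plane: cull it */
    }
    return 1;
}
```

Note that per-mesh culling doesn’t help the zoomed-in case here, where the one big mesh still intersects the frustum; for that you’d need to split the object into chunks, each with its own bounding volume, so that the off-screen chunks can be skipped.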