GPU memory management on mobile devices

Hello everyone,

I have some questions about GPU memory management on mobile devices (Android/iOS).
I understand that mobile architectures usually have a unified memory zone shared between the CPU and the GPU, both backed by the same physical memory hardware.
My questions are:
Are there internal GPU memories (registers, caches) used by a running GPU program which are not accessible from code running on the CPU?
· If yes, how is this register memory managed when there is not enough space to run the GPU code? Does it use the shared memory space as an extension?
· Once the shader process is stopped, are those registers zeroed/cleaned?


It depends on what you mean by “code running on the CPU.” I’m sure that drivers (which run on the CPU) have some way of poking at anything in any particular hardware. But OpenGL ES is an abstraction over hardware-specific stuff. So client code can only access what OpenGL ES lets it access.

OpenGL ES does not operate at the level of “registers” and “cache” and so forth. So you can’t access them.
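To illustrate the level OpenGL ES does operate at: the closest thing to "register" information that client code can see is a set of abstract resource limits queried with `glGetIntegerv`. A minimal sketch follows; since a standalone snippet can't create a real GL context, `glGetIntegerv` is stubbed here to return the OpenGL ES 2.0 spec-mandated minimums (the enum values are the real ones from `GLES2/gl2.h`):

```c
#include <stdio.h>

/* Stubbed types and entry point so this sketch runs without a GPU.
   In a real app, include <GLES2/gl2.h> instead and create an EGL
   context first; the driver then reports the hardware's actual limits. */
typedef int GLint;
typedef unsigned int GLenum;
#define GL_MAX_VERTEX_UNIFORM_VECTORS   0x8DFB
#define GL_MAX_FRAGMENT_UNIFORM_VECTORS 0x8DFD

static void glGetIntegerv(GLenum pname, GLint *out) {
    /* Stub: return the ES 2.0 spec minimums (128 / 16 vec4s). */
    *out = (pname == GL_MAX_VERTEX_UNIFORM_VECTORS) ? 128 : 16;
}

/* The only "register-like" information ES exposes: abstract resource
   limits, not physical register files, caches, or their contents. */
static GLint query_limit(GLenum pname) {
    GLint value = 0;
    glGetIntegerv(pname, &value);
    return value;
}
```

Note that these limits are deliberately abstract (counts of vec4 uniform slots, texture units, and so on); how they map onto physical registers is entirely up to the driver.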

That really depends on how any particular implementation of OpenGL ES handles it. Generally speaking, if your shader would require more registers than the hardware can provide, odds are good that the compiler will simply fail to compile it, rather than trying to spill registers out to memory or something.

You’re thinking too low-level. What the driver does with those registers is its business. It will clear them if the next job it runs needs them cleared, and if the next job doesn’t, odds are it won’t. But there’s no way to tell what’s going on for any specific implementation.
