I have a program that draws a mesh: a set of nodes with lines between them. When the number of nodes becomes too large, the whole screen goes blank. I suspect OpenGL is simply hitting the limit of the hardware's memory. Is there any way to detect when this happens? The program is run on many different PCs with different capacities. Your help is appreciated.
I think it’s quite hard to get that info from OpenGL.
I played with some code a while ago that probed for texture memory to try and get round this problem. The theory was that when a texture request is denied, you can roughly assume that the memory left is less than the size you requested. You could then in theory make subsequent requests with varying sizes, based on a binary chop. Obviously this is neither elegant nor efficient, but it is usable for sand-boxing. There is a note about it on this site, and if I find the link I'll pop back and post it.
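The binary-chop search itself can be sketched independently of GL. Here `tryAllocate` is a hypothetical predicate, my own name, that in a real program would attempt the allocation (e.g. a `glTexImage2D` upload) and treat `GL_OUT_OF_MEMORY` from `glGetError()` as failure; the search logic is shown on its own so it stands alone:

```cpp
#include <cstddef>
#include <functional>

// Binary-chop search for the largest request size (in bytes) that the
// given tryAllocate predicate accepts. Assumes the predicate is roughly
// monotonic: if a size fails, larger sizes fail too.
std::size_t probeLargestAllocation(std::size_t hi,
                                   const std::function<bool(std::size_t)>& tryAllocate)
{
    std::size_t lo = 0;  // largest size known to succeed so far
    while (lo < hi) {
        // Bias the midpoint upward so the loop always makes progress.
        std::size_t mid = lo + (hi - lo + 1) / 2;
        if (tryAllocate(mid))
            lo = mid;       // mid fits; search upward
        else
            hi = mid - 1;   // mid was denied; search downward
    }
    return lo;
}
```

With a real GL-backed predicate, remember to free each successful probe allocation before the next attempt, or the probe itself eats the memory it is measuring.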
Another problem with the above solution is that some implementations will always allow the texture to be allocated, but switch it in and out of client space when it’s needed, giving you the illusion of ‘infinite’ GPU storage space!
Another method is somewhat touch and go, with accuracy depending on the effort you put in and your knowledge of specific GPU setups: keep a log on the client side of what is allocated and de-allocated.
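A minimal sketch of such a client-side ledger, assuming you call it from your own allocation and free paths. The class name and the choice of keying on GL object ids are mine, and the total is an estimate at best, since the driver may pad or relocate storage behind your back:

```cpp
#include <cstddef>
#include <unordered_map>

// Client-side bookkeeping of (estimated) GPU memory usage.
// Call onAlloc/onFree wherever you create or delete GL objects.
class VideoMemoryLedger {
public:
    void onAlloc(unsigned glObjectId, std::size_t bytes) {
        total_ += bytes;
        sizes_[glObjectId] = bytes;
    }
    void onFree(unsigned glObjectId) {
        auto it = sizes_.find(glObjectId);
        if (it != sizes_.end()) {
            total_ -= it->second;   // only subtract sizes we recorded
            sizes_.erase(it);
        }
    }
    std::size_t totalBytes() const { return total_; }

private:
    std::size_t total_ = 0;
    std::unordered_map<unsigned, std::size_t> sizes_;  // id -> bytes
};
```

You could then warn the user when `totalBytes()` crosses a threshold you pick per machine, rather than waiting for rendering to fail.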
From my understanding, a lot of this is because what actually happens on the GPU side in terms of memory management is implementation specific, so exposing that info at driver level is not something that is easy to manage across all platforms. But that does sound a bit like FUD to me.
But I agree that being able to simply ask the GPU for this info would be very, very cool. To date I am not aware of a way… Perhaps in the fabled OpenGL 3.0, but I doubt it.
Anyone else got other info, as I am interested in solutions to this too?
Well, for speed, your VBOs should be kept below a certain size (1-4 MB, I read somewhere), so if you split the meshes up into VBO "chunks", won't this be a non-issue?
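For illustration, the chunking arithmetic might look like this. The 4 MB cap and the 32-byte vertex stride are assumptions pulled from the rule of thumb above, not requirements:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// A contiguous run of vertices destined for one VBO.
struct Chunk {
    std::size_t firstVertex;
    std::size_t vertexCount;
};

// Split a vertex array into runs small enough that each fits in one
// VBO of at most maxChunkBytes. Each chunk would then get its own
// glBufferData upload.
std::vector<Chunk> chunkVertices(std::size_t vertexCount,
                                 std::size_t strideBytes = 32,
                                 std::size_t maxChunkBytes = 4 * 1024 * 1024)
{
    std::vector<Chunk> chunks;
    std::size_t perChunk = maxChunkBytes / strideBytes;  // vertices per chunk
    for (std::size_t first = 0; first < vertexCount; first += perChunk) {
        std::size_t count = std::min(perChunk, vertexCount - first);
        chunks.push_back({first, count});
    }
    return chunks;
}
```

For a mesh of lines you would also want chunk boundaries to fall on whole primitives (multiples of 2 vertices), which the default sizes here happen to satisfy.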
Then again, I might be rambling
My program is part of another, bigger program, which consumes large and varying amounts of memory at different times, so the memory left for my program changes during execution. The best solution for me would be to detect when drawing disappears due to lack of memory and then issue an error message to warn users. Is that possible with OpenGL? It's impractical to handle specific graphics cards, since the program is used on many very different Windows machines.
Thanks for any help.
Are you at least calling glGetError to detect problems?
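For reference, a minimal sketch of that check. The error codes and their numeric values are fixed by the GL spec (copied here so the helper compiles standalone, without GL headers); the drain loop in the comment needs a current GL context:

```cpp
#include <string>

// Map a glGetError() return value to its spec-defined name.
// The hex values below are the ones fixed by the OpenGL specification.
std::string glErrorName(unsigned err)
{
    switch (err) {
        case 0:      return "GL_NO_ERROR";
        case 0x0500: return "GL_INVALID_ENUM";
        case 0x0501: return "GL_INVALID_VALUE";
        case 0x0502: return "GL_INVALID_OPERATION";
        case 0x0503: return "GL_STACK_OVERFLOW";
        case 0x0504: return "GL_STACK_UNDERFLOW";
        case 0x0505: return "GL_OUT_OF_MEMORY";
        default:     return "unknown";
    }
}

// Typical usage after a batch of uploads or draws (errors queue up,
// so drain them all):
//
//   GLenum err;
//   while ((err = glGetError()) != GL_NO_ERROR)
//       log("GL error: " + glErrorName(err));
```

GL_OUT_OF_MEMORY is the one to watch for here, though note the spec leaves GL state undefined after it is raised.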
You do know that if it runs out of VRAM, the driver will just swap between VRAM and RAM, and hard-disk swapping also comes into play.
The best solution for me is to detect when drawing disappears due to lack of memory and then issue an error message to warn users
Why should the drawing disappear? As V-man stated, the driver/OS will just use virtual/pageable memory. If it absolutely can't keep up with your application's appetite, well, you will detect it very easily: it will, with great probability, crash.
Not really helpful for this problem I guess, but I thought I’d put it out there as I only just found this out…
One of the helpful guys at ATI pointed out to me that, on OS X at least, the OpenGL Profiler can pull up a really comprehensive realtime graph showing texture allocation, memory usage, and so on.