how to verify max feasible cubemap size?

For a panorama display program, I need a way to predict the biggest (non-mipmapped) cube texture I can safely load. Many low-end systems cannot actually handle the size reported by GL_MAX_CUBE_MAP_TEXTURE_SIZE, and I don't want users to be greeted with crashes and error messages.

I know resource management is not OGL's job, but does anyone know a practical way to deal with this?

Thanks, Tom

You could try creating proxy textures, though it's implementation-dependent how well they actually test against available resources.
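
For reference, the proxy check looks roughly like this (just a sketch, assuming a current GL context and an RGBA8 internal format; the function name is mine, and header/loader setup is omitted):

```cpp
#include <GL/gl.h>   // plus glext.h or a loader, depending on platform

// Ask the driver whether one cubemap face of the given size *might* fit,
// without allocating anything. Proxy results are a hint, not a promise.
bool proxyCubemapFits(GLsizei size)
{
    glTexImage2D(GL_PROXY_TEXTURE_CUBE_MAP, 0, GL_RGBA8,
                 size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    // If the proxy allocation failed, the queried width comes back as 0.
    GLint width = 0;
    glGetTexLevelParameteriv(GL_PROXY_TEXTURE_CUBE_MAP, 0,
                             GL_TEXTURE_WIDTH, &width);
    return width != 0;
}
```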

I would suggest against using proxies.
Why?
Because they don't give you any guarantees: you still have to assume that texture creation can fail even when the proxy says it's OK.
You can simplify the creation process by dropping the proxy and just allocating the texture (using smaller sizes until it works).

Also, why wasn't this mechanism dropped in GL 3.0?

You can simplify the creation process by dropping the proxy and just allocating the texture (using smaller sizes until it works).

That doesn't guarantee it's going to work either. Just because you delete something doesn't mean that OpenGL has actually freed the memory. Nor does it mean there isn't fragmentation, or that the driver's allocator hasn't been affected by these successive allocations.

Also, why wasn't this mechanism dropped in GL 3.0?

Probably because it’s the suggested way to find out how large a texture you can allocate. That is, suggested by the IHVs themselves.

Of course it doesn't. GL is entitled to report GL_OUT_OF_MEMORY any time it sees fit.
But the end result is the same without the proxy, and you get simpler code, since you don't deal with an additional entity.
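
Something along these lines (a minimal sketch, not anyone's actual code; it assumes a current context and an RGBA8 format, and the function name is illustrative):

```cpp
#include <GL/gl.h>   // plus glext.h or a loader, depending on platform

// Try to allocate the real cubemap, halving the face size whenever the driver
// reports an error (e.g. GL_OUT_OF_MEMORY). Returns the size that worked,
// or 0 if nothing did.
GLsizei allocateLargestCubemap(GLuint tex, GLsizei startSize)
{
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);

    for (GLsizei size = startSize; size >= 1; size /= 2)
    {
        while (glGetError() != GL_NO_ERROR) {}   // flush any stale errors

        // The six face targets are consecutive enum values.
        for (int face = 0; face < 6; ++face)
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_RGBA8,
                         size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        if (glGetError() == GL_NO_ERROR)
            return size;
    }
    return 0;
}
```

Even when this returns a size without errors, a later draw can still fail, as noted below.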

In fact, you may not be sure it worked until you actually successfully draw with the texture.
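
In an old-style/compatibility context, a throwaway draw followed by glFinish() can serve as that check (a rough sketch, assuming the cubemap is already bound and complete, e.g. GL_TEXTURE_MIN_FILTER set to GL_LINEAR for a non-mipmapped texture; a core profile would need a small shader instead):

```cpp
#include <GL/gl.h>

// Draw one tiny triangle that samples the bound cubemap, then force the
// driver to finish so any deferred allocation failure surfaces here.
bool cubemapDrawWorks()
{
    glEnable(GL_TEXTURE_CUBE_MAP);
    glBegin(GL_TRIANGLES);
        glTexCoord3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.01f, -0.01f);
        glTexCoord3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.01f, -0.01f);
        glTexCoord3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.00f,  0.01f);
    glEnd();
    glFinish();
    glDisable(GL_TEXTURE_CUBE_MAP);
    return glGetError() == GL_NO_ERROR;
}
```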

Whoops, yeah, for the broken cubemap allocation scheme this is true (but not for cube arrays and other targets). My bad.

My advice when asking for crazy amounts of memory is: don't even bother. Take the maximum size and just divide it by 2. Different drivers will start breaking and giving errors in weird ways when you ask for too much memory.

Thanks, all. So I guess dukey’s advice is still the best :frowning:

At present I have a one-time setup procedure that takes GL_MAX_CUBE_MAP_TEXTURE_SIZE and divides it by powers of sqrt(2) until there is no crash, then stores that as the working maximum cube-face size for the current machine.
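
Roughly like this (my reconstruction rather than the actual code; tryAllocate stands in for whatever per-size test is used, such as the allocation or test-draw sketches earlier in the thread):

```cpp
#include <cmath>
#include <GL/gl.h>

// Step down from the advertised maximum by factors of 1/sqrt(2) until a
// candidate size passes the caller-supplied test, then report that size.
GLsizei findWorkingCubeFaceSize(bool (*tryAllocate)(GLsizei))
{
    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE, &maxSize);

    double size = static_cast<double>(maxSize);
    while (size >= 64.0)                               // arbitrary lower bound
    {
        GLsizei candidate = static_cast<GLsizei>(size) & ~1;  // keep it even
        if (tryAllocate(candidate))
            return candidate;          // store as this machine's working max
        size /= std::sqrt(2.0);        // ~0.707 reduction per step
    }
    return 0;
}
```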

On Windows and Linux machines a reduction factor of 0.707 is usually enough. Older Macs, though, tend to need reductions of 2x or more from the advertised max, and they fail more spectacularly when a bigger cube map is loaded: not just an OGL out-of-memory error, but garbled displays and eventually a full system crash. I reported this as an OSX bug to Apple a year ago, and they “resolved” it by declaring that there could be no such problem. But I'm not the only one to have observed it.

I do think the 3D graphics community should push for better support of resource management, which I’m sure the guys who write the drivers could provide if the guys who write the standards asked for it nicely.

– Tom