Problem with the 5900 Ultra

Here’s what I got on a Radeon 9000.

Allocating - 800 Mb in 1 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 2 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 4 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 800 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1000 Mb in 1 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 2 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 4 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1000 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1200 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1200 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1200 Mb in 4 Blocks
Memory Allocation Block 4 failed
Allocating - 1200 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1200 Mb in 16 Blocks
Memory Allocation Succeeded

Allocating - 1400 Mb in 1 Blocks
Memory Allocation Block 1 failed
Allocating - 1400 Mb in 2 Blocks
Memory Allocation Block 2 failed
Allocating - 1400 Mb in 4 Blocks
Memory Allocation Block 4 failed
Allocating - 1400 Mb in 8 Blocks
Memory Allocation Succeeded
Allocating - 1400 Mb in 16 Blocks
Memory Allocation Succeeded
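
For reference, here is a minimal sketch of the kind of test that could produce output like the above. The GLUT setup matches the snippet quoted further down, but the sizes, message strings and structure here are assumptions, not the original test code:

#include <GL/glut.h>
#include <stdio.h>
#include <stdlib.h>

/* Try to allocate totalMB megabytes split into numBlocks blocks,
   report the result in the same style as the output above, then free. */
static void testAllocation(size_t totalMB, int numBlocks)
{
    void *blocks[16] = { 0 };
    size_t blockBytes = (totalMB * 1024 * 1024) / numBlocks;
    int i, failed = 0;

    printf("Allocating - %lu Mb in %d Blocks\n",
           (unsigned long)totalMB, numBlocks);

    for (i = 0; i < numBlocks; ++i) {
        blocks[i] = malloc(blockBytes);
        if (!blocks[i]) {
            printf("Memory Allocation Block %d failed\n", i + 1);
            failed = 1;
            break;
        }
    }
    if (!failed)
        printf("Memory Allocation Succeeded\n");

    for (i = 0; i < numBlocks; ++i)
        free(blocks[i]);
}

int main(int argc, char **argv)
{
    size_t sizes[]  = { 800, 1000, 1200, 1400 };
    int    counts[] = { 1, 2, 4, 8, 16 };
    int s, c;

    /* Creating the rendering context first is what provokes the failures. */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("MemTest");

    for (s = 0; s < 4; ++s) {
        for (c = 0; c < 5; ++c)
            testAllocation(sizes[s], counts[c]);
        printf("\n");
    }
    return 0;
}

Commenting out the three glut calls in main() gives the “no rendering context” case discussed below.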

Thanks PK, so I suppose the conclusion we can draw from this is that it is mainly a Windows issue like you said, although it only seems to be a problem when creating rendering contexts. Maybe I should try a non-GLUT version in case that is affecting things…


Originally posted by Adrian:
If I comment out the lines:

glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
glutCreateWindow("MemTest");

I can allocate 2GB of memory, regardless of the graphics card and how many chunks I split the memory into.

What is the exact amount of memory that you are able to allocate without creating the rendering context? You should not be able to allocate 2GB since the OS eats up some of that space as well.


Thanks PK, so I suppose the conclusion we can draw from this is that it is mainly a Windows issue like you said, although it only seems to be a problem when creating rendering contexts.

It’s mostly a 32-bit problem. The other part of the problem is that the driver is loaded in the middle of the app’s address space rather than at the start or the end.

Not sure if that is a driver problem or a Windows problem. I believe the same problem holds for D3D drivers.
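
A rough way to see where the driver DLL actually lands is to list each loaded module and its base address once the rendering context exists. A minimal Win32 sketch (assuming psapi.lib is linked; the driver DLL names will vary by vendor and version):

#include <windows.h>
#include <psapi.h>
#include <stdio.h>

/* List each loaded module and its base address, to see where the OpenGL
   driver DLL sits in the process address space. Win32 only; link with
   psapi.lib and run after the rendering context has been created. */
int main(void)
{
    HMODULE mods[1024];
    DWORD needed, count, i;
    char name[MAX_PATH];

    if (!EnumProcessModules(GetCurrentProcess(), mods, sizeof(mods), &needed))
        return 1;

    count = needed / sizeof(HMODULE);
    for (i = 0; i < count; ++i) {
        if (GetModuleFileNameA(mods[i], name, sizeof(name)))
            printf("%p  %s\n", (void *)mods[i], name);   /* base address, path */
    }
    return 0;
}

Where the driver’s base address falls determines how badly the remaining free address space is fragmented.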

I can allocate 2010 million bytes before malloc starts returning a null pointer if I don’t create a rendering context.

For anyone else hitting this problem, you can alleviate it by creating the rendering context before allocating memory and by splitting the allocation into chunks of ~100MB. I can now use 1200MB of my 1500MB.
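
In code, the workaround looks roughly like this; allocChunked, CHUNK_BYTES and the chunk list are hypothetical names used for illustration, and the 100MB chunk size is just the figure that worked here:

#include <stdio.h>
#include <stdlib.h>

#define CHUNK_BYTES (100u * 1024u * 1024u)   /* ~100MB per chunk */
#define MAX_CHUNKS  32

static void *chunks[MAX_CHUNKS];
static int   chunkCount = 0;

/* Allocate totalMB megabytes as a list of CHUNK_BYTES-sized pieces.
   Returns 1 on success, 0 once malloc starts failing. */
static int allocChunked(size_t totalMB)
{
    size_t remaining = totalMB * 1024 * 1024;

    while (remaining > 0 && chunkCount < MAX_CHUNKS) {
        size_t want = remaining < CHUNK_BYTES ? remaining : CHUNK_BYTES;
        chunks[chunkCount] = malloc(want);
        if (!chunks[chunkCount])
            return 0;
        ++chunkCount;
        remaining -= want;
    }
    return remaining == 0;
}

int main(void)
{
    /* In the real application the OpenGL rendering context is created
       here, before any of the big allocations. */
    if (allocChunked(1200))
        printf("Allocated 1200MB in %d chunks\n", chunkCount);
    else
        printf("Allocation failed after %d chunks\n", chunkCount);
    return 0;
}

The price is that the application has to index into the chunk list instead of treating the whole allocation as one contiguous array, but each individual malloc then only needs a ~100MB hole in the address space rather than a single 1200MB one.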

Adrian,

The initialization of the OpenGL driver does require a fair amount of virtual address space. In general (without a 64-bit OS), applications are limited to somewhere between 2 and 3 gigabytes of virtual address space (depending on your OS).

Libraries such as OpenGL take up a fair amount of virtual address space because mappings must be established for OpenGL’s data structures, mappings to AGP memory, and mappings to video memory.

I suspect you went from a 64MB or 128MB GeForce Ti 4600 card to a 128MB or 256MB GeForce FX card. The mapping necessary for this extra memory, along with a larger data structure footprint to handle the extra OpenGL functionality provided by the GeForce FX, likely accounts for your failure to initialize the OpenGL driver.

When NVIDIA’s OpenGL driver fails to initialize, your application falls back to Microsoft’s slow software-only OpenGL driver under Windows (as you have found out).

NVIDIA is aware that the driver’s virtual memory consumption is currently too great when GeForce cards with large amounts of memory are combined with graphics applications that demand most of the available virtual address space. We are working to reduce the driver’s virtual memory footprint to alleviate the problem you are experiencing.

Keep in mind that the OpenGL library usually isn’t the only culprit. There are often other DLLs that may be consuming large chunks of your virtual address space.

Until NVIDIA can release a driver that reduces OpenGL’s virtual memory footprint, the best advice is to initialize OpenGL early (create your OpenGL context up front; with GLUT that means calling glutInit and glutCreateWindow), to minimize the size of your application’s own memory arrays, and to check whether other DLLs are consuming an inordinate share of your virtual address space.
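
One rough way to check that last point is to walk the process address space with VirtualQuery and total up how much is free versus in use; a minimal Win32 sketch, assuming a 32-bit process:

#include <windows.h>
#include <stdio.h>

/* Walk the process address space with VirtualQuery and total what is
   free versus reserved/committed, plus the largest single free region. */
int main(void)
{
    SYSTEM_INFO si;
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr;
    SIZE_T freeTotal = 0, usedTotal = 0, largestFree = 0;

    GetSystemInfo(&si);
    addr = (unsigned char *)si.lpMinimumApplicationAddress;

    while (addr < (unsigned char *)si.lpMaximumApplicationAddress) {
        if (VirtualQuery(addr, &mbi, sizeof(mbi)) == 0)
            break;
        if (mbi.State == MEM_FREE) {
            freeTotal += mbi.RegionSize;
            if (mbi.RegionSize > largestFree)
                largestFree = mbi.RegionSize;
        } else {
            usedTotal += mbi.RegionSize;
        }
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    printf("In use:       %lu MB\n", (unsigned long)(usedTotal   >> 20));
    printf("Free:         %lu MB\n", (unsigned long)(freeTotal   >> 20));
    printf("Largest free: %lu MB\n", (unsigned long)(largestFree >> 20));
    return 0;
}

The largest free region is the number to watch: a single large malloc needs one contiguous free block of at least that size, which is why splitting the allocation into smaller chunks helps.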

I hope this helps.

  • Mark

Thanks for the explanation Mark.

I suspect you went from a 64MB or 128MB GeForce Ti 4600 card to a 128MB or 256MB GeForce FX card.

Yes, I went from a 128MB 4600 to a 256MB 5900 Ultra.

For anyone curious as to what I am using 1.5GB of RAM for: it’s a particle system with 40 million particles.

Seems like it’s something with the OS (Win****). Why not get Linux and see how it runs? I’ve never had a problem with memory issues or NVIDIA drivers on Linux. GLUT runs fine on Linux, and if you’re new to Linux try Red Hat or Mandrake. If you have problems compiling on Linux, post it in the Linux section. Remember to go to www.nvidia.com and get the NVIDIA drivers.

Originally posted by nukem:
Seems like it’s something with the OS (Win****). Why not get Linux and see how it runs? I’ve never had a problem with memory issues or NVIDIA drivers on Linux.

You’ll still hit the 32-bit addressable limit, unless you’re running a 64-bit flavour of the OS on suitable hardware. It seems the NT kernel likes to load DLLs in the middle of an app’s address space.

I don’t know where “the” Linux kernel likes to load shared objects. I can’t really say “the” Linux kernel anyway, since there are tons of kernel builds to choose from. Maybe there is a patch floating around somewhere that does things differently.