At the moment I am writing a vertex-array-memory-manager. I use VAR (and VAR2). Now I am wondering whether I should allocate one big memory chunk, or several smaller chunks.
Using smaller chunks is easier to handle (though it shouldn't be that hard to use one chunk). But another, much more important point is this: if I have 5 MB of VRAM left (after uploading my textures), but need 5.5 MB of memory to store ALL objects, I have to fall back to AGP or system memory (I know AGP isn't really slower; it's only an example here).
Now if I split my data into level data, model data, and so on, I will be able to put, say, 4 MB into VRAM and only the remaining 1.5 MB into AGP/system memory.
However, when drawing I will have to change the VertexArrayRange every time I draw another object (and even with VAR2 this implies a flush, as far as I know).
So I'd like to ask people who have worked with this before and know more about the advantages/disadvantages and what the performance will be like.
Also, I'd like to use fences. Can they be used to switch between different vertex arrays while preventing flushes?
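The usual fence pattern keeps you inside ONE VertexArrayRange, so no VAR switch (and no flush) is needed at all. A minimal sketch of the idea, with the NV_fence entry points (glSetFenceNV/glFinishFenceNV) replaced by hypothetical stand-ins so the logic is visible; all the names here are my own:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-ins for glSetFenceNV / glFinishFenceNV from NV_fence;
// real code would call the extension entry points instead.
struct Fence { bool pending = false; };
static void setFence(Fence& f)    { f.pending = true; }   // glSetFenceNV
static void finishFence(Fence& f) { f.pending = false; }  // glFinishFenceNV

// One VAR chunk split into two halves; alternate halves each frame so the
// GPU can still read one half while the CPU fills the other.
struct DoubleBufferedVAR {
    std::size_t halfSize;
    int current = 0;            // which half the CPU writes this frame
    Fence fences[2];

    explicit DoubleBufferedVAR(std::size_t chunkSize)
        : halfSize(chunkSize / 2) {}

    // Returns the byte offset (into the single VAR chunk) to fill this frame.
    std::size_t beginFrame() {
        finishFence(fences[current]);   // wait until the GPU is done with it
        return current * halfSize;
    }
    void endFrame() {
        setFence(fences[current]);      // mark this half as in-flight
        current ^= 1;                   // swap halves; no VAR switch needed
    }
};
```

Because both halves live inside the same range, you only ever pay the fence wait, never the cost of a glVertexArrayRangeNV switch.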
Think of VAR buffers as texture objects. How does the GL driver deal with texture objects?
It prioritizes them in a queue: it will try to store them all in fast memory (AGP/video) if it can, but if there isn't enough, it will drop some back into system memory to make room for a higher-priority texture. Basically, where's the sense in keeping all your vertex data in fast memory when only a small portion of it will actually be visible at any one time?
Read up on texture priorities, and apply the same methodology to your vertex buffers.
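A hedged sketch of that methodology (struct and function names are my own invention, not any GL API): greedily place the highest-priority buffers into the fast-memory budget and demote the rest to system memory, the same way the driver manages a texture working set.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// One vertex buffer as the manager sees it (hypothetical bookkeeping struct).
struct VB {
    std::size_t size;   // bytes
    float priority;     // higher = more important, like glPrioritizeTextures
    bool inFastMem;     // decided by placeBuffers()
};

// Greedily place the highest-priority buffers into fast memory (video/AGP),
// dropping the rest back to system memory.
void placeBuffers(std::vector<VB>& bufs, std::size_t fastBytes) {
    std::vector<VB*> order;
    for (VB& b : bufs) order.push_back(&b);
    std::sort(order.begin(), order.end(),
              [](const VB* a, const VB* b) { return a->priority > b->priority; });
    std::size_t used = 0;
    for (VB* b : order) {
        b->inFastMem = (used + b->size <= fastBytes);
        if (b->inFastMem) used += b->size;
    }
}
```

A real manager would also have to copy the demoted data and re-point the arrays, but the placement policy itself is this simple.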
Doesn't nVidia say that switching VAR arrays is extremely slow? Doesn't that make it impractical to have several separately allocated arrays?
The recommendation is to allocate one large AGP chunk, and then dole pieces of it out to users through some API that you construct.
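For illustration, a minimal sub-allocator over one big chunk might look like this (class and method names are my own; a real manager would also need free lists and fences):

```cpp
#include <cassert>
#include <cstddef>

// Minimal bump sub-allocator that doles out pieces of one big VAR chunk.
class VarArena {
    unsigned char* base;
    std::size_t capacity;
    std::size_t offset = 0;
public:
    VarArena(void* chunk, std::size_t size)
        : base(static_cast<unsigned char*>(chunk)), capacity(size) {}

    // Hand out an aligned piece of the single chunk, or null when full.
    void* alloc(std::size_t bytes, std::size_t align = 32) {
        std::size_t p = (offset + align - 1) & ~(align - 1);
        if (p + bytes > capacity) return nullptr;
        offset = p + bytes;
        return base + p;
    }

    void reset() { offset = 0; }   // e.g. on a level reload
};
```

Since every piece lives inside the same VertexArrayRange, drawing different objects never requires a range switch, only different array pointers.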
VAR is explicitly NOT like texture objects. The ARB_vertex_array_object extension (or whatever it's called) being worked on will switch the model to something more object-like. Meanwhile, ATI already has an object-like vertex array object extension. But for nVIDIA hardware, you have to write your own memory manager (of some sort).
I'm well aware that VAR is not like texture objects, jwatte. My point is that you should use the VAR mechanism in the same way texture objects are used - i.e. write your own prioritizing mechanism, using VAR simply to allocate the initial memory pools.
Jan2000 seemed to be under the impression that it's practical to keep all his vertex data in fast memory, when this of course is not practical for databases of any decent size.
I am also working on vertex buffers. My problem is designing a proper interface. Right now I only support VAR. The class is a template that takes two sets of flags. One describes the contents of the vertex buffer, e.g. xyz | st | normal, and that part works well for me. The second is more problematic - it describes the style of the buffer: type of memory (video, AGP, system) and usage pattern (static, dynamic, multipass). Am I missing something here?
What's the future of vertex arrays? Will my interface be compatible with ATI_VAO and ARB_VAO?
And what about updates and rendering? To update a vertex buffer I could provide void *lock() or void *lock(int first, int count). Which is better? And for rendering: is it better to include the indices in the vertex buffer, or make the user take care of them? In other words - is it possible that a future API could allow me to store the indices in fast memory?
[This message has been edited by MichaelK (edited 12-11-2002).]
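The two lock variants are not really competing designs: whole-buffer lock() is just the special case lock(0, vertexCount), so offering the ranged form costs nothing. A sketch under that assumption (all names hypothetical, with a plain vector standing in for the VAR/AGP memory):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of a lock()-style update interface for a vertex buffer.
// lock(first, count) lets callers update only a sub-range, which matters
// for dynamic buffers; locking the whole thing is just the special case.
class VertexBuffer {
    std::vector<unsigned char> storage;   // stand-in for VAR/AGP memory
    std::size_t stride;                   // bytes per vertex
public:
    VertexBuffer(std::size_t vertexCount, std::size_t vertexStride)
        : storage(vertexCount * vertexStride), stride(vertexStride) {}

    // Returns a write pointer to vertices [first, first + count).
    void* lock(std::size_t first, std::size_t count) {
        (void)count;  // a real version would fence-guard exactly this range
        return storage.data() + first * stride;
    }
    void* lock() { return lock(0, storage.size() / stride); }

    void unlock() { /* flush / fence bookkeeping would go here */ }
};
```

The ranged form also gives the implementation room to wait only on fences covering the touched range, instead of stalling on the whole buffer.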
Originally posted by MichaelK:
In other words - is it possible that a future API could allow me to store the indices in fast memory?
Well, D3D has left this open to the driver developer by having a separate interface for index buffers - you deal with them in essentially the same way as vertex buffers (lock/unlock). So I would assume that at some point index buffers will make their way into fast memory.