Vertex_Array_Range Ext. under Linux

I read the NVIDIA VAR demo and want to use this extension under Linux. The problem is that I need a function called glXMemoryNV, but I don't know how to obtain it. I have the latest NVIDIA drivers, but in glx.h there is no prototype for a function with that name. Do I have to retrieve a pointer to that function, as I do when using OpenGL extensions? Has anybody done this before?

You do not have the names right. They should be glXAllocateMemoryNV and glXFreeMemoryNV, and they are on lines 108 and 110 of glx.h.

You can use both static linking and glXGetProcAddressARB for extensions with the NVIDIA OpenGL drivers for Linux.
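
With static linking it can be as simple as calling the functions declared in NVIDIA's glx.h directly, roughly like this (a sketch only; the size and the hint values are made up):

#include <GL/glx.h>

void *agpMemory = 0;

void AllocateAGPMemory(void)
{
    // the prototype comes straight from NVIDIA's glx.h, so no pointer lookup
    // is needed; a current GLX context is still required
    agpMemory = glXAllocateMemoryNV(1024 * 1024, 0.2f, 0.2f, 0.5f);
}

void FreeAGPMemory(void)
{
    if (agpMemory)
        glXFreeMemoryNV(agpMemory);
}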

Oh, thanks for the information. Have you used the VAR extension before? Perhaps you have some source code to give me an idea of how it works? (I have the NVIDIA VAR demo, but its source code looks a bit complicated.)

I had a problem with VAR under Linux: it didn't work. I have a portable app, and under Win32 I get a huge speedup when using VAR; under Linux, no change at all. I don't know if it's a problem with my specific system configuration (maybe my kernel doesn't support DMA access) or whether the VAR calls are ignored under Linux.

anyway, here are some points:

allocate only one big chunk of AGP or video memory and place all arrays except the indices into that memory.

set the vertex array range over the whole memory you allocated

enable VAR

now draw the arrays as always (a sketch of all four steps follows below).
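
Put together, those steps might look something like this (a minimal sketch; vertexData, vertexCount, indices and indexCount are made-up names, and static linking against NVIDIA's headers is assumed so the NV prototypes are available):

#define GL_GLEXT_PROTOTYPES 1   // so glext.h declares the NV entry points
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

void DrawWithVAR(const GLfloat *vertexData, GLsizei vertexCount,
                 const GLushort *indices, GLsizei indexCount)
{
    GLsizei bytes = vertexCount * 3 * sizeof(GLfloat);

    // 1. allocate one big chunk of AGP/video memory and copy the vertices into it
    //    (check for a NULL return and fall back to malloc in real code)
    GLfloat *vertices = (GLfloat*)glXAllocateMemoryNV(bytes, 0.2f, 0.2f, 0.5f);
    memcpy(vertices, vertexData, bytes);

    // 2. set the vertex array range over the whole allocated memory
    glVertexArrayRangeNV(bytes, vertices);

    // 3. enable VAR
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);

    // 4. draw the arrays as always; the indices stay in normal system memory
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);
}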

-Lev

I know that this is a problem with some hardware, but if it works in Windows, perhaps selecting another AGP driver can help?
On most systems you can choose between NVIDIA's driver and agpgart.

Does anyone have an idea how to use glXAllocateMemoryNV under Linux?
Must I use extensions?

thanks

You can either use static linking, as nvidia_linux said, or you can use extension function pointers; here's how I do it:

#ifdef _WIN32
#include <windows.h>
#else
#include <GL/glx.h>
#endif
#include <GL/gl.h>   // for GLsizei/GLfloat on both platforms

#ifdef _WIN32
typedef void* (APIENTRY * PFNWGLALLOCATEMEMORYNVPROC) (GLsizei size, GLfloat readFrequency, GLfloat writeFrequency, GLfloat priority);
typedef void (APIENTRY * PFNWGLFREEMEMORYNVPROC) (void *pointer);
#else
typedef void* ( * PFNGLXALLOCATEMEMORYNVPROC) (GLsizei size, GLfloat readFrequency, GLfloat writeFrequency, GLfloat priority);
typedef void ( * PFNGLXFREEMEMORYNVPROC) (void *pointer);
#endif _WIN32

#ifdef _WIN32
PFNWGLALLOCATEMEMORYNVPROC        wglAllocateMemoryNV = 0;
PFNWGLFREEMEMORYNVPROC            wglFreeMemoryNV = 0;
#else
PFNGLXALLOCATEMEMORYNVPROC        glXAllocateMemoryNV = 0;
PFNGLXFREEMEMORYNVPROC            glXFreeMemoryNV = 0;
#endif

...
...

// this is the allocation function
#ifdef _WIN32
    wglAllocateMemoryNV = (PFNWGLALLOCATEMEMORYNVPROC)wglGetProcAddress("wglAllocateMemoryNV");
    wglFreeMemoryNV = (PFNWGLFREEMEMORYNVPROC)wglGetProcAddress("wglFreeMemoryNV");
#else
    glXAllocateMemoryNV = (PFNGLXALLOCATEMEMORYNVPROC)glXGetProcAddressARB((const GLubyte*)"glXAllocateMemoryNV");
    glXFreeMemoryNV = (PFNGLXFREEMEMORYNVPROC)glXGetProcAddressARB((const GLubyte*)"glXFreeMemoryNV");
#endif
    
    // QueryExtension() is my own helper that checks whether an extension is supported
#ifdef _WIN32
    if ((wglAllocateMemoryNV != 0) && (wglFreeMemoryNV != 0) && (QueryExtension(NV_vertex_array_range)))
        m_memory = (byte*)wglAllocateMemoryNV(size, 0.2f, 0.2f, 0.5f);
#else
    if ((glXAllocateMemoryNV != 0) && (glXFreeMemoryNV != 0) && (QueryExtension(NV_vertex_array_range)))
        m_memory = (byte*)glXAllocateMemoryNV(size, 0.2f, 0.2f, 0.5f);
#endif
    if (m_memory == 0)
    {
        //StatusMsg("Using system memory, could not allocate %d bytes of AGP memory\n", size);
        m_fastMemory = false;
        m_memory = (byte*)malloc(size);
        if (m_memory == 0)
            ErrorMsg("LMemoryManager::SetRequiredMemorySize: could not allocate %d bytes of memory\n", size);
    }
    else
        m_fastMemory = true;
    m_memorySize = size;
    if ((QueryExtension(NV_vertex_array_range)) && (m_fastMemory))
    {
        glVertexArrayRangeNV(m_memorySize, m_memory);
        glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    }

Of course you must initialize the glVertexArrayRangeNV function pointer properly with glXGetProcAddressARB.
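
For example (a sketch; the PFNGLVERTEXARRAYRANGENVPROC typedef comes from GL/glext.h, and HasExtension is only a guess at what a QueryExtension-style check could look like):

#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

PFNGLVERTEXARRAYRANGENVPROC glVertexArrayRangeNV = 0;

void LoadVertexArrayRangeNV(void)
{
    glVertexArrayRangeNV = (PFNGLVERTEXARRAYRANGENVPROC)
        glXGetProcAddressARB((const GLubyte*)"glVertexArrayRangeNV");
}

// a possible extension check: search the GL_EXTENSIONS string
// (needs a current context, see the discussion further down)
int HasExtension(const char *name)
{
    const char *ext = (const char*)glGetString(GL_EXTENSIONS);
    return (ext != 0) && (strstr(ext, name) != 0);
}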

-Lev

Be sure the NVIDIA module activates AGP 2x or 4x. For example, on a VIA chipset you should add “options NVdriver NVreg_EnableVia4x=1” to your /etc/modules.conf. See also NVreg_EnableALiAGP, NVreg_EnableAGPSBA, NVreg_EnableAGPFW.

For a long time VAR just slowed my engine down a lot, until I played with those options, which resulted in a good speed-up.
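
For reference, the relevant lines in /etc/modules.conf could look roughly like this (a sketch; only add the extra options if your hardware actually needs them):

# /etc/modules.conf
options NVdriver NVreg_EnableVia4x=1
# optional, depending on the chipset:
# options NVdriver NVreg_EnableALiAGP=1 NVreg_EnableAGPSBA=1 NVreg_EnableAGPFW=1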

Thanks to all,

and I think I must do it that way for all the extensions and VAR?

Now, I don't understand the last parameters of this function (the allocate). Lev, you used 0.2, 0.2, 0.5. What is the difference if I use 0.5, 0.5, 0.2?

Thanks for your help, Lev.
But I didn't succeed in allocating, so I have to ask questions again. Here is what I got while trying it: glXAllocateMemoryNV always returns NULL. It seems that GL_NV_vertex_array_range is not supported, since glIsEnabled(GL_NV_vertex_array_range) returns false and GL_NV_vertex_array_range is defined as 1 in the headers, and QueryExtension() doesn't exist (in your source).

Otherwise, it may be due to GL 1.3 or GLX 1.3 (I've only got 1.2), but I'm not sure of that.

I found that this is not due to 1.2 vs. 1.3.

glXAllocateMemoryNV is not equal to 0, so I think it is correctly linked with the extension.

I don't think this is GLUT, because it uses GLX internally.

I have surely forgotten something important that I don't know about.
I haven't installed the AGP drivers for my chipset (VIA), and I haven't told my system to use them as tfpsly said. So, if AGP is not supported, I think it allocates in another manner. But I may be wrong.
Tell me.

If you do not use NVreg_EnableVia4x, the memory allocation will still work, but access to it from the video card will be very slow, resulting in a slowdown of your engine.

OK,

I still haven't got the AGP drivers.
But you said they aren't important for getting this working, so it doesn't seem to be the problem.

And the allocation always fails. I tried everything with no success.

Then glGetString(GL_EXTENSIONS) returns a null string. However, glxinfo reports that everything needed is supported. What a strange thing, in fact!
I couldn't figure out where the problem is.
glXAllocateMemoryNV is not null, nor are glXFreeMemoryNV and glVertexArrayRangeNV, but they all have the same address (something like 0x80700400, but I'm not sure).

JD

> I still haven't got the AGP drivers.

> But you said they aren't important for getting this working, so it doesn't seem to be the problem.

Who the hell said that???
You do need them in order to use AGP memory! Go get them and install them at once, or you'll die in horrible suffering in the trolls' third hell.

I mean, you should use the NVIDIA kernel module (the AGP driver) from NVIDIA, and the NVIDIA X11 driver module. Maybe you were talking about the driver from the kernel, in which case, well, what I have just said is useless =)

This is not useless, you were right.

I may have misunderstood what someone told me,
but it wasn't you!

JD

I was on the VIA site and the ASUS site, but I haven't found any Linux driver for my motherboard, neither the 4in1 package nor just the AGP driver.
Only Windows ones.

My motherboard is an ASUS A7V266 for Athlon XP+. Any idea where I can find them?

thanks.

JD

The kernel has an AGP driver, and the NVIDIA driver also comes with an AGP driver for some hardware; this is enabled by default. What do you get from
cat /proc/nv/card0

hello,

I tried many things yesterday evening.
First, I was wrong about calling glGetString(): it was before glutCreateWindow, so the context wasn't created yet at the time of the call. That's why it gave me a null string.

So now glGetString(GL_EXTENSIONS) gives me many things: it supports ARB, NV, EXT, SGIS, IBM, KTX extensions (under Linux), and of course vertex_array_range, vertex_array_range2, vertex_array_program and draw_range_elements. This was under GLUT.
Under GLX (I tried a GLX demo program), no array and no draw extensions are supported (maybe because it uses Mesa).
So I understand even less why it doesn't work.

I have tried all combinations of values for x, y, z in glXAllocateMemoryNV(1000*sizeof(GLfloat), x, y, z);
it didn't work.

Concerning glGetString(GL_RENDERER):
under GLUT, it returns GeForce2 MX/PCI/3DNOW!
under GLX, it returns Mesa X11

Do you think I have to recompile GLUT? My version is the original one provided by Mandrake (so it uses Mesa).

When trying to allocate in the glxdemo program, it stops with a segmentation fault in glXAllocateMemoryNV, which was obtained via glXGetProcAddressARB.

any ideas are welcome.

thanks

JD

You do not have to recompile GLUT, and you only need to include the check if you are distributing your program; you already know that it is supported on your card. Do you call it after glutMainLoop?
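
If it helps, here is roughly the order things have to happen in a GLUT program (a sketch; the window title and the printf are just placeholders for your own setup code):

#include <stdio.h>
#include <string.h>
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
    glutCreateWindow("VAR test");   // a GL context exists from here on

    // only now is it valid to query strings, resolve extension entry
    // points and allocate AGP memory
    const char *ext = (const char*)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_NV_vertex_array_range"))
        printf("VAR is supported, set it up here\n");

    glutDisplayFunc(display);
    glutMainLoop();                 // never returns
    return 0;
}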