Direct access to texture memory & drawing buffers

Thanks for the info, Matt. I guess that means I shouldn’t be holding my breath for texture borders. (What do people do with textured spheres?)

You say that current NV hardware can already do non-power-of-two texturing; would there be any performance hit or texture-memory inefficiency involved?

A sphere… can you fit it all into one texture and use GL_REPEAT? If so, there shouldn’t be a problem.

Depending on the usage, there may or may not be a performance hit. I’d expect it to be a memory savings because previously you would have had to pad such a texture to the next power of two.

  • Matt

Originally posted by mcraighead:
A sphere… can you fit it all into one texture and use GL_REPEAT? If so, there shouldn’t be a problem.

No. It’s fundamentally impossible to draw a sphere mesh properly with a single texture. I think you’re suggesting something like wrapping a Mercator projection onto a globe; this doesn’t work, because you end up trying to map rectangular areas of the texture (around each pole) to triangular areas of the model. Big, obvious, horrible discontinuities.

This is why texture borders are such a big deal for me. An alternative would be to enhance OpenGL quad rendering so that texcoords are interpolated between the four corners, instead of splitting the quad into two tris, but I’m not holding my breath for that…
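For illustration, the usual single-texture spherical mapping looks something like the sketch below (the names are mine, not from any post here). The u coordinate wraps once around the equator and v runs pole to pole, so a whole row of texels collapses to a single point at each pole, and there is a seam where u wraps from 1 back to 0:

    #include <math.h>

    #define PI 3.14159265358979f

    /* Sketch: map a unit vector on the sphere to (u, v). */
    void sphere_texcoords(float x, float y, float z, float *u, float *v)
    {
        *u = atan2f(y, x) / (2.0f * PI) + 0.5f; /* seam where u wraps 1 -> 0 */
        *v = asinf(z) / PI + 0.5f;              /* texel rows pinch to a point at each pole */
    }

GL_REPEAT hides the wrap seam along u, but nothing about a single rectangular texture fixes the pinching at the poles.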

About having a pointer to the framebuffer:
Isn’t this design exactly the kind of thing that today’s CPUs and OSes try to prevent? I mean, while an application is running, every memory reference is checked by the processor’s memory protection before the operation is performed.

How can any video card vendor override this?
Even DDraw doesn’t give you a pointer (or does it?)

It is absolutely possible for us to give applications a pointer to video memory. Video memory is uncached, though, and so there are essentially no safeguards in place to make sure you know what you are doing.

In fact, wglAllocateMemoryNV (from NV_vertex_array_range) can allocate video memory and can provide your application with a pointer to video memory.

  • Matt
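For those asking how it’s used: the sketch below follows the NV_vertex_array_range spec (the wgl entry points are fetched with wglGetProcAddress; error handling omitted). The three float arguments are usage hints in [0, 1]; per the spec, a priority near 1.0 requests video memory, while lower values request AGP memory:

    /* Allocate 1 MB: (size, readFrequency, writeFrequency, priority). */
    GLsizei size = 1024 * 1024;
    void *buf = wglAllocateMemoryNV(size, 0.0f, 0.0f, 1.0f);
    if (buf) {
        glVertexArrayRangeNV(size, buf);               /* make the range current */
        glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
        /* ... write vertex data into buf, point glVertexPointer at it, draw ... */
        glDisableClientState(GL_VERTEX_ARRAY_RANGE_NV);
        wglFreeMemoryNV(buf);
    }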

Very interesting. I’ve never heard of that extension. What’s it for and how can it be used?

Can someone just do

pointer[0] = 255;

if those are byte addresses? Or do they have to be DWORD addresses?

V-man
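As far as I know (treat this as a sketch continuing the one above, not gospel): the pointer is byte-addressable, so single-byte stores are legal; but since the memory is uncached, scattered small writes are slow, and large sequential copies are the sensible way to fill it:

    #include <string.h>

    unsigned char *p = (unsigned char *) buf;  /* buf from wglAllocateMemoryNV above */
    p[0] = 255;                 /* legal: byte-granular store */
    memcpy(p, vertices, size);  /* preferred: one sequential burst
                                   (vertices is a hypothetical source array) */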

Everyone’s saying that texturing is too slow, especially when reading textures off disk. Why not use texture objects? Or am I just too naive?

(Let me guess, the second one…)

How do you use wglAllocateMemoryNV? And once you get the pointer, how do you write to it? I heard someone mention DirectDraw, but I hate DD with a passion… it’s so unnecessarily complicated. And doesn’t DirectDraw give you pointers directly into video memory, or something similarly direct, anyway?

What’s the quickest way to do procedural textures, then?

I’m using them for clouds, which need to be updated once per frame. A large chunk of the frame time goes to downloading the computed texture (because the texture is big). If the memory I filled with the cloud texture were already on the card, this wouldn’t be an issue.

I’m trying out another algorithm to render an approximation using small textured quads, and glCopyTex… to get them into the final texture. This doesn’t look good at the moment.

So, this would be a good reason to allow access to some “closer to the metal” texture buffer.
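For reference, the standard per-frame path is to keep one texture object and replace its contents with glTexSubImage2D rather than recreating it with glTexImage2D, so the driver only transfers pixels into an existing allocation (cloud_tex, cloud_pixels, width, and height are placeholder names):

    glBindTexture(GL_TEXTURE_2D, cloud_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,        /* level 0 */
                    0, 0, width, height,     /* replace the whole image */
                    GL_RGBA, GL_UNSIGNED_BYTE, cloud_pixels);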

We are working on better ways of dealing with dynamic textures. More news when it becomes available.

Today, I’d recommend computing the texture on the CPU, unless you can be more efficient by rendering it. (Environment cubemaps are best rendered, while cubemaps that evaluate some mathematical function are often best computed on the CPU, for example.)

  • Matt
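A sketch of the “render it” path for environment cube maps, assuming ARB_texture_cube_map: draw each 90-degree face view into the back buffer, then copy it straight into the cube map face so the pixels never leave the card (the two helpers are hypothetical):

    int face;
    glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, env_tex);
    for (face = 0; face < 6; face++) {
        set_camera_for_face(face);  /* hypothetical: 90-degree FOV toward +X, -X, +Y, ... */
        draw_scene();               /* hypothetical: render the surroundings */
        glCopyTexSubImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X_ARB + face,
                            0, 0, 0, 0, 0, size, size);
    }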

You can’t assume you know how video memory is being used. In OS X, for example, all frame buffers are shared using a common context across VRAM and main memory. There is no ‘pointer’ to the texture until you move it to main memory and take control of the context; until then, only the kernel knows where it is. The GL driver manages its own buffers using the same kmod, but if you try to manage these data structures directly:

a) You’ll be a lot slower at it

b) You have no idea what format they’re in after they’ve been translated into whatever layout the GPU actually handles.

The best you can ask for is a method to prefetch textures and such, which I think is already in there.
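For reference, the prefetch mechanism presumably meant here is the core GL 1.1 texture-priority interface: you hint which textures should stay resident and can query whether they currently are (cloud_tex is a placeholder):

    GLuint tex = cloud_tex;
    GLclampf priority = 1.0f;   /* near 1.0 asks the driver to keep it resident */
    GLboolean resident;

    glPrioritizeTextures(1, &tex, &priority);
    glAreTexturesResident(1, &tex, &resident);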