Any chance of a full screen textured quad extension? I know this may sound trivial, but it’s annoying having to switch to ortho mode to do a full-screen quad correctly.
Also, it could completely bypass clipping, lighting and transform, and basically just jump straight to rasterization.
I notice in Nvidia’s latest docs about in-game special effects, they mention full-screen post-processes becoming more common with increases in fill-rate. These kinds of effects always use full-screen quads.
It would be top if we could just do a single function call and it would blast the screen with the currently bound textures, using a predefined set of texture coords, so we don’t even have to specify those either.
I know the performance increase would probably be negligible… (or maybe not, if a lot of the setup pipeline is skipped, though it’s only 4 vertices…) but it would be nice…
When I do something similar, I have problems with what coordinates to pass within the texture, as well as for the quad’s edges.
I can’t pass [0,1] for the texture coordinates, because with texture filtering, that will blend with the border colour or with the repeated edge. Thus, I have to adjust the texture coordinates in by half a texel on each side.
Then comes the question: should I pass [0,1] for the quad, or should I pass the center of the corner pixels for the quad, too? If I pass the entire quad, then in effect, my texture will be stretched by one pixel across the screen. If I pass the center of the corner pixels, it seems as if it would match up, but then my GF2MX drops one pixel on the right (or left, depending on where I wiggle it).
So far, I’ve found mostly joy by adjusting the right/top coordinate in by half a pixel, and the left/bottom coordinate not at all, which makes no sense whatsoever.
Someone, please enlighten me on a) what the right behaviour is supposed to be, and b) thus, what the right thing to do is.
V-Man, I can’t see why splitting the quad up could possibly improve performance, apart from breaking up the fill-rate a bit.
Unless the framebuffer is block arranged, and rendering little squares at a time, rather than entire raster lines, is better for the GFX cache?? Perhaps better for the texture cache too…
I’ll do some timings on this at home.
JWatte, never noticed a problem myself, but then I don’t pay much attention to the edges of stuff, maybe I should…
I assume glVertex2f is the same as glVertex3f(x, y, 0.0f)?
Like it is with texture coords. Which means it won’t pass the near clipping plane, and it just doesn’t work at all. You at least need to set up an ortho projection matrix, so that a Z value that clears the near clipping plane doesn’t scale the quad.
Or am I being lame or something? I was thinking of using a VP that just writes out the HPOS position, avoiding any matrices. Should be quite quick. I assume HPOS goes from -1,-1 to 1,1? Or something similar…
I have to wonder though, why get rid of glOrtho? You are still changing both the projection and modelview matrices, so I fail to see what you have really achieved, other than making the code harder to read. When using glOrtho it’s more obvious what you are doing.
matrix_multf( mat, m, (MAT_FLAG_GENERAL_SCALE|MAT_FLAG_TRANSLATION));
}
(taken from the mesa source code)
All this will be done on the CPU, as only the final matrix is uploaded to the GPU…
so, we’re saving a fair amount of calculation. Reason enough, I would have thought.
[edit]
Also, you no longer need to know the dimensions of the viewport to draw the quad…makes the code a bit tidier.
[This message has been edited by knackered (edited 04-22-2002).]
Couldn’t you just save that matrix after creating the RC and then load it instead of the identity matrix? That would eliminate all that code. Its what I do. As far as having to know what the viewport dimensions are, well, what’s wrong with that? You have to know that anyway, so I don’t see what you gain from using [-1…1] instead of [0, width] (if you wanted to use that range, nothing stopping you from using [-1…1] with glOrtho), etc.
Originally posted by DFrey: Couldn’t you just save that matrix after creating the RC and then load it instead of the identity matrix? That would eliminate all that code. Its what I do. As far as having to know what the viewport dimensions are, well, what’s wrong with that? You have to know that anyway, so I don’t see what you gain from using [-1…1] instead of [0, width] (if you wanted to use that range, nothing stopping you from using [-1…1] with glOrtho), etc.
The clue is he does NOT need to know the viewport, as left–right goes from -1 to 1 if you simply load the identities, and that’s always true for the current viewport… now that’s cool, isn’t it?
kieran, am I splitting hairs? Of course. That’s no mystery to me.
“… does NOT need to know the viewport …”
But that is not true! The viewport is a fundamental element of the render context. And again, there is nothing stopping one from using the [-1,1] range with glOrtho. My argument centers on the readability of the code, not its speed.
When you set all your matrices to identity, the GL spec defines the coordinate range for glVertex and the other position-setting functions as -1 = left, 1 = right…
That means you don’t need to know the size of your current glViewport… which helps a lot if you don’t know it and would otherwise need to query it first with a glGet, for example…
Well, OK, glRectf does not send good texcoords as far as I know, but for full-screen drawing of a simple quad it’s useful (for stencil shadows, for example; I used it that way once).