I have seen this before, but will just check.
The quickest way to display a BMP is as a texture on a poly, as opposed to glDrawPixels etc.
Then obviously use translate, scale, and rotate to position it in the window.
What's the quickest way to clip the edges? I can think of a few ways, but I don't think they will be particularly quick.
Either add additional clipping planes, or simply modify the polygon's vertex/texture coordinates to simulate clipping.
Use GL_SCISSOR_TEST and specify, in window coordinates, the rectangle you wish to use.
This also limits glClear etc. to that rect.
Hope this helps.
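The scissor approach is only a couple of calls. A minimal sketch (x, y, w, h are placeholder window-space values for the visible rectangle; this assumes a current GL context):

```c
/* sketch: clip all rendering (glClear included) to a window-space rect;
   (x, y) is the lower-left corner of the rectangle */
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, w, h);

/* ... draw the textured poly here ... */

glDisable(GL_SCISSOR_TEST);
```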
my images are not power-of-two in their dimensions, so I have padded them and altered the poly/texture coordinates appropriately. This seems OK, but as images get larger, more padding is needed. At what sort of stage would it be better to revert to glDrawPixels, or never?
Out of curiosity, how can one align and scale a textured quad to fit the screen "exactly"? Is there some math involved? Also, what is a reasonable texture size to use for a fullscreen 2D image? I assume that I couldn't just use a 1024x1024 pixel image, could I? Or would I have to break it up?
You can use any texture size you like, as long as the target GL implementation supports textures that size. As implementations differ, you had best use glGet with GL_MAX_TEXTURE_SIZE to return the maximum texture dimension supported on your implementation, or on your friend's/customer's machine. GeForce or better NVIDIA cards support 2048x2048 and also support textures of non-equal dimensions via the extension NV_texture_rectangle.
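The query itself is a one-liner (sketch, assuming a current GL context):

```c
GLint max_tex = 0;
glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex);
/* any texture you create should have width and height <= max_tex */
```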
I assume that non-equal-dimension textures are not supported on too many cards? And you have to find out, via the card, that they aren't. It would make sense for OpenGL to query the card when you specify the dims of the texture…
gtada - check the FAQ link, first page, right side. IIRC section 9.
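On the "fit the screen exactly" question, one common approach (a sketch — win_w/win_h are assumed to hold the window size and tex the texture id) is to set an orthographic projection matching the window, so one unit equals one pixel, and draw the quad at the window corners:

```c
/* match the projection to the window: 1 unit == 1 pixel */
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, (double)win_w, 0.0, (double)win_h, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,         0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)win_w, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)win_w, (float)win_h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,         (float)win_h);
glEnd();
```

If the texture is padded (as above), replace the 1.0f texture coordinates with s_max/t_max so the padding isn't stretched across the screen.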
So I load my jpg files and tex map them, clip em etc…
Now I want to draw on them using the mouse. What's the best way to do this — sorry, quickest way? The obvious way is to keep a big array of x/y points and then loop through this array, drawing the points/lines. Would it be better to set up a buffer and then do logic ops with the back buffer?
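The point-array approach might look roughly like this (a sketch — the function names and fixed-size array are placeholders, it assumes the pixel-exact ortho setup so window coordinates can be passed straight to glVertex, and a real mouse handler would also need to flip y between window and GL conventions):

```c
#define MAX_POINTS 4096
static int pts[MAX_POINTS][2];   /* recorded mouse positions */
static int n_pts = 0;

/* called from the mouse-motion handler (placeholder name) */
void on_mouse_drag(int x, int y)
{
    if (n_pts < MAX_POINTS) {
        pts[n_pts][0] = x;
        pts[n_pts][1] = y;
        n_pts++;
    }
}

/* called from the display function, after drawing the textured quad */
void draw_strokes(void)
{
    int i;
    glDisable(GL_TEXTURE_2D);    /* don't texture the stroke lines */
    glBegin(GL_LINE_STRIP);
    for (i = 0; i < n_pts; i++)
        glVertex2i(pts[i][0], pts[i][1]);
    glEnd();
    glEnable(GL_TEXTURE_2D);
}
```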
Just wanted to clear up what sounds like a misconception in some of the responses in this thread. Textures do not have to be square: they just need a width and height that are each a power of two, with neither bigger than GL_MAX_TEXTURE_SIZE. The extension NV_texture_rectangle can be used to create textures that aren't a power of two; you don't need it for a texture that is, say, 1024x256.