I’m just displaying images with glDrawPixels from an array of unsigned bytes. My code was working fine for 512x512 images until I ran into a weird problem of skewed images for other sizes. After spending a lot of time assuming it was my code, I read in the documentation that glDrawPixels skips certain bytes based on settings made with glPixelStore*.
Out of curiosity, why is that done? You have a pointer to an array where you have specified the size of each element, the width and the height. That’s enough to iterate through the data. Why skip bytes at the end of each row and mess things up? If I specified that I’m rendering GL_LUMINANCE of GL_UNSIGNED_BYTE, that should be enough to know that each row starts at row * width * sizeof(GLubyte).
Out of curiosity, why is that done?
Because it allows users to choose not to pack the data tightly. Or you can upload only the green component of a texture as luminance. Things like that.
BTW, if you want anything approaching performance, you should avoid glDrawPixels entirely.
What function would you suggest to draw a pixel buffer on the screen from openGL?
The one you’re already using: glDrawPixels. If the default unpack settings don’t suit you, then change them. Just setting the unpack alignment to 1 is enough to get what you want in all cases.
If it is a static image, make it into a texture and draw a quad. If it will be changing from time to time, use the texture method as well.
If it will change every frame, use glDrawPixels.
Also, use a good format like GL_BGRA. The others are slower.
Thanks for the input. I have actually started using textures. The reason is I’m trying to display slices of images which are aligned. I wanted to allow looking through them from any orientation. Originally I was going to interpolate the values myself and write them to screen but then I found a great extension - glTexImage3D. This does exactly what I want and I don’t have to worry about sampling and a bunch of other things.
My values are grayscale, so I’ve been passing GL_LUMINANCE wherever a format is specified. Unfortunately OpenGL seems to be doing some sort of blending, because when I render the texture onto a quad I can see all the image slices (great for a volume-rendering sort of thing, but that’s not what I want). Any suggestions? I know it’s probably a setting somewhere.
It should not blend slices. Check that you are really sampling at exact texel coordinates.
Strangely enough it does.
I have two test snippets:
glVertex3f(0, 512, 0);
glVertex3f(512, 512, 0);
glDrawPixels(512, 512, GL_LUMINANCE, GL_UNSIGNED_BYTE,
             (GLvoid *) (texture3D + 512*512*currSliceIndex));
texture3D is a pointer to my data from which I created the texture.
The glDrawPixels path iterates through the slices as expected. However, the commented-out code where the texture is applied to the quad doesn’t. I can see the last and the first slices rendered onto the quad (not sure about the other ones).
I’m just trying things out so excuse the poor coding.
If you want to hit exactly on a slice, currSliceIndex/numSlices won’t give you the correct texture coordinate.
For example, if currSliceIndex is 0, I assume you expect the first slice to be rendered. That is not what happens: at coordinate 0 you are on the very edge of the texture, meaning you get a 50/50 blend of the first and last slices, assuming you’re repeating the texture (instead of clamping it).
To hit the slice exactly in the center, where the slice image is defined, you need to offset the slice index by 0.5 before normalizing.
Same for all texture coordinates.
Thanks, that is exactly it. That’s why it looked “blended” when in fact it wasn’t. I was seeing the back and the front slices on the first image.
This forum is great by the way. Always helpful.