This may be more of a CV question than a GL one, but since I’m trying to make the two kits work together, I’ll ask anyway, and let the flames fall where they may.
I’m trying to pick out the necessary information from an IplImage (look up IPL and/or OpenCV) and use it as an OpenGL texture. The image I’ve loaded (supposedly from a live webcam) appears to be pure garbage. I’ve since found that the CV structure I was using creates the image in BGR format, while I was trying to write the texture in OpenGL in RGB format.
However, I doubt this is the sole cause of my problems. Any thoughts or advice on how to make this idea work? Or better yet, any BETTER ideas?
Yes, this is a CV question. You should look into how the data is stored. Have you considered things like byte alignment? (It seems CV uses 4- or 8-byte alignment.) You should dive into some documentation of the IplImage type (what does each field in the struct really mean?).
By “pure garbage”, do you mean that it’s pure noise, or can you still see the structure of the image (but with the wrong colors and line/pixel-shifting effects)? If it’s noise, then there’s probably some kind of compression or similar involved (unlikely). Otherwise it’s probably a combination of color ordering, byte alignment, and (possibly) row padding. Color interleaving is also a possibility: are colors interleaved on a per-pixel, per-row, or per-image basis? (OpenGL and most image formats use per-pixel interleaving.)
It COULD also be that the data is stored in bitplanes, which were commonly used on old hardware that couldn’t display more than 4-16 colors, but that’s a very remote possibility (CV is a modern thing, right?).