Solaris and pixels > 8 bits

I have a simple GLUT program that creates a 512 x 1024, 10-bit greylevel wedge image (max grey 1023) stored in GL_UNSIGNED_SHORTs. On Windows, I can use the pixel-transfer SCALE operator (set to 64.0) to scale the max grey up to 65535. On Solaris, the same code produces a black window. If I do the scaling in software, Solaris displays the wedge correctly. For 8-bit pixels, the Solaris BIAS and SCALE operators work correctly. Is there a fix (besides not running on Solaris)?
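Here is a minimal sketch of the kind of thing I'm doing (not my actual code; the ramp orientation, window setup, and names are just illustrative):

/* Minimal sketch: draw a 512 x 1024 wedge of 10-bit greys (0..1023)
 * stored in unsigned shorts, using the pixel-transfer scale to map
 * the 10-bit max toward the 16-bit max. */
#include <GL/glut.h>

#define IMG_W 512
#define IMG_H 1024

static GLushort wedge[IMG_H][IMG_W];

static void build_wedge(void)
{
    int x, y;
    for (y = 0; y < IMG_H; ++y)
        for (x = 0; x < IMG_W; ++x)
            wedge[y][x] = (GLushort)((x * 1023) / (IMG_W - 1));  /* 0..1023 */
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glRasterPos2i(-1, -1);  /* lower-left corner of the window */

    /* Scale 10-bit data (max 1023) toward 65535; luminance is expanded
     * to R, G, B, so set all three scales. */
    glPixelTransferf(GL_RED_SCALE,   64.0f);
    glPixelTransferf(GL_GREEN_SCALE, 64.0f);
    glPixelTransferf(GL_BLUE_SCALE,  64.0f);

    glDrawPixels(IMG_W, IMG_H, GL_LUMINANCE, GL_UNSIGNED_SHORT, wedge);
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE);
    glutInitWindowSize(IMG_W, IMG_H);
    glutCreateWindow("10-bit wedge");
    build_wedge();
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

This works as expected on Windows; on Solaris the window comes up black.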

I found the “Sun OpenGL 1.3 for Solaris Implementation and Performance Guide”, and it always refers to GL_SHORT for 2-byte pixels, never GL_UNSIGNED_SHORT. Changing to GL_SHORT made the black window go away. If I adjust my scale factor to match the software-scaled display, the overall image is similar, but there is noticeable banding, i.e., vertical columns of pixels with nearly the same greylevel. Is the Solaris pipeline clipping to 0-255 at the scale step instead of at the display step? And the scale factor that produces the best-looking match is 5.7!! I would have expected 32.0 (to take a maximum value of 1023 to 32736, just under the signed-short max of 32767). Help! Does anybody have any experience with Solaris and OpenGL???
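For anyone trying to reproduce this, the change amounts to swapping the type and scale in the display() routine of the sketch above (values 0-1023 have the same bit pattern as signed or unsigned shorts, so the same buffer works); 32.0 is what the arithmetic suggests, not the 5.7 that actually looks right here:

/* Drop-in change to display() from the sketch above: signed shorts map
 * to roughly c/32767, so 1023 * 32.0 = 32736, just under the signed
 * max -- yet 5.7 is the factor that looks best on Solaris. */
glPixelTransferf(GL_RED_SCALE,   32.0f);
glPixelTransferf(GL_GREEN_SCALE, 32.0f);
glPixelTransferf(GL_BLUE_SCALE,  32.0f);
glDrawPixels(IMG_W, IMG_H, GL_LUMINANCE, GL_SHORT, wedge);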

I have finally got it working. If anybody else out there is doing image processing with > 8-bit-per-pixel images on Sun machines and is “disappointed” with GL_RED_SCALE, etc., you can email me at “paul.maynard at ngc.com” to find out how I got hardware-accelerated scaling to work. I’m doing it this way since we seem to be a small minority of OpenGL coders (based on the number of helpful hints I got …), and I’d like to know someone else I can talk to.