I have some small background in OpenGL programming and have looked into OpenGL ES as well as GLSL over the last few days.
I am cross-developing on a Windows system (MSVC) for an embedded system (Tegra 3 chip, WinCE 7.0), which is also able to run Qt 4.8 + GL ES 2.0.
Goal 1: De-interlacing an image with inverted odd lines
I have a big chunk of RAM containing a rasterized 16-bit image of a defined width and height. Unfortunately, each odd line (O) of that image needs to be mirrored on the x axis and shifted to the right by some pixels, while each even line (E) only needs to be shifted to the left:
given input lines …
E: 0123456789
O: 0123456789
first mirror the odd lines on the x axis …
O: 9876543210
then shift even lines left and odd lines right (here: by 2 pixels) …
E <<<: 23456789??
O >>>: ??98765432
I’d like to process those pixels as fast as possible to achieve high refresh rates with low CPU usage. Is that something I’m able to achieve with GL ES and/or shaders?
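(To make the question concrete, here is a rough sketch of the kind of fragment shader I imagine for this — GLSL ES 1.00; the uniform names, the shift convention, and passing the shift as a normalized texture coordinate are my own assumptions:)

```glsl
precision mediump float;

uniform sampler2D u_image;     // source image uploaded as a texture
uniform float u_texHeight;     // texture height in pixels
uniform float u_shift;         // shift amount in normalized coords, i.e. pixels / width (assumption)

varying vec2 v_texCoord;

void main() {
    vec2 uv = v_texCoord;
    // Decide whether the current row is odd or even.
    float row = floor(uv.y * u_texHeight);
    if (mod(row, 2.0) >= 1.0) {
        // Odd line: mirror on the x axis, then shift content to the right.
        // (Shifting content right means sampling further left.)
        uv.x = 1.0 - (uv.x - u_shift);
    } else {
        // Even line: shift content to the left (sample further right).
        uv.x = uv.x + u_shift;
    }
    gl_FragColor = texture2D(u_image, uv);
}
```

(The signs may need flipping depending on the texture coordinate convention; samples falling outside [0, 1] would land in the "??" regions and depend on the wrap mode.)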
Goal 2: Contrast / brightness adjustment (automatic)
The input data contain 16-bit brightness values, which have to be mapped (automatically) down to 8-bit values.
I guess I have to compute the brightness offset and contrast multiplier myself, outside of GL? Or is it possible to compute them in a fragment shader as well?
Once I have those values, it should be possible to process each “pixel” with a fragment shader. But how do I handle the conversion of the 16-bit-per-pixel input data to an RGB output format?
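(My current idea — purely a sketch, and the byte order and texture format are my own assumptions — would be to upload the 16-bit data as GL_LUMINANCE_ALPHA, i.e. two 8-bit channels per pixel, reassemble the value in the shader, and apply offset/scale values computed on the CPU:)

```glsl
precision highp float;

uniform sampler2D u_image;   // 16-bit data uploaded as GL_LUMINANCE_ALPHA (assumption)
uniform float u_offset;      // brightness offset in [0, 1], computed on the CPU (assumption)
uniform float u_scale;       // contrast multiplier, computed on the CPU (assumption)

varying vec2 v_texCoord;

void main() {
    // .r = luminance byte, .a = alpha byte; low/high byte order is an assumption.
    vec2 bytes = texture2D(u_image, v_texCoord).ra;
    // Reassemble the 16-bit value as a float in [0, 1].
    float v = (bytes.y * 255.0 * 256.0 + bytes.x * 255.0) / 65535.0;
    // Map to the 8-bit display range and clamp.
    float out8 = clamp((v - u_offset) * u_scale, 0.0, 1.0);
    gl_FragColor = vec4(out8, out8, out8, 1.0);
}
```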
Goal 3: “Bloated” input signal layout
My input data (host memory) contain up to four signals per pixel, with 16 bits per signal (i.e. 8 bytes per pixel):
Byte 0…1 = signal 1
Byte 2…3 = signal 2
Byte 4…5 = signal 3
Byte 6…7 = signal 4
I’d like to perform some simple math on those signals, for example:
Red = Signal 1
Green = Signal 2
Blue = Signal 3+4
Is OpenGL capable of handling those formats all the way up to the shader level?
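(Since GL ES 2.0 has no 64-bit-per-pixel texture format, my rough idea — the packing into two RGBA8 textures is my own assumption — would be to split the signals across two textures and combine them in the shader:)

```glsl
precision highp float;

uniform sampler2D u_sig12;  // RGBA8: sig1 low/high byte, sig2 low/high byte (assumed packing)
uniform sampler2D u_sig34;  // RGBA8: sig3 low/high byte, sig4 low/high byte (assumed packing)

varying vec2 v_texCoord;

// Reassemble two 8-bit channels into a 16-bit value normalized to [0, 1].
float u16(vec2 bytes) {
    return (bytes.y * 255.0 * 256.0 + bytes.x * 255.0) / 65535.0;
}

void main() {
    vec4 t12 = texture2D(u_sig12, v_texCoord);
    vec4 t34 = texture2D(u_sig34, v_texCoord);
    float s1 = u16(t12.rg);
    float s2 = u16(t12.ba);
    float s3 = u16(t34.rg);
    float s4 = u16(t34.ba);
    // The example mapping from above: R = signal 1, G = signal 2, B = signal 3 + 4.
    gl_FragColor = vec4(s1, s2, clamp(s3 + s4, 0.0, 1.0), 1.0);
}
```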
Thank you for reading,