Direct GL_YUV, GL_YCbCr or YCoCg texture support?

With the first C replaced by a U and the second one by a V, that could be a solution :slight_smile:

@+
Yannoo

The idea is to avoid copying/transforming all the data that webcams, PCTV cards, DVDs or other MPEG-1/2/4 sources can output … and instead DIRECTLY use their YUV output as an OpenGL texture …

@+
Yannoo

From what I have read, video cards now use buses that are much faster than a “simple” PCI bus, and DXT texture compression is fairly common on recent graphics cards … that is probably not for nothing …

@+
Yannoo

Can you think for one second that:

YUV (PCTV) -> RGB -> CPU -> GPU -> RGB -> YUV (TV screen for example)

can be faster than the much simpler

YUV (PCTV) -> GPU -> YUV (TV screen)

If yes, can you please explain your logic? :slight_smile:

@+
Yannoo

OpenGL is, in theory, an “Open Graphics Library”, so I don’t see why it should be limited to producing only 3D images …

It can normally be used to handle “simple” 2D imaging with various effects too …

Graphics isn’t limited to the 3D domain, so why think that 3D can deprecate what the OpenGL library did for a long time before???

To me, this is clearly a regression … and certainly not an evolution …

A long time ago, I found OpenGL really simpler to use across various OSes / hardware than its DirectX rival …

At first sight, OpenGL has now started to become more complex to handle than DirectX :frowning:

Perhaps (but perhaps not) some of the software companies that recently joined the OpenGL group have something to do with that …

@+
Yannoo

And who talked about CPU-side YUV->RGB conversion?
mfort rightfully said that fast hardware conversion can be done in a shader, which will both avoid the CPU conversion step and allow a lot of customization in the conversion.
So:
YUV (PCTV) -> CPU -> GPU with a proper shader.
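
For example, the whole conversion fits in a tiny fragment shader like this one (only a sketch: BT.601 coefficients, and it assumes the Y, U and V planes were each uploaded as a separate GL_LUMINANCE texture; the sampler names are made up):

/* Sketch of a fragment shader doing YUV -> RGB on the GPU.
   BT.601 coefficients; texY/texU/texV are assumed to be three
   GL_LUMINANCE textures holding the Y, U and V planes. */
static const char *yuv_to_rgb_fragment_shader =
    "uniform sampler2D texY;\n"
    "uniform sampler2D texU;\n"
    "uniform sampler2D texV;\n"
    "void main()\n"
    "{\n"
    "    float y = texture2D(texY, gl_TexCoord[0].st).r;\n"
    "    float u = texture2D(texU, gl_TexCoord[0].st).r - 0.5;\n"
    "    float v = texture2D(texV, gl_TexCoord[0].st).r - 0.5;\n"
    "    gl_FragColor = vec4(y + 1.402 * v,\n"
    "                        y - 0.344 * u - 0.714 * v,\n"
    "                        y + 1.772 * u,\n"
    "                        1.0);\n"
    "}\n";

The planes are uploaded as ordinary luminance textures, so the bus traffic stays the size of the raw YUV data.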

By the way:
GPU -> YUV (TV screen) and GPU -> RGB -> YUV are exactly the same thing.

Please stop thinking out loud on your keyboard :smiley:

Thanks for your response, ZbuffeR,

And all my apologies for being so heavy-handed :slight_smile:

Indeed, GPU -> RGB -> YUV is “exactly” the same thing as GPU -> RGB (-> YUV).
For example, with an SVGA output, we don’t have to do that last YUV conversion at all.

No, my problem isn’t really with the last RGB->YUV conversion (which is done in hardware), but with the input into the GPU.

=> Why do we have to do this with a shader that we must write ourselves, and end up with a big program, when the graphics card already has all the circuitry to do it “transparently”, and when we would only have to replace GL_RGB with GL_YUYV in 2 or 3 calls of an existing program to upgrade it so that it directly handles TV/DVD/MPEGx sources without big modifications???
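
To give an idea, what I would like is for uploading a frame to stay as simple as this (here I borrow the tokens of the APPLE_ycbcr_422 extension, which already does roughly this where the driver exposes it; the function and variable names are mine):

#include <GL/gl.h>
#include <GL/glext.h>   /* GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE */

/* Upload one packed 4:2:2 (YUY2) frame straight into a texture.
   Only valid where GL_APPLE_ycbcr_422 is exposed; tex, width,
   height and frame come from the caller. */
static void upload_yuy2_frame(GLuint tex, GLsizei width, GLsizei height,
                              const void *frame)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_YCBCR_422_APPLE, GL_UNSIGNED_SHORT_8_8_APPLE, frame);
}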

For example, when I use a glVertex3f(x,y,z), I don’t have to understand exactly which transistor in which part of the GPU handles it … the API normally handles that for me … Otherwise it would be like having to write a vertex shader just to display a triangle …

With shaders, I have the impression that all the good things OpenGL gained in its first years, and kept for a long time, are now being deprecated very, very fast … to be replaced by something that isn’t really standardised and is much harder to handle?

The vast majority of OpenGL tutorials don’t say one line about shaders … but most of them do cover the standard techniques for handling textures, which use only a glTexImage2D and nothing about shaders.

So my remark is very simple: why make something difficult (i.e. I have to do all the conversion myself in a shader) when it could be made very simple (with an API that handles it for me)???

@+
Yannoo

But OK, this can be done in an external API … only without the hardware acceleration …

@+
Yannoo

Why do we have to do this with a shader that we must write ourselves, and end up with a big program, when the graphics card already has all the circuitry to do it “transparently”, and when we would only have to replace GL_RGB with GL_YUYV in 2 or 3 calls of an existing program to upgrade it so that it directly handles TV/DVD/MPEGx sources without big modifications???

1: Because hardware does not necessarily have this. I’d bet that some driver makers are doing a lot of this stuff internally in optimized shaders.

2: Because it’s a waste of their time to do something that we can do easily enough.

The vast majority of OpenGL tutorials don’t say one line about shaders … but most of them do cover the standard techniques for handling textures, which use only a glTexImage2D and nothing about shaders.

That’s because the majority of OpenGL’s tutorials are very old. That’s a failing with OpenGL’s tutorials, not with graphics cards.

And wouldn’t it be a good thing to easily divide the size of the data transferred to/from the GPU over the AGP/PCI/VLB bus??? For example, a 4:2:0 frame is 12 bits per pixel versus 24 for RGB, so half the bandwidth.

@+
Yannoo

Personally, I don’t know many people who know the YUV to RGB conversion formula …

But I know a lot of people who know how to display an RGB image via OpenGL :slight_smile:
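
For reference, the formula itself is not that complicated (BT.601 coefficients, full-range 0..255 samples, Cb/Cr centred on 128; only a sketch):

/* Textbook BT.601 YCbCr -> RGB; real code usually uses fixed point. */
static unsigned char clamp255(float v)
{
    return (unsigned char)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

static void ycbcr_to_rgb(int y, int cb, int cr,
                         unsigned char *r, unsigned char *g, unsigned char *b)
{
    *r = clamp255(y + 1.402f * (cr - 128));
    *g = clamp255(y - 0.344f * (cb - 128) - 0.714f * (cr - 128));
    *b = clamp255(y + 1.772f * (cb - 128));
}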

@+
Yannoo

YUV formats usually contain macropixels. Depending on the horizontal and vertical sampling, one macropixel covers one or more RGB pixels. There are also packed and planar YUV formats. Packed formats have interleaved Y, U and V. Planar formats have all the Y values, then all the U, then all the V; or all the Y, then U in one half of a plane and V in the other half… too many variations.
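
For example, in packed YUY2 (4:2:2) one macropixel Y0 U Y1 V covers two RGB pixels that share the same chroma sample; a small unpacking sketch (my own helper, BT.601 coefficients, clamping left out for brevity):

/* Expand one YUY2 (packed 4:2:2) scanline into 8-bit RGB. */
static void yuy2_line_to_rgb(const unsigned char *src,
                             unsigned char *dst,
                             int width)            /* in pixels, assumed even */
{
    for (int x = 0; x < width; x += 2) {
        float u = src[1] - 128.0f;
        float v = src[3] - 128.0f;
        for (int i = 0; i < 2; ++i) {              /* two pixels per macropixel */
            float y = src[i * 2];                  /* Y0, then Y1 */
            dst[0] = (unsigned char)(y + 1.402f * v);
            dst[1] = (unsigned char)(y - 0.344f * u - 0.714f * v);
            dst[2] = (unsigned char)(y + 1.772f * u);
            dst += 3;
        }
        src += 4;
    }
}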

It would be really nice to have such a conversion built in, but because the same functionality can already be achieved on current hardware with the current API, I doubt that your request will reach the driver developers.

Oh… I almost forgot… any chance to get glEnable(GL_SHADOWS) and glDisable(GL_SHADOWS) in OpenGL? :slight_smile:

Perhaps in the future, but only when OpenGL can handle triangles the way raytracing or radiosity handle them :slight_smile:

Object instancing is something that can help a lot with this …

@+
Yannoo

Various (S)VGA modes are also packed or planar … interleaved or not … and that doesn’t seem to be a problem when I look at the evolution from the 4-colour CGA of the first PCs to the HD resolutions we have now … and I am only talking about the last two decades :slight_smile:

@+
Yannoo

Yooyo, a macroblock is “only” something like an 8x8 block of pixels (i.e. something like what we call a tile): just as a video contains more than one image, an image contains more than one tile/macroblock … and each block can be processed more or less independently …

@+
Yannoo

“Only” the 4:1:1, 4:2:0, 4:2:2 and 4:4:4 formats have to be handled … and they can easily be converted from one to another.

So the hardest task is to handle the packed/planar organisation, and I don’t think that is too hard a task.
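
For example, going from 4:2:0 up to 4:4:4 can be as simple as repeating each chroma sample (a nearest-neighbour sketch with made-up names; real players usually interpolate):

/* Expand one 4:2:0 chroma plane (width/2 x height/2 samples) to full
   resolution by nearest-neighbour duplication. */
static void chroma_420_to_444(const unsigned char *src, unsigned char *dst,
                              int width, int height)   /* full resolution, even */
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst[y * width + x] = src[(y / 2) * (width / 2) + (x / 2)];
}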

@+
Yannoo

Damn, that’s brilliant! We need to get you on the ARB! :smiley: And here all this time I’ve been doin’ it the hard way… :stuck_out_tongue:

Ten quatloos APPLE_rgb_422 is adopted, said the first.
Twenty quatloos it is adopted and deprecated, said the second.
Fifty quatloos the majority moves to abstain, said the third.
One hundred quatloos the motion is set aside, said the first.
Two hundred quatloos the motion is referred to a committee, said the second.
Five hundred quatloos all motions are postponed indefinitely, said the third.

And the YUV formats that are handled could be voluntarily limited to only the 2 or 3 most common cases … a little support is far better than nothing …

@+
Yannoo

OpenGL’s rasterisation rules are mandatory for handling things such as orthogonality across multiple passes … or getting the same results on various implementations.

Now, pixel shaders can do something like what raytracing does (that’s the idea: run a “complex function” on each pixel), so perhaps one day we can avoid multiple passes by using shaders …

So perhaps one day OpenGL can loosen the rasterisation rules and begin to handle more advanced techniques such as raytracing (and object instancing is really a very good thing for that evolution) … but OK, this is a dream :slight_smile:

@+
Yannoo