v4glCreateConvertor takes two parameters.
The first parameter is the input fourcc and the second is the texture format you prefer for later usage. The fourcc description defines the pixel format and sampling, which is enough to build a proper shader. The API should be smart enough to choose a proper RGB/RGBA texture to handle 8-bit or 10-bit formats.
What do you actually want to do: direct conversion from one fourcc to another fourcc, or input fourcc to RGB and then to an output fourcc?
I have another thing to resolve: which OpenGL version to use? If someone wants to use my lib in a pure 3.2 context then I can't use glBegin/glEnd calls and I have to switch to vertex pointers and glDrawElements/glDrawArrays calls.
Also, GLSL shader syntax changes between GL versions, so I have to create shaders according to the rules of the chosen version.
I want to take a fourcc format as input and output YCbCr 4:2:0, DXT, S3TC or MJPEG compressed frame textures.
I.e., use various avcodec/v4l/raw video data as input and produce OpenGL compressed 2D or 3D textures as output.
This can be seen as a pipe that decompresses/recompresses a video stream into one OpenGL texture unit "on the fly".
That way we can combine texture units on the GPU and apply a lot of special effects on the OpenGL window output from multiple input video streams/files.
Direct fourcc-to-fourcc conversion (like YUY2 to NV12) using OpenGL is pointless. Better to use OpenCL for that.
Conversion from fourcc to RGB/RGBA and back from RGB/RGBA to fourcc has more uses. For example, you may want to output 3D graphics to an external video renderer (like a Matrox or Blackmagic professional video card) using the YUY2 or YUYV pixel format.
At this moment, the API and GPU cannot handle MJPEG compression using shaders. OpenGL is a graphics rendering API, not an image compression API. But MJPEG might be possible using CUDA or OpenCL. OpenGL and OpenCL/CUDA have interoperability features (to share textures and buffers between OpenGL and CUDA/OpenCL).
But I already have a really fast YUV 4:2:0 to RGBA 4:4:4:4 conversion with my fragment shader, and I find it very short and trivial to use.
So making the inverse, converting an RGB picture to a YUV picture, doesn't seem too hard a task to me …
On the other hand, a library that does the conversion does seem to me a very good thing, and as generic as possible (because all successive optimisations inside the library are directly shared by all programs that use it).
I will test tomorrow whether OpenCL can be used on the EeePC platform
(CUDA certainly doesn't work with an Intel video chipset, but perhaps I'm mistaken in thinking that).
On another front, I am also looking at something that could do, in real time and on the EeePC platform, compression/decompression of something like DXT1 but using 8-bit intensity (on the Y plane) and chroma (on the Cb and Cr planes) instead of two packed 16-bit RGB colors (and note that the planar YUV 4:2:0 format already halves the size of the packed RGB24 4:4:4 format)
=> the compression factor is about 800%, and the limitation on the number of colors per block that occurs in DXT1 is greatly reduced with a planar YCbCr format … with 64 YUV "intensities" per block (the Y, Cb and Cr planes are planar, whereas the fragment shader's RGBA output is packed)
==> and presto, I can already see video Groups Of Pictures stored directly in a sort of "temporally and spatially compressed texture format" that is smaller than a single picture in RGB24 format
==> I propose a new GL_COMPRESSED_YUV420_GOP_YLP texture format token for that
Many thanks for your help, Yooyo
PS: I don't like the 3.x versions because I think they create more incompatibilities in the end than anything else …
(and I'm afraid to see that OpenGL has begun to become something like the DirectX/Direct3D interface, where the major part of the code handles the various implementations rather than the algorithm …)
PS2: and I don't understand why glBegin/glColor, glTexCoord, glNormal, glVertex/glEnd can't trivially be emulated in the hardware driver via vertex arrays in the latest OpenGL versions
(because it's very easy to do, e.g. via something like #define glVertex3f(x,y,z) vertexarray_tab[vertexarray_size++]={x,y,z} and so on)
=> it's like saying we no longer have the right to walk because we can now use a car or a plane
Listen to your words: RENDERING. That implies making textures visible. Support for color models is core stuff. This isn't about high-level API features; texture format support is simply part of OpenGL as a rendering API.
Actually it would be nice if you could go even lower: being able to define a texture format and its conversions to and from RGB. Wasn't OpenGL about providing the building blocks, thus enabling the maximum number of possibilities? I thought it was.
Yes Gedolo, building-block support at a lower level (i.e. at the texture format level) is exactly what I have in mind.
Block-based storage already seems to be used internally by the hardware (JPEG/MPEG/DXT pictures are already stored/used block by block), so I think this type of texture format could "easily" be exposed in the OpenGL API if the block structure were limited to 4x4, 8x8 or 16x16 block sizes, for example.
On the other hand, the same task could certainly be done in a shader with texture support for 64, 128 or more bits per pixel …
(for example, DXT compression/decompression uses only 64 or 128 bits to handle a 4x4 block of texels)
The YUV (or YCbCr) <-> RGB conversion isn't really very difficult to handle; I only find that it spends some precious lines in fragment shaders that are already limited in size
(about 50% of my fragment shader is there "only" to handle the YCbCr to RGB conversion => with a hypothetical GL_YCbCr420 texture format, this shader could shrink to only 5 or 6 lines … and most of those lines would only handle in/out parameters)