Thanks, Dark Photon
I have now seen the specs for VdpVideoSurfaceGetBitsYCbCr, VdpVideoSurfacePutBitsYCbCr, VdpOutputSurfacePutBitsYCbCr and other such functions, and this seems to be about what I want.
Where can I find a complete but simple and functional sample/tutorial that uses them in C/C++?
The pseudo-code looks good, but I don't really know how to compile it with gcc or g++.
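From the spec alone, here is my (untested, possibly wrong) understanding of how a readback with VdpVideoSurfaceGetBitsYCbCr would look in plain C, including the gcc line to build it. The surface here is freshly created and therefore empty, since the decoding part is exactly what I'm missing a sample for:

```c
/* Untested sketch pieced together from the VDPAU spec: read back the
 * pixels of a video surface in NV12 layout.
 * Build with: gcc vdpau_getbits.c -o vdpau_getbits -lvdpau -lX11 */
#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <vdpau/vdpau.h>
#include <vdpau/vdpau_x11.h>

int main(void)
{
    const uint32_t width = 352, height = 288;   /* CIF */

    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) { fprintf(stderr, "no X display\n"); return 1; }

    VdpDevice device;
    VdpGetProcAddress *get_proc;
    if (vdp_device_create_x11(dpy, DefaultScreen(dpy), &device, &get_proc)
            != VDP_STATUS_OK)
        return 1;

    /* Every VDPAU entry point is fetched through VdpGetProcAddress. */
    VdpVideoSurfaceCreate *surf_create;
    VdpVideoSurfaceGetBitsYCbCr *surf_getbits;
    get_proc(device, VDP_FUNC_ID_VIDEO_SURFACE_CREATE,
             (void **)&surf_create);
    get_proc(device, VDP_FUNC_ID_VIDEO_SURFACE_GET_BITS_Y_CB_CR,
             (void **)&surf_getbits);

    VdpVideoSurface surface;
    surf_create(device, VDP_CHROMA_TYPE_420, width, height, &surface);

    /* NV12: one full-resolution Y plane + one half-height CbCr plane. */
    uint8_t *y_plane  = malloc(width * height);
    uint8_t *uv_plane = malloc(width * (height / 2));
    void *planes[2]     = { y_plane, uv_plane };
    uint32_t pitches[2] = { width, width };

    VdpStatus st = surf_getbits(surface, VDP_YCBCR_FORMAT_NV12,
                                planes, pitches);
    printf("GetBitsYCbCr: %s\n", st == VDP_STATUS_OK ? "OK" : "failed");

    free(y_plane);
    free(uv_plane);
    return 0;
}
```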
=> I want to read a video file frame by frame (the file format can be .avi, .mpg, .mov or /dev/video, for example) and output each image, in a "compressed but standardised and user-friendly internal format", into an in-memory queue (i.e. a place where I can easily and quickly decompress one picture, modify it and rewrite it in a compressed format, all "on the fly").
One thread fills a frame queue as it reads a video file, and multiple other threads can read from this queue (and/or mix several input queues together and output the mix into another picture queue, or display it directly on numerous and various 3D OpenGL shapes that are video-texture mapped).
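To make the idea concrete, this is roughly the queue I have in mind, as a pthread sketch (all the names here, frame_t, frame_queue and so on, are my own, not from any library; note also that each pushed frame goes to exactly one consumer, so a true broadcast to several readers would need per-reader positions):

```c
/* Hypothetical bounded producer/consumer frame queue.
 * Build with: gcc -c frame_queue.c -pthread */
#include <pthread.h>
#include <stdint.h>

#define QUEUE_CAPACITY 32

typedef struct {
    uint8_t *data;              /* compressed picture bytes    */
    uint32_t size;              /* size of the compressed data */
    uint32_t width, height;
} frame_t;

typedef struct {
    frame_t frames[QUEUE_CAPACITY];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty, not_full;
} frame_queue;

void queue_init(frame_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    pthread_cond_init(&q->not_full, NULL);
}

/* Called by the single decoder thread. Blocks when the queue is full,
 * which throttles decoding instead of dropping frames. */
void queue_push(frame_queue *q, frame_t f)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == QUEUE_CAPACITY)
        pthread_cond_wait(&q->not_full, &q->lock);
    q->frames[q->tail] = f;
    q->tail = (q->tail + 1) % QUEUE_CAPACITY;
    q->count++;
    pthread_cond_signal(&q->not_empty);
    pthread_mutex_unlock(&q->lock);
}

/* Called by any number of consumer threads (mixers, GL uploaders, ...). */
frame_t queue_pop(frame_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    frame_t f = q->frames[q->head];
    q->head = (q->head + 1) % QUEUE_CAPACITY;
    q->count--;
    pthread_cond_signal(&q->not_full);
    pthread_mutex_unlock(&q->lock);
    return f;
}
```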
For the moment this is only for SD resolutions (QCIF, CIF and 4CIF) on 1 to 32 bpp surfaces (from B&W to RGBA8, with the YCbCr formats in between), but I plan to add support for 9CIF/16CIF or HD resolutions such as 1920x1080 and above, in multiple views, and in float or double formats too.
I want to display/stream, "not too slowly", something like four or five audio/video streams/files (or many more, like dozens, if possible) on a little netbook such as an EeePC, an iPhone or a PDA…
At the moment I can only handle two or three small video streams on my EeePC, and with a lot of difficulty (I have to deliberately drop some frames to get something that works) and CPU/RAM consumption that is really far too high
(and my PDA doesn't seem to like it when I test multi-stream video display on it; it would perhaps/certainly work on the iPhone, but I haven't found the time to work on that implementation)
But on the other hand, I can already handle more than a dozen small video streams in parallel on various Core Duo platforms (such as recent PCs, an iMac or a Mac mini) with V4L(2) and/or libavcodec, so I find that it's not that bad
(on the iMac platform, for example, I can already fill the HD screen with a lot of SD avi/mpeg/raw streams and resize/zoom/scroll/rotate/mix/… each video stream's display independently in real time… but I don't have /dev/video support for the webcam on MacOS, because this seems to be a "Linux only" feature)
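For what it's worth, the skeleton of a frame-by-frame libavcodec decode loop looks roughly like this (a sketch only, using the newer send/receive API; the exact calls depend on the FFmpeg version installed, and the frame-queue push is just a comment):

```c
/* Sketch: decode a video file frame by frame with libavformat/libavcodec.
 * Build with: gcc decode.c -o decode -lavformat -lavcodec -lavutil */
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(int argc, char **argv)
{
    if (argc < 2) return 1;

    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0) return 1;

    /* Pick the first/best video stream and open a decoder for it. */
    int vstream = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO,
                                      -1, -1, NULL, 0);
    if (vstream < 0) return 1;
    const AVCodec *codec =
        avcodec_find_decoder(fmt->streams[vstream]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(codec);
    avcodec_parameters_to_context(ctx, fmt->streams[vstream]->codecpar);
    if (avcodec_open2(ctx, codec, NULL) < 0) return 1;

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    long n = 0;

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vstream &&
            avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0) {
                /* frame->data / frame->linesize hold the planar YCbCr
                 * picture; this is where it would go into the queue. */
                n++;
            }
        }
        av_packet_unref(pkt);
    }
    printf("decoded %ld frames from %s\n", n, argv[1]);

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}
```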
And I dream of having this work "very well and very fast with HD content" on a very small computer farm (with two or three Core Duo machines, for example), from the client/server and network point of view too.