Video Input inside GL, is it possible?

I took a quick look at the recent topic about uberbuffers, render-to-texture and so on.
One thing struck me, and I remembered something I heard some time ago which was obviously a rumor, since it never got implemented.

It was about an extension to get video input into GL. I can hear everyone saying this is not the place for it, but after seeing how much hardware and drivers have evolved, I would like to know: is it really so difficult to do?

I mean, to me it looks like there could be a TexParameter flag which tells whether the texture is to be acquired from video input.
Something like automatic mipmap generation, after all.
Sure, there are some semantics to be fixed: render-to-texture is not valid, TexImage yields undefined results, and other details I can't figure out.
I can see there are issues. Buffering, for example. Synchronization. Colorspace conversion is another weird thing, but I don't think the extra computation is really a problem, and I think I saw an NVX extension to directly use a different texture internalformat. Sharing of the input port, which is also a very sensitive topic for security reasons.
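Just to show why I don't think the colorspace conversion is a big deal, here is a minimal sketch in C of full-range BT.601 YCbCr to RGB for one pixel. The exact coefficients and ranges depend on the video standard, so take the numbers as illustrative; the point is that it's only a handful of multiply-adds per pixel:

#include <stdint.h>

static uint8_t clamp_u8(float v)
{
    return (uint8_t)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

/* Full-range BT.601 YCbCr -> RGB for a single pixel; a video-range
   conversion adds a scale and offset but costs the same order of work. */
void ycbcr_to_rgb(uint8_t y, uint8_t cb, uint8_t cr,
                  uint8_t *r, uint8_t *g, uint8_t *b)
{
    float Y  = (float)y;
    float Cb = (float)cb - 128.0f;
    float Cr = (float)cr - 128.0f;

    *r = clamp_u8(Y + 1.402f * Cr);
    *g = clamp_u8(Y - 0.344136f * Cb - 0.714136f * Cr);
    *b = clamp_u8(Y + 1.772f * Cb);
}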

Could someone please explain to me what the reasons are for not doing this? Has it perhaps been implemented by a professional video card (some 3Dlabs cards have tons of strange extensions I've never really taken into consideration)?

I realize the lack of demand could be a reason not to implement it, but I would still like to get some comments. The thing has been floating around in my mind for quite a while now.

Thank you!

Doesn’t OpenML tackle this problem?

BTW, what is the implementation status of OpenML? When should we expect a real working driver for existing graphics and, say, video input hardware?

I think you have to be a bit more specific about what you want, unless you're just throwing ideas out there. Something similar to your suggestion has been done, with video hardware writing directly to texture memory; other requirements have led people to simply read video from disk, decode it in system memory, and send it to texture memory.
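For that second approach, the GL side is just a per-frame upload into an existing texture. A minimal sketch, where decode_next_frame() is a placeholder for whatever decoder or disk-reading code you use, and the 512x512 size is only an example (power-of-two for the sake of older hardware):

#include <GL/gl.h>

#define VID_W 512
#define VID_H 512

/* Placeholder: fills a system-memory RGB buffer with the next frame. */
extern void decode_next_frame(unsigned char *rgb);

static unsigned char frame[VID_W * VID_H * 3];

GLuint create_video_texture(void)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    /* Allocate storage once; later frames only respecify the pixels. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, VID_W, VID_H, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, NULL);
    return tex;
}

void upload_frame(GLuint tex)
{
    decode_next_frame(frame);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Cheaper than a full glTexImage2D every frame: keep the storage,
       replace only the pixel data. */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, VID_W, VID_H,
                    GL_RGB, GL_UNSIGNED_BYTE, frame);
}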

It really depends on your requirements. Yes, OpenML is designed to support this kind of thing; exactly how broad that support is and what features (and hardware) are supported, I have no idea. You can do a lot with video, and some of it needs very specialized hardware and APIs.

These are just ideas that have been in my mind for quite a while.
I don't want to get decoded video, just video from a port… since some cards have on-board ViVo and the output side works automatically, it seemed pretty strange to me that they didn't want to provide an easy way to use the input port. Looks like untapped functionality.

OpenML, as long as it is supported, could do the trick; I'm just asking. I know it's a crazy idea, but now we have GPGPUs… tell a 3dfx engineer who has just come out of hibernation that you would like to have ARB_fp functionality. He'll think you're crazy, at the very least.

What I'm saying is: if a video card has video input functionality on board, then immediately after the ADC the 'image' is uncompressed, possibly in some specific image format, but it should be possible to just copy it to texture memory without crossing the AGP bus, staying entirely on board.

All in all, this is just random blah, but my idea is something like the following (rough sketch after the list):
1- Create a texture as always.
2- Instead of calling TexImage, call, say, TexParameteri(TEXTURE_2D, TEXTURE_VIDEO_INPUT, TRUE). It looks just like automatic mipmap generation.
3- Render just as you would with a standard texture. The image gets updated automatically, possibly lagging behind by some frames - no sync, no fancy features, nothing.
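To make the idea concrete, here is how I imagine it would look from the application side. This is purely hypothetical: no such extension exists, and the TEXTURE_VIDEO_INPUT token and its value are invented for illustration; everything except that one TexParameteri call is plain existing GL:

#include <GL/gl.h>

/* Invented token - no real extension defines this enum. */
#define GL_TEXTURE_VIDEO_INPUT 0x8F00

GLuint make_video_input_texture(void)
{
    GLuint tex;

    /* 1 - Make a texture as always. */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* 2 - Instead of TexImage, flag the texture as fed by the on-board
       video input, the same way GL_GENERATE_MIPMAP flags automatic
       mipmap generation. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_VIDEO_INPUT, GL_TRUE);

    return tex;
}

void draw_video_quad(GLuint tex)
{
    /* 3 - Render as with any standard texture; the driver would update
       the image behind the application's back, possibly a few frames
       late. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}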

Some video formats are encoded and decoded in hardware. I guess every decent card with ViVo encodes/decodes MPEG-2 in hardware - they decode video even without ViVo.
In fact, they don't need to encode what they get from the video input at all. Considering this, I am not sure it really needs a whole external API. By the way, I know drivers are getting larger and larger, so they could add this. For that matter, they could also add a way to overheat the GPU so you can make coffee on it.
I mean, the input is on the card. Inputs are on the card. Why is it so difficult to move the data to a very nearby location, namely texture memory?

Let me stress again: this is random blah. A crazy idea I heard some time ago.