Cameras

I’ve spent two weeks developing a one-page piece of code that should’ve only taken me (if I had my way) a couple of lines to implement. So here’s my suggestion: camera (i.e., webcam) functionality.

A simple, quick function for enabling streams from a camera, for the purpose of taking snapshots (single frames) and loading them into a texture. Input would be the pixel width and height of the desired texture, as well as an enumerated variable to indicate what type of pixel filtering/smoothing is preferred. Return would be the ID of the texture.

To extend it: after the first snapshot has been stored in the texture, include a secondary function that, when called, updates the texture with a new frame from the camera. Input would be the texture ID, with no output.
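The two functions described above might look something like the sketch below. All names here (cam_texture_create, cam_texture_update, cam_filter_t) are hypothetical, and the bodies are stubs that only illustrate the calling convention, not real capture code; a real implementation would grab frames from a capture library and upload them with glTexImage2D / glTexSubImage2D:

```c
/* Hypothetical filter choices for the created texture. */
typedef enum {
    CAM_FILTER_NEAREST,  /* no smoothing */
    CAM_FILTER_LINEAR    /* bilinear smoothing */
} cam_filter_t;

typedef unsigned int cam_texture_id;  /* would be a GLuint in practice */

/* Open the camera, grab one frame, upload it into a new texture of the
 * requested size, and return the texture id.  Stubbed here: a real
 * implementation would call the capture library and glTexImage2D. */
cam_texture_id cam_texture_create(int width, int height, cam_filter_t filter)
{
    static cam_texture_id next_id = 1;
    (void)width; (void)height; (void)filter;
    return next_id++;
}

/* Replace the texture's contents with a fresh frame from the camera.
 * A real implementation would use glTexSubImage2D so the driver can
 * reuse the existing texture storage instead of reallocating it. */
void cam_texture_update(cam_texture_id tex)
{
    (void)tex;
}
```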

The reason I’m asking is that the code I’ve created requires several calls to OpenCV functions which, while perfectly fine for what they were designed to do, slow the framerate of any subsequent OpenGL rendering. Implementing this idea so that it is optimized for video cards, while cutting out OpenCV’s unnecessary bells and whistles, might go a long way toward improving camera functionality.

If this idea doesn’t sound feasible, then the very least I would ask for is a wrapper or function to turn an object of type ‘IplImage *’ into a texture, without having to mess around with the minor details.

theUnknowing
My opinions are free, and worth every penny.

This is an application with a very narrow usage. If you want a function that loads an “IplImage” as a texture, you can write one yourself. It isn’t the responsibility of OpenGL to support webcams, digital cameras, streaming video, or anything of that nature. OpenGL is about rendering 3D (and some 2D) graphics.

In the past, this has been implemented using window-system-specific extensions. SGI had SGIX_dm_buffer, for example. The capture part isn’t terribly interesting from an OpenGL perspective.

However, streaming compressed video (from any source) into textures is interesting. Right now most video hardware accelerates decompression of MPEG video to one degree or another. It makes sense to expose some way to stream a DVD into a texture, for example. I think it will still be a while before that happens, though.

[This message has been edited by idr (edited 02-05-2004).]

OpenML?

It’s funny, today I asked someone if I should bother to implement video textures, and we decided it isn’t useful enough to really do it. Applications would be rendering in-game screens, cinema, video billboards, etc. Extending that to webcams might be nice for multiplayer games, though I admit it’s still kind of rare to require streamed textures. The above-mentioned hardware support for MPEG would be a nice thing to save a little bandwidth. But then, I’d agree that OpenGL has nothing to do with that part, so being able to quickly update the texture should be good enough, and everything else would be done by other libs or OpenML.

“However, streaming compressed video (from any source) into textures is interesting. Right now most video hardware accelerates decompression of MPEG video to one degree or another. It makes sense to expose some way to stream a DVD into a texture, for example. I think it will still be a while before that happens, though.”

I need that! I have made a player app for TV broadcast, and my biggest problems are texture upload speed and field control of the TV out. On a P4 2.6 GHz with an FX5900 on AGP 8x, I get 522 MB/sec, which is only about AGP 4x speed. It would also be nice to get better control of vsync, like a glGetTimeToSwap(), or some wgl/glx functions that can use OS synchronization objects like events or critical sections.

It would be nice to use the hardware-accelerated MPEG2 decoder. For example, add a new compressed texture source (MPEG2 I, B, and P frames) and upload each compressed frame using glTexSubImage2D.