i’m currently working on a system that uses video stream input to extract geometry, and then texture that geometry with the video frames. the trouble i’m having is that in order to do this, you must generate the textures on the fly. when i call glTexImage2D to generate a texture from my video camera, it takes roughly 1/5 of a second to create the texture (a 768x480 image being placed in a 1024x512 texture – not rescaled, just placed in the lower left corner). this is just much too long, as it means that the max frame rate the app could sustain is 5fps, and that’s if the rest of the program did nothing (which it obviously doesn’t!). does anybody have any ways in which i could speed this process up – taking a stream from a video camera and creating textures in real time? any thoughts would be greatly appreciated!
Use glTexImage2D the first time you generate your texture and after that, just use glTexSubImage2D to update your texture with the new frame’s data.
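A minimal sketch of that pattern (function names and the 1024x512 / 768x480 sizes are taken from the question above; GL_BGR as the external format is an assumption, and it needs OpenGL 1.2 or EXT_bgra):

```c
#include <GL/gl.h>

static GLuint video_tex;

/* One-time setup: size the 1024x512 texture with a NULL data pointer,
 * so storage is allocated but no pixels are uploaded yet. */
void create_video_texture(void)
{
    glGenTextures(1, &video_tex);
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, 1024, 512, 0,
                 GL_BGR, GL_UNSIGNED_BYTE, NULL);
}

/* Per frame: overwrite only the 768x480 region the camera fills,
 * in the lower-left corner. No reallocation takes place. */
void update_video_texture(const void *frame_pixels)
{
    glBindTexture(GL_TEXTURE_2D, video_tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 768, 480,
                    GL_BGR, GL_UNSIGNED_BYTE, frame_pixels);
}
```

The point of the split is that glTexImage2D may reallocate and revalidate the texture object every call, while glTexSubImage2D only copies pixels into storage that already exists.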
Use GL_NV_texture_rectangle if you’re on NVidia hardware on Windows or Linux, or GL_EXT_texture_rectangle on Mac OS X. (GL_EXT_texture_rectangle may be faster on ATI as well, don’t know)
Use GL_NV_pixel_data_range if you’re on NVidia hardware on Windows or Linux.
Use GL_APPLE_client_storage if you’re using Mac OS X or one of the Mesa drivers that supports it.
Use GL_APPLE_texture_range if you’re using Mac OS X 10.2 or better.
As the previous poster said, use glTexSubImage2D rather than glTexImage2D.
Make sure your image buffer and rows are sensibly aligned for your architecture (32-byte aligned is probably good enough)
I agree that you should glTexImage2D with a NULL pointer to size your texture (1024x512, for example), and then use glTexSubImage2D to actually upload the data.
However, I think your problem may be this: Make sure no format conversion is taking place!
If you have data in B,G,R,A order in memory (which is usually optimal on x86), then you should make sure to upload it with a GL_RGBA8 (or GL_RGB) internal format and a GL_BGRA, GL_UNSIGNED_BYTE data format.
If you don’t have alpha, then make sure the data is still padded with a byte where the alpha would go, so each pixel is still 32 bits.
If your data doesn’t follow these rules, the driver will very likely have to re-pack your image into the format the card needs, which will take a while per image.
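As a sketch, the no-conversion upload for 32-bit B,G,R,A frames might look like this (the GL_BGRA token needs OpenGL 1.2 or the EXT_bgra extension; the bound texture and the `frame_pixels` pointer are assumed to exist already):

```c
/* Rows of 32-bit pixels are naturally 4-byte aligned; tell GL so. */
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);

/* A GL_RGBA8 internal format (chosen at glTexImage2D time) plus a
 * GL_BGRA / GL_UNSIGNED_BYTE upload matches the B,G,R,A memory layout,
 * so the driver can copy the data straight through without repacking. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 768, 480,
                GL_BGRA, GL_UNSIGNED_BYTE, frame_pixels);
```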
Also make sure you have working AGP drivers for your chip set (and an AGP graphics card, of course)!
AGP 4x can do 1 GB/s, which is about the amount of bandwidth you can get out of PC133 memory, if your CPU doesn’t also use it at the same time. PCI, meanwhile, can only do a measly 133 MB/s.
…at interactive rates
finally; someone who doesn’t use the clumsy and wrong term “real-time”.
thanks for all the advice! in switching from glTexImage2D each frame to glTexSubImage2D, the time to update each video texture decreased from 0.2 seconds to 0.1 seconds. certainly a huge improvement, but i’d like to reduce it further. right now, the pixel data being passed to glTexSubImage2D is in GL_BGR format (what the camera provides). you mentioned converting it to a 32-bit form (GL_BGRA), but i’m not sure what the fastest way to create a new buffer and pad the data with alpha values would be. is there a quick routine for performing this operation? also, just so you know, this is being done under linux (slackware 9) and the vid card is an nv15 geforce2 ultra (bladerunner). being new to linux, i’m not sure how to determine the agp bus speed on the system.
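One straightforward padding routine looks like this (a sketch with made-up names; a tuned version might write whole 32-bit words per pixel or use MMX, but even this naive loop should be far cheaper than the driver repacking on upload):

```c
#include <stddef.h>
#include <stdint.h>

/* Expand tightly packed 24-bit B,G,R pixels into 32-bit B,G,R,A,
 * writing 0xFF into the otherwise-unused alpha byte. src holds
 * 3*npixels bytes, dst must have room for 4*npixels bytes. */
void bgr_to_bgra(const uint8_t *src, uint8_t *dst, size_t npixels)
{
    for (size_t i = 0; i < npixels; i++) {
        dst[4*i + 0] = src[3*i + 0]; /* B */
        dst[4*i + 1] = src[3*i + 1]; /* G */
        dst[4*i + 2] = src[3*i + 2]; /* R */
        dst[4*i + 3] = 0xFF;         /* padding alpha */
    }
}
```

The destination buffer can be allocated once at startup (4 * 768 * 480 bytes) and reused every frame, so the only per-frame cost is the copy itself.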
someone who doesn’t use the clumsy and wrong term “real-time”.
There is, technically, a difference between “interactive” framerates and “real-time”. A GUI running at 10fps is still “interactive”, because the display refreshes fast enough to allow the user to reasonably interface with the UI.
A “real-time” situation is one where large changes can happen in a small amount of time. In, say, Quake, you can be killed by an enemy who wasn’t on-screen 1/10th of a second ago. You could have avoided that attack had you seen it in that 1/10th of a second. Ergo, 10fps is not fast enough to convey the information you need to play the game. You can still interact with it, but a great deal of information is lost.
In general, I consider “interactive” framerates to be anything above about 7-10fps, while “real-time” would require 30 or better.
The term real-time is not clumsy or wrong, but it is often misused. It means something quite specific w.r.t. high update rates, low latency and deterministic response. Exactly what is acceptable within the bounds of those parameters depends on your application and circumstances; for some non-graphics applications, updates are measured in kHz and responses in microseconds. For graphics applications opinions differ, but if you’re rendering at 60 Hz with latency < 50ms and, equally important, you have determinism (meaning your frame rate isn’t up and down like a yo-yo, but it is almost guaranteed that you have a new frame every 16.7ms), then most people would agree you’re doing real-time graphics.
Others have a more relaxed approach to these requirements particularly w.r.t. determinism, and some call this ‘soft real-time’.
‘Interactive’ is probably what most computer games are, especially with the emphasis on best average fps performance rather than the most consistent gaming experience, but real-time algorithms are applied in implementing these systems.
Not wrong and not clumsy, real-time helps describe something that interactive does not.
I’ve worked on real-time texture paging, for example, where texture loads and subloads are profiled on the target platform and spread across several frames. Run-time graphics performance can even be measured to determine how much downloading can be done while still making the target frame rate, and strict caps are placed on the total amount of texture downloading each frame, estimated in fractions of milliseconds. That of course is just the immediate graphics rendering side of things; there’s overall data processing and file system management before any of this happens.
Yep, if you’re sloppily throwing textures at your graphics card then you’re probably interactive, but if you’ve put the effort into implementing a decent system, you might be rather proud of the work you’ve done to help make your texture paging suitable for real-time use.
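To illustrate the idea, here is a toy sketch of the per-frame cap described above (not dorbie’s actual system; the names and byte-based budget are assumptions, where a profiled budget in bytes stands in for the measured download cost):

```c
#include <stddef.h>

/* Issue queued texture subloads, in order, until the profiled
 * per-frame budget is spent. pending[] holds the size in bytes of
 * each queued subload; the return value is how many were issued
 * this frame (each issued entry is where you would actually call
 * glTexSubImage2D). The rest wait for the next frame. */
size_t drain_frame_budget(const size_t *pending, size_t count,
                          size_t budget_bytes)
{
    size_t issued = 0;
    while (issued < count && pending[issued] <= budget_bytes) {
        budget_bytes -= pending[issued];
        issued++;
    }
    return issued;
}
```

With a budget of 350 bytes and queued subloads of 100, 200 and 300 bytes, the first two are issued and the 300-byte subload is deferred to the next frame, keeping the frame time bounded.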
The term real-time is not clumsy or wrong, but it’s often misused by people.
When I said clumsy and wrong, I meant “not used correctly”; ie. someone inadequately wielding a technical term.
Real-time means the computer is in a sense synchronised to a clock on the wall: ie. the computer is well aware of time passing in the real time frame. It doesn’t mean that the computer (as a combination of hardware and software systems) does something quickly enough so the user believes their control has an immediate impact on the system. That case is talking about interactive framerates.
Real-time systems (a category Windows doesn’t belong to) must, by definition, provide a bounded delay on response. To use a graphics analogy, since you’re talking about texture paging: if the application binds a texture under a real-time system, then there is a guaranteed upper limit on when that texture will be available, the implication being that if it isn’t, something seriously bad is going to happen. The length of that time frame is irrelevant to whether a system is real-time (it’s a matter of scale). Quake isn’t real time because it doesn’t run on a real-time o/s. 30 frames per second doesn’t say anything about the underlying real-time structure; it’s just a number.
Hard real-time belongs to systems like the electronic control of a train. If the s/w wants to raise the ‘apply brake’ signal, then there MUST be an upper bound on when the system can prove the brakes WILL BE APPLIED. This is factored into the train control s/w as a margin of error. For example, if the system can guarantee that a signal will propagate from the CPU to the wheel port despite any cache misses, page reloads, other interrupts that have to be serviced, etc., then it can work out how far the train will travel at its current speed in that window, and thus the upper limit on the distance at which it needs to start thinking about applying the brake if, for instance, a switch is incorrectly set.
Windows isn’t real time. The number of times it is easy to take control away from the system and never give it back is too numerous.
So, my point is: real-time is a term misused… or used clumsily to attribute a property to something that is not real-time.
links: QNX, a real, real-time o/s used to control manufacturing plants (and other things): http://www.qnx.com/
It is my experience, as an engineer, with real-time defined as John just described, that the term real-time has been redefined/misused by the cg community: it is no longer tied to signal propagation/delay times but instead to human visual perception. This has made it a very subjective (for good and bad) term compared to the original(?) objective definition. I have read somewhere that interactivity is considered 6fps and above. Real-time (in the cg context) could be just about anything, depending on who you ask.
Ah, so you do have a clue.
Yep I know about Windows vs a hard real-time systems, I used to make that argument myself, I gave up years ago. I used to fix tricky real-time issues with IRIX-React & complex multipipe Performer configurations (buy me a beer and I’ll tell you the one about the guy who was using unix pipes for ‘real-time’ IPC on a beautifully engineered ccNUMA system).
I think it’s possible to aim for soft real-time implementations on Windows. I’d agree Quake wasn’t real-time, but not merely because it runs on Windows. It’s fair to shoot for real-time behaviour in your software even if you are running on Windows, and it’s also fair to make some qualified claims of real-time for it; it’s certainly true that ‘interactive’ does not do some systems justice, even when they run on Windows.
real-time is not always misused by the cg community. In my experience, the use of real-time in cg started out in simulation, where real-time meant something. This is an entirely CORRECT use of the term. It was a guarantee you’d make your frame rates, with specific transport delays through your entire system. Requirements were often tight and any dropped frame meant failure; this was not a grey or flexible area. Frame rates were important, and graphics was often locked to a real-time simulation host with sync pulse generators, and data communication used solutions like reflective shared memory (SCRAMNet, for example) to reduce latency and increase throughput. Real-time has never had generally applicable numbers associated with it. It has a meaning, but my drawing at a guaranteed 60 Hz with fixed latency is just as real-time as some embedded controller updating at several kHz with a guaranteed response to events; both are real-time.
This term was initially misappropriated, IMHO, by people selling IG (image generator) systems on Windows-based PCs, who needed to make the claim of real-time for their systems but didn’t deliver. Phrases like ‘soft real-time’ came along to help draw a distinction. They still used clever algorithms for some stuff, but half the problem was the OS; the other half was that the systems were cheap, and less rigour was applied to measurement and acceptance testing in many cases. (gift horse, mouth, etc.)
In addition to all this there are algorithms and techniques used in implementing a real-time visual system which are generally applicable to other types of 3D graphics, I think it’s fair to refer to these as real-time techniques.
[This message has been edited by dorbie (edited 06-18-2003).]
With Linux improvements like better scheduling and the release of long-held locks, and even with decent software on a well-configured Windows system, there’s a lot you can do on these platforms. ‘Hard real-time’, no (debatable in the case of Linux; like I say, it depends on your requirements); ‘soft real-time’, yes, and more than good enough for almost any application formerly requiring real-time graphics.