Say, “in theory”, that I have a large collection of time-series imagery: 16-bit FITS images, 16 megapixels each.
- >= 20 frames per second minimum framerate
- adjustable color mapping
Eventually, I might stack and separate several “layers” with alpha.
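For the “adjustable color mapping” requirement, one simple approach is a window/level lookup table that maps the 16-bit data range onto 8-bit display values. A minimal sketch, assuming a linear stretch; the helper names `make_lut` and `apply_lut` are mine, not from any library:

```python
# Hypothetical sketch: adjustable color mapping for 16-bit data via a
# precomputed lookup table (a linear "window/level" stretch). Adjusting
# the mapping just means rebuilding the 64K-entry table.

def make_lut(lo, hi):
    """Map raw 16-bit values to 8-bit display values, clipping to [lo, hi]."""
    span = max(hi - lo, 1)
    return [min(255, max(0, (v - lo) * 255 // span)) for v in range(65536)]

def apply_lut(pixels, lut):
    """Apply the table to a flat sequence of raw 16-bit pixel values."""
    return [lut[p] for p in pixels]
```

In a real player the table would more likely live on the GPU (e.g. as a 1D texture sampled in a fragment shader) so remapping costs nothing per frame.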
Anybody have any APIs/toolkits or sample apps to do this already?
I forgot to mention that this equates to a minimum data rate of 640 megabytes per second, and that not all of the frames will fit in memory (4 GB), so they may need to be fetched efficiently from disk.
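For reference, the arithmetic behind those figures (using decimal megabytes):

```python
# Sanity check of the data rate quoted above.
BITS_PER_PIXEL = 16
PIXELS_PER_FRAME = 16_000_000   # 16 megapixels
FPS = 20                        # minimum framerate
RAM_BYTES = 4 * 10**9           # 4 GB of main memory

bytes_per_frame = PIXELS_PER_FRAME * BITS_PER_PIXEL // 8   # 32 MB per frame
rate_mb_per_s = bytes_per_frame * FPS / 1e6                # 640 MB/s
frames_in_ram = RAM_BYTES // bytes_per_frame               # ~125 frames, ~6 s of playback
```

So 4 GB buys only a few seconds of footage at full rate, which is why the disk path matters.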
Jeepers, that kind of image movement is what high-end film production people like Pixar use, and they have some heavy hardware to do it with. 640 MB/s is going to need some RAID 0 hard disk setups to move the data around.
Ultra-640 (otherwise known as Fast-320) was promulgated as a standard (INCITS 367-2003 or SPI-5) in early 2003. Ultra-640 doubles the interface speed yet again, this time to 640 MB/s. Ultra-640 pushes the limits of LVD signaling; the speed limits cable lengths drastically, making it impractical for more than one or two devices. Because of this, most manufacturers have skipped over Ultra-640 and are developing for Serial Attached SCSI instead.
And that's the interface flat-out maxed, so you're going to need at least two of them. Expensive project you've got there.
Don't know if any of that was of any help, but this one is going to cost you £££, of that I am sure.
Disk space and RAID are of no concern.
We have LOTS of that.
Current project is expected to generate 2+ terabytes per day for 10 years…
I see. Hmm, I am not sure how OpenGL can really help, as what OpenGL does really quickly is all done inside the graphics card's own RAM, so that would be the limiting factor over just bit-blitting from normal RAM. Have you thought of asking groups such as observatories? They handle huge megapixel CCD images for things like finding comets. Also take a wander over to Silicon Graphics http://www.sgi.com/ as they have the kind of stuff for handling that.
SGI has had a lot of experience in this (being the original source for GL and then on to OpenGL), and we are using their hardware, I’m just hoping to find something as a starting point.
I already know OpenGL and Performer (been a while since I’ve worked in either), but I’m hoping to hit the ground running and want to avoid writing something from scratch if it is readily available.
SGI has lots of examples of dealing with large texture data that is dynamically paged in (satellite image data, …), but the constraints are different and their data rates tend to be lower.
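Independently of the rendering side, the read-ahead half of that kind of paging can be sketched as a simple producer/consumer loop: a background thread keeps a few frames buffered so the display loop never blocks on disk. All names here are mine, and frame loading is stubbed out; a real system would use asynchronous or direct I/O into pinned buffers rather than a Python thread:

```python
# Hedged sketch of double-buffered frame prefetch: a reader thread
# stays a few frames ahead of the consumer, bounded by `depth`.

import queue
import threading

def prefetch_frames(load_frame, frame_ids, depth=4):
    """Yield frames in order while a background thread reads ahead."""
    q = queue.Queue(maxsize=depth)

    def reader():
        for fid in frame_ids:
            q.put(load_frame(fid))   # blocks once `depth` frames are buffered
        q.put(None)                  # sentinel: no more frames

    threading.Thread(target=reader, daemon=True).start()
    while (frame := q.get()) is not None:
        yield frame
```

The `depth` parameter trades memory for tolerance of I/O jitter; at 32 MB per frame even a small read-ahead queue is hundreds of megabytes.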
Some of my cohorts are working on much higher data rates using parallel rendering farms. 200+ megapixels at 60 frames per second.
Could you give some more information about your requirements?
Getting this sort of playback and processing is not a trivial matter, especially on non-SGI kit.
I work for a company that sells systems and technology that I suspect might meet your requirements (realtime, multistream 4k playback, processing etc).
If you have a look at www.filmlight.ltd.uk and take a look at our BaseLight8 systems, you’ll be able to get an idea of the HW requirements for a system like this.
But, getting the HW together is only the first step on a VERY long road.
Sounds like a fun project though.
Oops, sorry, I just re-read your post above mine.
I see now that you are using SGI kit. Please ignore my last post. Thank you.