It’s been a while. I’ve been a busy bee and have knocked out an application prototype, and it works great (even on relatively modest hardware), to an extent. The app already handles panning (in time and space), zooming, scaling, transforming, blending, and more, as long as you can fit everything in RAM.
I’m now revising it to handle all of the new features I’ve been asked to add.
Currently, I can load a bunch of data channels and use alpha blending to composite them together. I know this is not the right way to do it, so I’m designing a fragment shader to do proper blending of the data channels.
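To make the difference concrete, here is a minimal NumPy sketch of the kind of weighted blend the shader would compute: every channel contributes according to an explicit weight, rather than earlier channels being progressively attenuated by later alpha passes. The function name and API are illustrative only, not the app’s real code.

```python
import numpy as np

def blend_channels(channels, weights):
    """Blend N grayscale channels with explicit per-channel weights.

    `channels` is a list of 2-D float arrays in [0, 1]; `weights` is a
    list of non-negative floats. Weights are normalized so the result
    stays in range, and the blend is order-independent.
    """
    total = sum(weights)
    acc = np.zeros_like(channels[0], dtype=np.float64)
    for img, w in zip(channels, weights):
        acc += img * (w / total)
    return acc

# Two constant channels, equal weights: result is their average.
a = np.full((4, 4), 0.2)
b = np.full((4, 4), 0.8)
out = blend_channels([a, b], [1.0, 1.0])
```

The same arithmetic maps directly onto a fragment shader: sample each channel texture, multiply by a weight uniform, and sum.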
Also, since nobody currently makes an affordable machine with 4 to 10 terabytes of RAM, I am building a system for streaming the data in; at the volumes we are talking about, you very quickly overwhelm any computer’s RAM.
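The core of the streaming idea can be sketched as a bounded producer/consumer pipeline: a reader thread prefetches frames into a fixed-depth queue so I/O overlaps with display, and the queue bound is what caps RAM use. `load_frame` is a hypothetical stand-in for the real disk/network read; nothing here is the app’s actual API.

```python
import threading
import queue

def load_frame(index):
    # Stand-in for reading one subsampled image from disk or network.
    return bytes(16)  # placeholder payload

def reader(q, n_frames):
    """Producer: prefetch frames; blocks when the queue is full."""
    for i in range(n_frames):
        q.put((i, load_frame(i)))
    q.put(None)  # sentinel: end of stream

def play(n_frames, depth=8):
    """Consumer loop: at most `depth` frames are ever resident in RAM."""
    q = queue.Queue(maxsize=depth)
    t = threading.Thread(target=reader, args=(q, n_frames))
    t.start()
    shown = 0
    while True:
        item = q.get()
        if item is None:
            break
        shown += 1  # here the real app would upload the frame and display it
    t.join()
    return shown
```

The queue depth becomes the single knob that trades prefetch latency tolerance against memory footprint.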
I keep mentioning to various computer manufacturers how nice it would be to have a computer with 10 TB of RAM, mostly, but not wholly, in jest, and I get lots of laughs. Seriously, it would be awesome to have that, though.
Daily data volume is going to be about 4+ TB, every day the Sun shines in space…
A bit more detail:
The application is to play time series image data (4Kx4K: 4096x4096) as movies at 20 Hz or faster. These images are the result of blending one or more channels (up to 8) of 16-megapixel, 16-bit grayscale images together.
16 megapixels @ 16 bpp per channel --> 32 MB/image/channel
16 Mpixel @ 30 Hz --> 960 MB/s/channel
16 Mpixel @ 20 Hz --> 640 MB/s/channel
And that is PER channel, and we have up to 10 channels…
The data rate is unbelievably high, even for just one channel, and we will usually be visualizing at least 3 or 4 channels simultaneously (2.88 - 3.84 GB/s at 30 Hz), so here’s how I plan to cheat:
- have a set of spatially subsampled 1Kx1K @ 8 bpp images that can be streamed much more feasibly:
1024x1024 @ 8 bpp @ 30 Hz --> 30 MB/s/channel
1024x1024 @ 8 bpp @ 20 Hz --> 20 MB/s/channel
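The arithmetic above can be checked with a small helper that computes the uncompressed per-channel rate from image size, bit depth, and frame rate (using MB = 1024×1024 bytes, as in the figures above):

```python
def rate_mb_per_s(width, height, bits_per_pixel, hz):
    """Uncompressed data rate in MB/s for one channel."""
    bytes_per_image = width * height * bits_per_pixel // 8
    return bytes_per_image * hz / (1024 * 1024)

full_30 = rate_mb_per_s(4096, 4096, 16, 30)  # full-res, 30 Hz
full_20 = rate_mb_per_s(4096, 4096, 16, 20)  # full-res, 20 Hz
sub_30 = rate_mb_per_s(1024, 1024, 8, 30)    # subsampled proxy, 30 Hz
sub_20 = rate_mb_per_s(1024, 1024, 8, 20)    # subsampled proxy, 20 Hz
```

This reproduces the 960 and 640 MB/s full-resolution figures and the 30 and 20 MB/s proxy figures: the 4x spatial subsampling in each dimension plus the 16-to-8-bit reduction cuts the rate by a factor of 32.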
When the user pauses the movie, the application will fetch the 4Kx4K images for each channel and display those instead of the 1Kx1K.
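The pause behavior amounts to a resolution switch keyed on playback state: while playing, frames come from the 1K/8-bit proxy stream; on pause, the full 4K/16-bit frame is fetched for the current time step in each channel. A minimal sketch, with hypothetical `fetch_*` callables standing in for the real loaders:

```python
def frame_for_display(t, channels, paused, fetch_proxy, fetch_full):
    """Return the frames to show at time step `t` for each channel.

    Paused: fetch full-resolution 4096x4096, 16-bit frames.
    Playing: fetch streamed 1024x1024, 8-bit proxy frames.
    """
    if paused:
        return [fetch_full(ch, t) for ch in channels]
    return [fetch_proxy(ch, t) for ch in channels]
```

Keeping the switch in one place means the blending and colorization path downstream never needs to know which tier it is looking at.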
For 4Kx4K display, I am thinking about using some Toshiba 8-megapixel LCD displays, if they can ever get them to work in OS X and Linux.
We are considering using the CGLX cluster rendering toolkit.
The fragment shader that I am working on will be used for performing various arithmetic blending operations between data channels and for colorization of the final image.
I also want to support colormaps via the fragment shader. I.e., upload a colormap as a texture and, based on the blended value at a fragment, index into the colormap texture to look up the final fragment color.
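The lookup itself is just 1-D indexing, which is easy to prototype on the CPU before moving it into the shader. Here is a NumPy sketch that mimics sampling a colormap texture with a blended scalar value; the names are illustrative, not the app’s real code:

```python
import numpy as np

def apply_colormap(values, colormap):
    """Map blended scalar values in [0, 1] to RGB via a lookup table.

    `colormap` is an (N, 3) array of RGB rows, playing the role of the
    colormap texture; `values` is an array of blended fragment values.
    """
    n = colormap.shape[0]
    idx = np.clip((values * (n - 1)).astype(int), 0, n - 1)
    return colormap[idx]

# Tiny two-entry black-to-white map:
cmap = np.array([[0, 0, 0], [255, 255, 255]])
rgb = apply_colormap(np.array([0.0, 1.0]), cmap)
```

In the shader, the equivalent is sampling a 1-D (or Nx1 2-D) colormap texture with the blended value as the texture coordinate, which also gets you hardware interpolation between colormap entries for free.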