I’ve written ELTs (electronic light tables) to exploit complex-pixel synthetic aperture radar data using C# GUIs and C++ servers/DLLs. I’d love to move into the WebGL world, but if I’m stuck with PNG, JPEG, and TIFF, how do I make this work? My images are 4–20 GB in size; I have to convert them into RSETs (image pyramids) of tiles and then upload those tiles with glTexImage2D through a tiling scheme, loading only what I need to cover the display surface. I’ve spent days googling for ways to connect a C++ server (running FFTW Fourier-transform code) via WebSockets (libwebsockets, WebSocket++, etc.) and haven’t found the way.
If I had to, I could dynamically write thousands of 512x512 tiles as individual JPEGs (after converting the floating-point complex pixels to 8-bit magnitude), but there has to be a better way. I’d like to generate my tiles dynamically and send them in binary form over a socket to WebGL. That means something has to translate TCP/IP to WebSockets.
Is there anybody out there doing image processing with WebGL? Any groups, discussions, or tips for how to make this work?