I just acquired a project to do visualization in WebGL. My experience is a few years of C++, OpenGL, and GLSL. I did some reading and have a basic idea of what's possible and what isn't.
So here is what the WebGL widget should be able to do. That picture is from an MRI viewer I wrote a couple of years ago.
So there's nothing too fancy: just some geometry, colored fiber tracts rendered as lines or tubes, and textured slices. Interaction will be decided once I know what's possible and what isn't.
So here are my questions.
I already know there are no 3D textures. That would mean about 160 + 200 + 160 2D textures for the three directional slice stacks. How would WebGL handle such a large number of textures?
Is there some sort of native transparency? I guess vertex sorting or multipass rendering is probably not an option?
How is picking handled? Do I have to keep my own scene graph and calculate intersections myself?
Even when downsampled, I guess the raw data for a scene like the one in the picture is somewhere between 20 and 50 MB (being a bit liberal with the numbers here). That's quite a big chunk that has to be transferred to the browser. OK, that's not a question.
That's it for now. I'm sure more questions will come the more I learn. Thanks in advance for any answers.
WebGL is just a wrapper for the underlying OpenGL, OpenGL-ES or Direct3D implementation - with the restriction that only features that are available in OpenGL-ES 2.0 are provided.
The amount of available texture memory is the same as for a desktop application running on the same machine - with the proviso that the browser itself uses OpenGL/Direct3D for compositing, so it's going to take a small amount of texture RAM. If your application ran within memory limits on some particular PC under Direct3D or OpenGL, then it should run just as well under WebGL.
Obviously if you expect this to run on a cellphone - then you’re going to have to be very careful about texture consumption.
Multipass techniques are certainly possible in WebGL - I've done multipass shadow rendering and also light blooms and the like. There is one notable restriction: there is no ability to render the depth buffer to a texture - so if you need depth in a texture (as I did when rendering shadows), you have to do the trick of rendering your geometry one extra time with a shader that packs the per-pixel Z into RGB data.
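To illustrate the Z-packing trick: a fragment shader would do this arithmetic per pixel, but here is the same idea as plain JavaScript (function names are my own, just a sketch assuming depth is normalized to [0, 1)):

```javascript
// Pack a normalized depth value in [0, 1) into three 8-bit channels,
// and unpack it again. This mirrors what the extra render pass does
// per pixel in GLSL; round-trip error is below 1/2^24.
function packDepthToRGB(depth) {
  const i = Math.floor(Math.min(depth, 1 - 1e-9) * 16777216); // 24-bit integer
  return [(i >> 16) & 255, (i >> 8) & 255, i & 255];
}

function unpackDepthFromRGB([r, g, b]) {
  return ((r << 16) | (g << 8) | b) / 16777216;
}
```

The shader version is the same idea with `floor`/`fract` on the depth value and the result written to `gl_FragColor`.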
Picking is not implemented in the API…you’ll have to do that in software somehow.
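One common software approach (not part of the API) is colour picking: render every object with a unique flat colour into an offscreen framebuffer, read back the pixel under the mouse with `gl.readPixels`, and map the colour to an object ID. A minimal sketch of the ID/colour encoding, with hypothetical function names:

```javascript
// Encode an object ID (1 .. 2^24-1) as an RGB triple for a picking pass.
// IDs start at 1 so the cleared (black) background decodes to 0 = "nothing".
function idToColor(id) {
  return [(id >> 16) & 255, (id >> 8) & 255, id & 255];
}

// Decode the RGB bytes read back with gl.readPixels into the object ID.
function colorToId([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}
```

The alternative is CPU-side ray/triangle intersection against your own scene graph, which is more work but gives you exact hit points.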
As for your data, the “20 to 50MB” would obviously have to be downloaded - but with a modern (say) 100Mbit/s broadband connection (that’s what I mostly get at home from Time/Warner cable - and that’s achievable on a “true” 4G network on a cellphone), that could take between 1.6 and 4.0 seconds…not likely to be a huge deal. But if your users are on dialup or on 3G cellphone - it’ll be a lot longer (on a 19.2kb modem, you’d be looking at something like 6 hours for 50MB - on a 3G cell, it’ll take 18 minutes).
That’s interesting. Is that code or an example/tutorial available?
Sadly, quicksort isn't the best algorithm for sorting translucent polygons by range, because you have last frame's sorted list available to you - and for "normal" eyepoint motion, the order of the polygons doesn't change much from one frame to the next. Ideally you want an algorithm with O(N) performance on already-sorted data. You'd also want a stable algorithm that doesn't keep swapping the order of two equal items, because that'll cause flickering in your image.
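Insertion sort is one algorithm with both properties: it's stable and runs in O(N) on already-sorted input, which fits the frame-to-frame coherence described above. A sketch, assuming each polygon carries a precomputed `depth` field (my naming):

```javascript
// Stable insertion sort, far-to-near, for back-to-front translucent
// rendering. O(N) when last frame's order still holds; the strict '<'
// comparison never swaps equal-depth polygons, so no flickering.
function sortByDepth(polys) {
  for (let i = 1; i < polys.length; i++) {
    const p = polys[i];
    let j = i - 1;
    while (j >= 0 && polys[j].depth < p.depth) {
      polys[j + 1] = polys[j]; // shift nearer polygons toward the end
      j--;
    }
    polys[j + 1] = p;
  }
  return polys;
}
```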
But do you need to sort at all? Don’t you have a-priori knowledge of how these things should be rendered given the camera position? Very often with this kind of situation, you only need the data to be sorted in one of four or eight orderings - which you could do offline or just once on startup. Then at runtime, you’d maybe only need to decide which octant the camera is in compared to the center of the skull and regurgitate the polygons in one of the 8 pre-sorted orders. If that’s a valid approach, it would be lightning fast.
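Selecting the right pre-sorted order then reduces to classifying the camera against the centre of the data set - something like this hypothetical helper:

```javascript
// Map the camera position relative to the data-set centre to one of
// eight octants (0..7), used as an index into the 8 pre-sorted polygon
// orderings. Bit 0 = +x side, bit 1 = +y side, bit 2 = +z side.
function octantIndex(camera, center) {
  return (camera.x >= center.x ? 1 : 0)
       | (camera.y >= center.y ? 2 : 0)
       | (camera.z >= center.z ? 4 : 0);
}
```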
As for the post-effects stuff…sorry, but I can’t give away any code.
That's an approach I'm certainly going to try out. But as you can see in my picture, the inner surface is randomly folded, so there are different overlaps from different angles even when staying in the same quadrant or octant.
But as I said, I'm still on my fact-finding mission, and if transparency is too much of a hassle I might be able to skip it.
…the awful ASCII-art images towards the bottom of the page explain why depth sorting doesn't work when polygons are in close proximity. It boils down to the fact that polygons are not points - whether you sort by nearest vertex, furthest vertex, centroid, or ANY other criterion, you'll still get polygons rendered in the wrong order.
Why are you doing that anyway? Wouldn’t a dense set of parallel planes stacked around all three axes be the simplest representation? Render whichever set is closest to being at right angles to the line of sight.
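Picking the stack is just finding the dominant component of the view direction. A sketch (the axis-to-stack naming is my assumption):

```javascript
// Given a view direction, pick the slice stack whose plane normal is
// most parallel to it: 0 = x-stack (sagittal), 1 = y-stack (coronal),
// 2 = z-stack (axial). The vector need not be normalized.
function dominantAxis(view) {
  const a = [Math.abs(view.x), Math.abs(view.y), Math.abs(view.z)];
  let best = 0;
  if (a[1] > a[best]) best = 1;
  if (a[2] > a[best]) best = 2;
  return best;
}
```

The planes within the chosen stack are then drawn back-to-front, which their fixed spacing makes trivial.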
If the polygons aren’t moving relative to each other, you could write an offline BSP tree generator. Rendering dense sets of polygons from a BSP tree is a lot faster than sorting - and it avoids the problems I described in my FAQ: