"Ray Casting" and Pixel Shading

Hi. I appreciate that there has been quite a lot of discussion about ‘pixel shaders’ and the related register combiners on this forum, but I have a more specific question to ask.

It’s more a case of “is this possible” and “what shall I look into in more detail to do this”.

Ok, I posted a different question a few weeks back in case this looks familiar. It looks a bit complicated, but it isn’t really…

For part of my final-year Computer Science project I need to simulate the firing of electrons (from electron guns situated around the screen) onto a spinning screen. I thought I could do this with something similar to ray casting, but the rays would have to come from multiple sources rather than from the viewpoint, and instead of querying the object colour, each one would return the angle of the screen relative to the electron source.

For example, if the screen was flat to the source (at 90 degrees) it would return 0, but if the screen was end-on to the source, it would return 90.

Each pixel within range of an electron gun (regardless of its actual position on the screen - this is the complicated bit) will be tested and coloured a varying shade of red depending on how high the angle is (assuming the electron sent out actually hits the screen). Each pixel/voxel will be tested against every electron gun firing at the same location, and an average made for that pixel. It will then be coloured accordingly.

Now, to do all of this I’ll need copious amounts of maths, but I’m thinking I’ll need to look into ray casting and pixel shading/register combiners. Do you think that what I’d like to do is possible? Do you think these methods are the best way of implementing it? If so, do you know of any really good “Idiot’s Guide”-type tutorials for them?

Thanks for your help.

Ok, I’ve changed my methods a little so it doesn’t (at the moment!) require any ‘ray casting’.

However, I still need to shade in some kind of object per pixel. I realise that it’s not really a question anymore :stuck_out_tongue:

But pixel shading is a definite - and I haven’t yet found anything understandable about how to do it.

You could project a texture from each gun onto a rotating geometry and simply add the results in the framebuffer or with additive multitexturing. It would be very simple to implement. The exact distribution and color of the texture would be up to you; it could match the electron distribution and even vary over time.

For a modulation (say, by the cosine of the incident angle) you would do a multipass add in the framebuffer and modulate the texture by the angle of incidence stored as the object color (or even use OpenGL diffuse lighting for the calculation, with a light source at or along the emitter’s vector for each pass).

Another approach could be similar but volume based; it really depends on what you’re trying to visualize.

[This message has been edited by dorbie (edited 01-05-2004).]

Er… I think I almost understand what you’re saying… however in practice I wouldn’t know how to begin coding it!

The more I think about how to do it, the more potential problems/decisions that come up. (and the more the implementation changes)

Ok, over the last few days I’ve decided that perhaps it would be best to create a flat plane the same dimensions as the screen (but non-spinning) to project the pixel colours onto. This would then always have to face the viewer, billboard-style. However, is it possible to fix its x and z ‘rotation’? (i.e. if you looked around the display it would always face you, but if you looked down on top of it, it wouldn’t turn; you’d just see the top of the plane. Trouble is, the top would then be incorrect…)

The pixel values would be pre-calculated, taking all beam sources into account, so in essence a texture is created (a slight variation on what you said in your first paragraph - with each pixel set fully to red, but with varying alpha levels for transparency), which is then applied to the plane. This texture will of course have to be generated per-pixel. True, this may not have anything to directly do with pixel shading anymore, but I think that would perhaps be the best method to use. The texture would be updated and reapplied whenever changes to the display design were made.

Modulation is unnecessary at the moment (possibly!), although the volume-based approach may be exactly what I need considering I’m simulating a volumetric display! I’ll have to speak to my project supervisor about this, but in the meantime, where can I find literature/tutorials on implementing firstly per-pixel texture creation (with pre-calculated values - maybe something to do with Render To Texture?), then the volume-based approach (whatever that may be)?

Thanks again for helping me, and I apologise for the profuse use of brackets!