I am currently writing a voxel landscape engine that can also do more advanced things like lighting and loading objects.
However, the problem I have is processing power. I already make use of SSE and 3DNow! and have tuned my code carefully, but it is still too slow.
My question is: how well do modern mid-range boards (e.g. the GeForce 6600) perform? I had a look at GLSL and it seems to be quite cool: vector calculations, maths, arrays, everything is there.
But can it also be used for things other than shading, like computing whole frames?
Where are the bottlenecks in doing so?
Thanks in advance, best regards, Clemens
You might want to take a look at the Sh metaprogramming language for modern programmable GPUs.
There are also other libraries available for free on the net.
Oh, and by the way, you really can do other stuff than rendering with GLSL. For example, I compute the irradiance on a surface with a pixel buffer, floating point textures, and some vertex and fragment shaders.
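To make the idea concrete, here is a minimal, hypothetical sketch of what such a GPGPU-style fragment shader can look like in era-appropriate GLSL. The data set lives in a floating point texture, each fragment of a screen-aligned quad processes one element, and the result lands in the float render target of the pixel buffer. The uniform name `inputData` and the computation itself are illustrative assumptions, not code from my irradiance project:

```glsl
// Hypothetical GPGPU-style fragment shader (GLSL 1.x era).
// Each fragment acts as one "array element"; input data is packed
// as RGBA floats in a texture, output goes to a float render target.
uniform sampler2D inputData;   // assumed name: the packed data set

void main()
{
    // Fetch one element of the data set at this fragment's position.
    vec4 v = texture2D(inputData, gl_TexCoord[0].st);

    // Example computation: store the normalized vector and its length.
    float len = length(v.xyz);
    gl_FragColor = vec4(v.xyz / max(len, 1e-6), len);
}
```

Drawing one full-screen quad then runs this "kernel" once per texel, and the result can be read back or fed into the next pass as a texture.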
Thanks a lot for your reply and thanks for the link!
However, it would be great if you could answer some related questions:
1.) The only thing that is not 100% clear to me is why I should use this special language; GLSlang seems to do exactly the same?
2.) Is there an efficient way to share big data sets (let's say big vectors) with GLSlang?
The only way to share big data I know of is using textures…
3.) Is there any way other than textures to transport data between the shaders and the user-space part (the C++ program)?
Thanks in advance, you helped me really a lot!
Textures are, by far, the highest-capacity and highest-performance route for transferring data to and from the fragment processor.
As BlackBox said, you do not need the Sh library to do your things, but it can give you a good idea of the things a GPU can do for you in terms of computing.
Textures really are the best and most efficient way to process data on a GPU. You could use Vertex Buffer Objects (VBOs) to store large arrays of floats directly in the GPU's memory, but the only way I know of for retrieving data from a GPU computation is through textures.
Also, if you want to implement algorithms which require 16 bits or 32 bits floating point precision, you will need a pixel buffer.
If you are interested in implementing algorithms on GPUs, you might want to take a look at stream mapping.
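As a rough illustration of the stream mapping idea: a fragment shader drawn over a screen-aligned quad behaves like a function mapped over every element of a stream. The sketch below sums two "streams" stored as float textures of equal size; the uniform names are assumptions for the example, not part of any particular library:

```glsl
// Stream mapping sketch: element-wise addition of two streams.
// Both inputs are float textures of identical dimensions; the quad
// covering the viewport makes the shader run once per element.
uniform sampler2D streamA;  // assumed name: first input stream
uniform sampler2D streamB;  // assumed name: second input stream

void main()
{
    vec4 a = texture2D(streamA, gl_TexCoord[0].st);
    vec4 b = texture2D(streamB, gl_TexCoord[0].st);

    // The element-wise result is read back via the render target texture.
    gl_FragColor = a + b;
}
```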
I hope it helps a little more.
Thanks a lot for all the useful information you guys provided!
I always thought textures were slow because OpenGL's reading/writing into textures is quite poor, but of course GPUs do nearly all of their work on textures, so those paths are optimized.
Although I don't yet know how I could also use vertex shaders for my calculations, I will play with pixel shaders a bit and see how fast they are.
Thanks a lot and good luck with your projects!