distributed computing project

I don’t want to give up, but I need someone’s help because I don’t know anything about OpenGL. I have never programmed it before!

So I’m really confused!!

If somebody with advanced knowledge can help me, please post here and/or send me an e-mail.

Thanks everybody !

It’s me again!

Just some topics I’ve found in other distributed computing projects about this:

Seti@HOME
---------
http://setiathome.ssl.berkeley.edu/bb/bb4/bboard.cgi?action=viewthread&num=2226

Folding@HOME
------------
http://forum.folding-community.org/viewtopic.php?t=2135

Jonski, if you have a different opinion than mine, you could also say so in a nicer way.

Originally posted by nlopes:
I don’t want to give up, but I need someone’s help because I don’t know anything about OpenGL. I have never programmed it before!

What I wanted to say with my post was not that it is generally a bad idea to do other calculations on a GPU, but that it is a bad idea to try it without any understanding of 3D graphics.

I don’t think you can just learn how to use a GPU without learning OpenGL and 3D graphics programming. Learning to use a GPU for calculations isn’t as simple as learning an assembler language.

nlopes:

My advice is: look for some tutorials on OpenGL (search for NeHe) and look into advanced 3D graphics (there are a lot of examples on the nVidia site). Try to understand some simple GPU algorithms like bump-mapping, …

Then, with a basic understanding of how a GPU works, it will be a lot easier to write other applications that compute something on a GPU.

[This message has been edited by Overmind (edited 04-29-2003).]

Originally posted by Overmind:

Learning to use a GPU for calculations isn’t as simple as learning an assembler language.

Are you joking?? Assembler is really difficult, and you are saying assembler is easier than this?? Oh my God!..

No, you got me wrong.

It is not exactly more difficult, but you have to learn more than a language. You have to learn an assembly language (that of the GPU), and you have to learn the way a GPU works, how to deliver data to it, how to process the output, and so on.

A GPU is designed for graphics. That means you can’t just write a program and let it process the data; you have to send the data as 3D coordinates and receive pixel data as output. That doesn’t mean the coordinates have to make sense in 3D space, and the image doesn’t have to look like anything, but that’s just the way a GPU works. It basically takes a set of coordinates and produces an image. The difficulty is not how to write a program for a GPU, but how to formulate an algorithm so that it is suitable for the way a GPU works.

A CPU executes a large stream of instructions on some randomly accessible data; a GPU executes a very limited stream of instructions on a huge, sequentially accessible stream of data. That’s just completely different, and before you start designing your own algorithms for a GPU, you should learn how it works. And the easiest way to do this is by learning 3D graphics, because there are many existing GPU algorithms you can study.
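
To make that data flow concrete, here is a minimal sketch of my own (not from anyone’s project, and assuming GLUT and plain OpenGL 1.1 are available): an array of numbers is uploaded as a texture, one quad covering the whole viewport is drawn so the GPU touches every value once, and the result is read back with glReadPixels. The “computation” itself is trivial, the fixed-function pipeline just copies the data through, but it shows the route everything has to take: array -> texture -> coordinates -> rasterizer -> framebuffer -> array.

/* Minimal GPU data-flow sketch (assumes GLUT and OpenGL 1.1). */
#include <GL/glut.h>
#include <stdio.h>

#define N 64                             /* we "process" an N x N grid of values */

static unsigned char input[N * N * 4];   /* input data, packed as RGBA bytes */
static unsigned char output[N * N * 4];  /* result read back from the GPU    */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    /* Draw one quad covering the whole viewport: the rasterizer generates
       exactly one fragment per data element. */
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    /* Receive the result as pixel data. */
    glReadPixels(0, 0, N, N, GL_RGBA, GL_UNSIGNED_BYTE, output);
    printf("first input byte: %d, first output byte: %d\n", input[0], output[0]);
    glFlush();
}

int main(int argc, char **argv)
{
    for (int i = 0; i < N * N * 4; i++)
        input[i] = (unsigned char)(i & 0xff);   /* some made-up data */

    glutInit(&argc, argv);
    glutInitWindowSize(N, N);                   /* one pixel per data element */
    glutCreateWindow("GPU data flow sketch");

    /* Send the data as a texture, because that is the only way in. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, N, N, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, input);
    glEnable(GL_TEXTURE_2D);

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

A real application would replace the fixed-function stage with a vertex/fragment program, but the surrounding plumbing stays the same.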

Btw.: It is not absolutely necessary to learn the assembly language of GPUs. Have a look at Cg on the nVidia homepage. It is a C-like language that is compiled for GPUs. But as I said, the language is only a part of the problem.

Originally posted by nlopes:
Are you joking?? Assembler is really difficult, and you are saying assembler is easier than this?? Oh my God!..

I can see how you might think so. Inexperienced programmers usually find an assembler-like language hard to learn just because it uses a “difficult” syntax. But in fact, any assembler language is fairly easy to learn once you understand the basic architecture of the CPU.

To implement an algorithm, usually targeted at the CPU, one needs to understand all the aspects surrounding it. Some of the key concepts would be:

- In-depth knowledge of the algorithm in question. In mathematics we usually rewrite an original algorithm in many different ways to be able to “apply” different known techniques to solve it.
- For a GPU implementation: extensive knowledge of the 3D graphics hardware/pipeline, so we know what part of an algorithm goes where. There is never just one solution. Many parts of the GPU can be used to derive/aid in obtaining the final answer.

If you know these two areas well, you are ready to go!

The tools available to you come in many flavors: DirectX, OpenGL, C/C++, assembler, etc. You choose the tool most appropriate for the task. Many people on this board will help you for free, if you are ready to accept it.

I am only concerned that some “driver tweaks”, for instance gamma settings, MAY affect the result of certain calculations.

Since this is something that may change from system to system, the results are not guaranteed to be the same.

As far as assembly is concerned: it’s the only “what you see is what you get” language and hence the only “reliable” language, since a compiler is just a stupid piece of software.

Originally posted by HS:
I am only concerned that some “driver tweaks”, for instance gamma settings, MAY affect the result of certain calculations.

Since this is something that may change from system to system, the results are not guaranteed to be the same.

True, but by doing all work in a linear space (CIE XYZ), and only at the end mapping to a monitor-specific RGB space followed by the non-linear gamma transformation, you could minimize this behaviour.
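
As a small illustration of that idea (my own sketch, not roffe’s code): all arithmetic stays in a linear space, and only the very last step applies the monitor-specific matrix and the non-linear gamma, so driver-side gamma tweaks only change what is displayed, not the intermediate results. The matrix below uses the commonly quoted XYZ-to-linear-sRGB (D65) coefficients, and a plain 1/2.2 exponent stands in for the real display transfer curve.

#include <cmath>
#include <cstdio>

struct RGB { float r, g, b; };

/* Convert a linear CIE XYZ value to displayable RGB as the very last step.
   All earlier arithmetic is done in XYZ (or any other linear space). */
RGB xyz_to_display_rgb(float X, float Y, float Z, float gamma = 2.2f)
{
    /* Linear part: XYZ -> linear RGB for sRGB primaries (D65 white point). */
    float r =  3.2406f * X - 1.5372f * Y - 0.4986f * Z;
    float g = -0.9689f * X + 1.8758f * Y + 0.0415f * Z;
    float b =  0.0557f * X - 0.2040f * Y + 1.0570f * Z;

    /* Non-linear part: the gamma transformation, applied only for display. */
    RGB out;
    out.r = std::pow(r > 0.0f ? r : 0.0f, 1.0f / gamma);
    out.g = std::pow(g > 0.0f ? g : 0.0f, 1.0f / gamma);
    out.b = std::pow(b > 0.0f ? b : 0.0f, 1.0f / gamma);
    return out;
}

int main()
{
    /* D65 white in XYZ should come out as roughly (1, 1, 1) on the display. */
    RGB white = xyz_to_display_rgb(0.9505f, 1.0f, 1.089f);
    std::printf("%.3f %.3f %.3f\n", white.r, white.g, white.b);
    return 0;
}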

Right, but can you guarantee that every piece of hardware will provide exactly the same results, independent of the system or settings?

Unless that is proven, it’s nice but useless for scientific calculations (floats are VERY inaccurate anyway).
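
To put a number on that (my own sketch, not part of HS’s post): single-precision floats carry only about 7 significant decimal digits, so even a plain summation drifts visibly, and the drift can differ between compilers and hardware.

#include <cstdio>

int main()
{
    float fsum = 0.0f;
    double dsum = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        fsum += 0.1f;   /* the exact answer would be 1,000,000 */
        dsum += 0.1;
    }
    std::printf("float sum:  %f\n", fsum);  /* noticeably off from 1000000 */
    std::printf("double sum: %f\n", dsum);  /* much closer */
    return 0;
}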

Originally posted by HS:
Right, but can you guarantee that every piece of hardware will provide exactly the same results, independent of the system or settings?

With today’s many different types of shaders and precisions, I would say no. Unless you limit yourself to supporting only a specific architecture, let’s say NV30’s fragment programs (FP32).


it’s nice but useless for scientific calculations (floats are VERY inaccurate anyway).

I wouldn’t go so far as to call it useless, but it is of course a limitation. But if you are aware of the limitation and can accept it (and measure it), I don’t see any problems. But what do I know.

Originally posted by roffe:
I wouldn’t go so far as to call it useless, but it is of course a limitation. But if you are aware of the limitation and can accept it (and measure it), I don’t see any problems. But what do I know.

Hmmmmmm, good point, I guess I just projected my needs. However, there are very few areas I know of (besides graphics) where I would consider floats to be accurate enough.

The answer to your questions might be here: http://www.cs.caltech.edu/courses/cs101.3/

There are also some GPU oriented papers at SIGGRAPH this year: http://www.cs.brown.edu/~tor/sig2003.html

Instead of using OpenGL, can I send instructions directly to the GPU?
How can I do this?? In gcc, how can I send commands to the GPU? And where can I learn about this??

Thanks all!

Originally posted by HS:
As far as assembly is concerned: it’s the only “what you see is what you get” language and hence the only “reliable” language, since a compiler is just a stupid piece of software.

So you never use any high level language? It’s MIPS, SPARC, and x86 for you, eh?

If you’re that worried about it, you shouldn’t trust hardware either. Hardware does have the occasional flaw…

Originally posted by nlopes:
Instead of using OpenGL, can I send instructions directly to the GPU?
How can I do this?? In gcc, how can I send commands to the GPU? And where can I learn about this??

You’re basically talking about working on the driver level here. Not only is the coding far more complex, but getting your hands on driver code would be difficult.

Additionally, (alas) you still need to understand GRAPHICS to use this approach. If you are set on using GH (graphics hardware) to do computation, you’re going to have to bite the bullet and learn an API (and, gasp, maybe learn some graphics!).

nlopes, at one level you can just about do this, but unfortunately, in order to feed input to the card and get it back, as well as set up registers in the conventional sense, you must learn graphics. GPUs still operate with a fairly restricted, pipelined streaming structure:

state setup, texture & vertex streaming -> programmable vertex -> programmable fragment -> raster operations

Currently there is a clear distinction between each of these stages, and two of them are entirely fixed-function-style OpenGL. Each programmable stage uses a significantly different instruction set, mainly related to data types and matrix support. In addition, the business of getting your data in and out of the GPU and retaining persistent data verges on the mind-bending.

Computational algorithms must often implement multiple programs and do things like store results to texture, read back from the framebuffer, send data in textures or vertex arrays with complex packed formats, do dependent texture reads, and worry a LOT about precision and internal casting. These products are just not general-purpose enough for someone to come along and simply program them in the way you hope. Knowledge of an instruction set and the ability to program are useless without a thorough understanding of the architectural issues surrounding OpenGL and the place of the programmable components within the fixed-function pipeline. In addition, product-specific knowledge beyond the OpenGL spec is desirable, as it relates to restrictions on program lengths, supported texture counts, internal precision, and the performance characteristics of target platforms.
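
As a rough sketch of that multipass pattern (my own illustration, not dorbie’s code, and assuming a current OpenGL context, a w x h viewport, and a bound power-of-two RGBA texture of the same size already holding the input data): each pass draws one quad, the framebuffer result is copied back into the texture so it becomes the input of the next pass, and only the final result is read back to the CPU.

#include <GL/gl.h>

/* One full computation: 'passes' iterations on the GPU, then one readback.
   The fragment stage here is fixed function; a real application would bind a
   fragment program before drawing. */
void run_passes(int w, int h, int passes, unsigned char *result /* w*h*4 bytes */)
{
    for (int pass = 0; pass < passes; ++pass) {
        /* One iteration of the algorithm: one fragment per output pixel. */
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();

        /* "Store results to texture": feed this pass's output back in. */
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
    }

    /* "Read back from framebuffer": only the final result leaves the GPU. */
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, result);
}

Everything the fragment stage is actually supposed to compute would live in a fragment program bound before the quad is drawn; the loop above is only the data plumbing.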

It’s getting better all the time but things will probably never be so general purpose that you can just program them the way you seem to want to.

[This message has been edited by dorbie (edited 05-01-2003).]

Originally posted by graphicsMan:
So you never use any high level language? It’s MIPS, SPARC, and x86 for you, eh?
If you’re that worried about it, you shouldn’t trust hardware either. Hardware does have the occasional flaw…

And that from someone who doesn’t even know how a linker works or what it does (see his “loading extensions, etc…” thread).

Of course, I use “high level” languages for trivial tasks (like windowing, files, sockets, etc…).

But I am very concerned when it comes to processing “scientific data”, like data from a polar expedition. You can bet I hand-code the algorithms, because I can’t trust a compiler that may take paths during compilation that have never been tested/debugged before…

Ever read the “dragon book”?

As for hardware, results are comparable between architectures; if two don’t match, take a third to find out which one is at fault (x86, USparc, PowerPC…).

[This message has been edited by HS (edited 05-01-2003).]

HS –

I didn’t intend to offend. I was merely trying to indicate that trust needs to be placed in various parts of the software development pipeline. Also, I agree that some data is more sensitive, and if you require 100% assurance (minus alpha particles intercepting your memory) that your computation is correct, every step should be taken - including being very wary of your compiler. And yes, I’ve read SOME of the Dragon Book (it wasn’t so riveting that I read the whole thing).

Again, sorry that I offended.

Brian

[This message has been edited by graphicsMan (edited 05-01-2003).]

Originally posted by HS:
But I am very concerned when it comes to processing “scientific data”, like data from a polar expedition. You can bet I hand-code the algorithms, because I can’t trust a compiler that may take paths during compilation that have never been tested/debugged before…

Seriously? A compiler is orders of magnitude more reliable than a human coding assembler.

I totally agree with Humus. Writing software in C/C++/Delphi is way safer than coding it in assembler. And I’m not even talking about debugging here…