distributed computing project

Hmmmm, and here I thought people coded in assembler for performance, not because they didn't trust the HL compiler??

You can find information on the bugs a particular compiler has on the web and just code around them.

HS, I imagine you have some strict guidelines to follow, as in “no choice but to do this”.

I have never had any problems debugging assembly. Why should it be any more difficult than debugging high-level code?

Oh, I almost forgot. About gamma correction: I guess you can skip it if you use render-to-texture.

Gotta agree with Humus, and then there’s the whole productivity thing.

What is a pole expedition, and why would it stress unusual compiler paths? In my experience, scientific code is one of the simpler kinds of code to deal with. Compilers can be very reliable nowadays and do a good job of generating efficient and correct code.

I did read the whole of the Dragon Book. (Aho, Sethi, etc?) It’s old. It only covers the basics, and modern compilers are much more sophisticated. What in the Dragon Book taught you to mistrust the compiler?

-Won

Sorry, but the discussion here is about doing scientific calculations on a GPU, not about whether a high-level compiler is reliable or whether it is easier to debug an assembler program.

Thanks everybody that helped me,
Nuno Lopes

Sorry for the late response and nlopes I will get to your question later…

If you don't have any objections, I'll answer all the posts in one shot…

"pole expedion like in “Northpole expedition” the data is invalueable since it cant be “regenerated” not even by a new expedition (the ice moved since that).

The “routes” a compiler can take to compile a file are infinite and therefore unpredictable:

for (int i = 0; i < something; i++)
{
    blah(i);
}

is, by pure logic, the same as:

int i = 10;
for (i = 0; i < something; )
{
    blah(i++);
}

The results “should” be the same, right?
They aren't: the compiler will take different “routes” to build the object code.

That's why it takes years for a compiler to “mature”.

Humus, from what I've read you have a very good understanding of 3D graphics, but your statement that “a compiler is better than the human brain” is, hmmmm, mistaken?

Don't get me wrong, a compiler may generate “faster” code, but it may just as well “optimize” the hell out of the code and always generate the same result: “42”.

I guess I am fighting windmills, but one day you may find that what I talked about wasn't nonsense. Compilers are “stupid”, believe it or not.

As for using the GPU for “scientific” calculations…

I personally don't know of ONE problem for which I would be satisfied with (a) 32-bit float result(s)…

[This message has been edited by HS (edited 05-02-2003).]

Yes, compilers are stupid and may have bugs. But what makes you think that your handwritten code will be more correct and less buggy? You’re not assuming you’re not going to make mistakes, are you? Making mistakes is the very nature of human beings, much more so than machines.

Originally posted by Humus:
Yes, compilers are stupid and may have bugs. But what makes you think that your handwritten code will be more correct and less buggy? You’re not assuming you’re not going to make mistakes, are you? Making mistakes is the very nature of human beings, much more so than machines.

Over 15 years of coding assembly and the understanding of what I code, perhaps?

If you think both of those hold for any “compiler”, you don't know what you are talking about.

Of course I make mistakes, but what makes you think a (human-written) compiler doesn't (especially if it's written by someone who doesn't know ANYTHING about the problem)?

That's pretty short-sighted if you ask me.

This is of course WAY off topic…

[This message has been edited by HS (edited 05-02-2003).]

I have 8 years of coding experience myself, and yes, I generally know what I'm doing while coding, but I still make mistakes pretty much every time I code something. It's human nature, and the very reason we have debugging tools in our development environments. I sure understand my code, and yes, I can write and understand assembler just fine too, but I can guarantee that my C++ code is way more reliable than my assembler code.

Compilers have a long track record of proven reliability. The whole software industry lives off code generated by compilers, so any compiler bugs are soon found and fixed. So yes, the compiler is way more reliable than the code that you feed it with.

Even when PartitionMagic reorders my partitions and a fault would cause all my HD data to be lost, I don't get paranoid about the possibility that the compiler-generated code is faulty. The chance of the original source code being faulty is way higher, so if I'm going to worry, I'll worry about that.

The problem is, regardless of how much experience you have, you're always going to make mistakes. Bugs will always be present in all software, even in tiny apps. The chance of something breaking because of a programmer error is way higher than the chance of it breaking because of a compiler error. By avoiding the compiler you may escape the infinitesimal probability that the compiler generates faulty code, but you're increasing the chances of human error several times over. And if you're going to be paranoid about the compiler, what guarantees are there that your assembler code will actually translate to the right binary code? CPUs have errata; do you design your own processor to work around that? (Something compilers can do automatically, by the way…) What about the OS: will it read the correct HD block when you launch your app, and will your app actually get the HD block it requests when you load your data from disk? Why trust the OS when you don't trust the compiler? The OS is more likely to fail than the compiler, IMO.

Hi again,

Can we get back to the main topic, please?

Thanks,
Nuno Lopes

Originally posted by nlopes:
[b]Hi again,

Can we get back to the main topic, please?

Thanks,
Nuno Lopes[/b]

Nuno -

Sorry, but I really think that you’re out of luck trying to do arbitrary precision stuff on the card… Who knows, maybe in a couple of card generations, it will be more feasible, but right now I think that trying to do anything with better than 32 bit fp precision is going to be super difficult - and probably faster on a CPU.
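Just to illustrate what “better than 32-bit” involves, here is a minimal CPU-side C++ sketch of the well-known “float-float” (double-single) trick: storing a value as an unevaluated sum of two 32-bit floats. The function names and the test numbers are purely illustrative, and doing the same thing inside a fragment program would be considerably harder:

#include &lt;cstdio&gt;

// A value stored as an unevaluated sum of two floats (hi + lo).
struct FloatFloat
{
    float hi;   // leading component
    float lo;   // error term, much smaller in magnitude than hi
};

// Error-free addition of two floats (Knuth's TwoSum).
FloatFloat twoSum(float a, float b)
{
    FloatFloat r;
    float s = a + b;
    float v = s - a;
    r.hi = s;
    r.lo = (a - (s - v)) + (b - v);
    return r;
}

// Add a plain float to a FloatFloat value.
FloatFloat add(FloatFloat x, float y)
{
    FloatFloat t = twoSum(x.hi, y);
    t.lo += x.lo;
    return twoSum(t.hi, t.lo);
}

int main()
{
    FloatFloat acc;
    acc.hi = 0.0f;
    acc.lo = 0.0f;
    for (int i = 0; i < 1000000; ++i)
        acc = add(acc, 0.1f);   // naive float accumulation drifts far more
    std::printf("sum = %f (+ %g correction)\n", acc.hi, acc.lo);
    return 0;
}

Note that aggressive compiler optimizations (fast-math style reassociation) can break this kind of code, which is a little ironic given the rest of this thread.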

If you can get away with using single-precision floats, then go ahead and learn OpenGL or D3D, and put that ol' GPU to work.

Brian

I downloaded NVIDIA's OpenGL SDK and I saw that there is a folder named nv_math. Does this folder have anything to do with this topic?

Did you look at it?

I don't know anything about OpenGL or graphics programming. I must start learning OpenGL ASAP.

You can bet I hand-code the algorithms, because I can't trust a compiler that may take paths in the compilation that have never been tested/debugged before…

Sounds quite ridiculous to me. Compilers have been tested by thousands, if not millions, of people for years, are based on well-known techniques, and are quite bug-proof nowadays. The rare bugs I've encountered were in the IDE, not the compiler itself. I've never seen incorrect code generated by my compiler, and I've been programming for 13 years.

And what do you mean by “hand-code”? Are you coding directly in machine code? If so, good luck. Because, as far as I remember, assembler code is also translated by a program.

Back on-topic: nv_math is a small maths library; it has nothing to do with the graphics card. GPUs are fast because they work asynchronously with the CPU, and the only way to take advantage of the GPU is to use it asynchronously. If you submit some data to a graphics card for processing (independently of which card it is or whether it can even do it, which is an entirely different problem), and you need the result of that computation before the CPU can continue its work, then you've lost any advantage of this architecture, because the CPU will sit idle, waiting for the GPU.

Rendering is a different matter, because the CPU (generally) doesn't need to know what's currently displayed on screen in order to continue its processing. And when it does, guess what? Performance drops dramatically. Try reading the color buffer or the Z-buffer back, and watch your renderer run at software speeds.
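To make the readback point concrete, here is a minimal sketch (plain OpenGL 1.x, assuming a current context that has already rendered the results; the function name and parameters are just illustrative):

#ifdef _WIN32
#include &lt;windows.h&gt;   // must come before gl.h on Windows
#endif
#include &lt;GL/gl.h&gt;
#include &lt;vector&gt;

// Read the color buffer back into CPU memory.  glReadPixels cannot return
// until every queued GL command has finished, so the CPU stalls while the
// GPU drains its pipeline -- exactly the synchronization cost described above.
std::vector&lt;float&gt; readBackColorBuffer(int width, int height)
{
    std::vector&lt;float&gt; pixels(width * height * 4);   // RGBA per pixel

    // Blocks until the GPU has produced every pixel we ask for.
    glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, &pixels[0]);

    return pixels;
}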

Y.

First of all, sorry about my broken English.
Second: you can use OpenGL for calculations.
1. You need to create a hardware-accelerated OpenGL rendering context. Be careful: you may end up with a slow software rendering context instead (a quick sanity check is sketched below).
2. You must learn something about OpenGL matrix operations.
3. Precision is not that important for computer vision. 3D accelerators are very strong at matrix computations, and in computer vision you can choose “matrix-oriented” algorithms for the first few stages. The really hard work for the computer is the identification of objects, which is a database or AI problem. Sorry, my knowledge of computer vision is not very deep. But if you need fast matrix computations without high precision, you can find some examples on www.opengl.org and learn more from the SDK help.
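For point 1, a minimal sketch of the usual sanity check (the “GDI Generic” string comparison is a common heuristic for Microsoft's software renderer, not something the spec guarantees):

#ifdef _WIN32
#include &lt;windows.h&gt;   // must come before gl.h on Windows
#endif
#include &lt;GL/gl.h&gt;
#include &lt;cstdio&gt;
#include &lt;cstring&gt;

// Call with a current OpenGL context; returns false if we appear to have
// fallen back to the generic software implementation.
bool looksHardwareAccelerated()
{
    const char* vendor   = reinterpret_cast&lt;const char*&gt;(glGetString(GL_VENDOR));
    const char* renderer = reinterpret_cast&lt;const char*&gt;(glGetString(GL_RENDERER));

    if (!vendor || !renderer)
        return false;   // no current context

    std::printf("GL_VENDOR:   %s\nGL_RENDERER: %s\n", vendor, renderer);

    // Microsoft's generic software implementation reports "GDI Generic".
    return std::strstr(renderer, "GDI Generic") == NULL;
}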

Happened to stumble upon this:
Using Modern Graphics Architectures for General-Purpose Computing: A Framework and Analysis

I just quickly browsed it, and they seem to have compiled it with MSVC 6.0 only. It's a start, but this kind of use of GPUs is not for the faint of heart.

EDIT: where is the preview functionality?-)

[This message has been edited by Macroz (edited 05-08-2003).]

Just a note to say that ATI developer relations has already answered me. They said that they will try to collect some information for me!

Great!! ATI seems to want to help me!!
Shame on nVIDIA, which hasn't answered me…

“I want to write an English book, can anybody give me some example sentences? I don't know English yet! Can I use the English language to write a book?”.