I have a question that may sound stupid, but it's still quite interesting to me (at least). Here it is: can I use the co-processor for floating-point calculations? I don't know much about how it works in Windows and OpenGL, but when I was writing for DOS it was possible. It always makes the program more efficient, doesn't it? Are there any downsides?

Thanks !

I don't really know much about assembler, but as far as I know, current drivers always try to use the CPU as effectively as possible. So if you have 3DNow! or Intel's counterpart, a good driver already uses it.

However, if you do your own matrix or vector calculations, which are performed by the CPU anyway, you should be able to speed that up with assembler.

I can't guarantee that drivers work this way, but from everything I've read, I think they do.


Considering math co-processors have been built into the CPU since the 486 (and also the 386DX, if I remember right), I would think most modern compilers already take advantage of it. I know both the VC++ and Borland compilers have flags to optimize code for a given processor. (With VC++ the options are 386, 486, Pentium, Pentium Pro, and Blend, which I assume tries to do some sort of blend.)

Ok. Thanks

The 486DX had a math coprocessor and the 486SX didn't.

The 386DX had a 32-bit memory interface while the SX only had a 16-bit one. Neither came with a built-in FPU; you had to buy the 387 co-processor for that.

It’s all built into the CPU or GPU now, depending on what your hardware is.

Side note: I have an old 386 board with the math coprocessor installed hanging on my wall. I was the only kid on my block to have one, not that it helped much in most of the applications I ran at the time. Now it’s just a gigantic piece of nostalgia. (Gigantic physically, this thing is about 18"x16")