Why do different architectures seem to not produce the exact same results?

Status
Not open for further replies.

magomago

Lifer
Sep 28, 2002
10,973
14
76
Why is it that if I perform the same basic calculation on two different architectures and then take the difference between the two results, they don't match exactly? The difference is always tiny, never greater than about 10^-6, typically 10^-8 or smaller.

e.g.: on an ARM processor I compute A = 6/47. I save this.

Then on an x86 PC I also compute B = 6/47. I save this.

Then I load the value saved on the ARM processor and, using the PC, find the difference between the two numbers (A - B)... it comes out to be a small, ridiculous number.
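A rough way to reproduce the effect on a single machine, assuming (my guess, not something I've verified) that one side effectively computed in single precision and the other in double:

```python
import struct

# 6/47 computed as an IEEE 754 double (64-bit), what a typical
# x86 build working in doubles would give
b = 6 / 47

# the same value rounded through single precision (32-bit), what a
# build storing plain 32-bit floats would effectively keep
a = struct.unpack('f', struct.pack('f', b))[0]

print(abs(a - b))  # a tiny non-zero difference, well below 1e-7
```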


Can anyone comment on this? Can anyone point me to a journal article or review that explains this? I don't know what this phenomenon is called.
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
I'm not sure whether to go into detail regarding different methods of floating point calculation and representation....

....or just go over the simple fact that CPUs aren't infinitely precise, and the precision of your numbers depends on how many bits you feel like allocating to them (double precision / single precision / x87 extended precision).
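Either way, here's a two-line illustration of the finite-precision point (Python, whose floats are IEEE 754 doubles):

```python
# 0.1 has no finite binary expansion, so a 64-bit double stores the
# nearest representable value; printing extra digits exposes it
print(f"{0.1:.20f}")     # 0.10000000000000000555
print(0.1 + 0.2 == 0.3)  # False: each term carries its own rounding error
```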

So what do you want?
 

0800peter

Junior Member
Jan 31, 2013
7
0
0
Why is it that if I perform the same basic calculation on two different architectures and then take the difference between the two results, they don't match exactly?
...
Can anyone point me to a journal article or review that explains this? I don't know what this phenomenon is called.

possibly http://en.wikipedia.org/wiki/Machine_epsilon
" .. Machine epsilon gives an upper bound on the relative error due to rounding in floating point arithmetic. .."

or this page http://www.zinnamturm.eu/downloadsAC.htm#CpcFloat
" ..
Diagnosing floating point calculations precision.

Do you have numeric problems with real numbers? This is not a specific problem of Component Pascal. All programming languages have the same kind of problem. Here is a little module to determine the precision of floating point calculations. The result depends on the floating point implementation on your machine and not of the programming language definition.
.."


These may help shed some light on the matter.
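The precision-probing idea behind both links can be sketched in a few lines (Python here; my own illustration, not the module from that page):

```python
import sys

# Halve eps until 1 + eps is no longer distinguishable from 1;
# the last distinguishable value is the machine epsilon
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2

print(eps)                            # 2.220446049250313e-16 for IEEE 754 doubles
print(eps == sys.float_info.epsilon)  # True: matches the runtime's own report
```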

br
peter
 

piasabird

Lifer
Feb 6, 2002
17,168
60
91
It is quite possible that the processor performs division at a default precision. In programming languages, a numeric variable usually has a set precision (a maximum number of digits); integer versus long integer is one example. Sometimes the way results are displayed causes rounding as part of the display. A lot depends on what you are doing with the numbers. For instance, if you were computing tax based on a percentage, there might be an accounting rule that says you never compute past the fourth decimal place, i.e. "9.0000". Otherwise it just gets to the point of absurdity.

I have seen some programs for calculators that do division and give the result in a text format which can be limited to the size of the text field.

What I am trying to say is that the computer may get a specific result, but the way it is displayed may cause either rounding or exponential notation. The variable it is stored in may have a limit as well.
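A quick illustration of display rounding versus the stored value (Python):

```python
value = 2 / 3

print(value)           # 0.6666666666666666 -- repr shows ~17 digits
print(f"{value:.4f}")  # 0.6667 -- the display is rounded; the stored value is not
print(f"{value:.2e}")  # 6.67e-01 -- exponential notation, same stored value
```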
 

ghost03

Senior member
Jul 26, 2004
372
0
76
It depends on a few things, not limited to:

1.) The programming language
2.) The data type being used in the programming language
3.) The instruction set/architecture

For 1.), different programming languages handle storage and round-off differently. For example, one might round 8.00015 to 8.0002 and another to 8.0001. Most should conform to the IEEE 754 floating-point standard.

For 2.), a developer has a fair amount of control over precision. Do I record the number as 8.00015, or do I just record it as 8.000, or maybe even 8?

For 3.), some architectures impose limits on the available precision. For example, an Intel i7 has no problem adding 8.000000001 to 8.000000005, because its floating-point registers (where the numbers are held before adding) carry enough bits. Most pocket calculators, on the other hand, physically cannot load numbers this precise.
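To make 3.) concrete, here's what a 64-bit double (what Python uses; roughly 15-16 significant decimal digits) does with those operands:

```python
a = 8.000000001
b = 8.000000005

# A double keeps ~15-16 significant decimal digits, so the 1e-9-scale
# detail in the operands survives the addition
total = a + b
print(total == 16.0)  # False: the low-order digits were not rounded away
print(total)          # approximately 16.000000006
```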

Now, on the other hand, if you run the same arithmetic in the same language on the same chip and still get different results, then you are in trouble.
 