Why is it that when I perform the same basic calculation on two different architectures and take the difference between the two results, they don't match exactly? The difference is never zero; it's always some tiny value, no greater than about 10^-6 and typically 10^-8 or smaller.
For example: on an ARM processor I compute A = 6/47 and save it.
Then on an x86 PC I also compute B = 6/47 and save it.
Then I load the saved result from the ARM processor and, on the PC, take the difference between the two numbers (A - B)...it comes out to be a ridiculously small nonzero number.
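For concreteness, here's a minimal C sketch of the kind of thing I'm doing (the use of `double` and the variable names are just my assumptions for illustration; in reality A is computed and saved on the ARM board while B is computed on the PC, so running this on a single machine prints exactly 0):

```c
#include <stdio.h>

int main(void) {
    /* Note: plain 6/47 in C is integer division and yields 0,
       so the constants are written as floating-point literals. */
    double a = 6.0 / 47.0;   /* in reality: computed and saved on the ARM board */
    double b = 6.0 / 47.0;   /* in reality: computed on the x86 PC */

    /* The difference I inspect after loading A over onto the PC: */
    printf("a - b = %.20g\n", a - b);

    /* %a (C99) prints the exact hex representation of each double,
       which shows whether the two values are bit-for-bit identical. */
    printf("a = %a\nb = %a\n", a, b);
    return 0;
}
```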
Can anyone comment on this? Can anyone point me to a journal article or review that explains it? I don't know what this phenomenon is called.