emulating fixed point hardware in floating point

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
Can 100% accuracy ever be achieved? If so, can it always be achieved with good programming, documentation, etc.?

I had been wondering about this.

I know that fixed point hardware can't emulate floating point with 100% accuracy.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Can 100% accuracy ever be achieved? If so, can it always be achieved with good programming, documentation, etc.?

I had been wondering about this.

I know that fixed point hardware can't emulate floating point with 100% accuracy.

Of course you can construct algorithms based on doubles (floating-point math) that emulate the output of a fixed-point hardware setup.

Is that the question? Maybe I'm not understanding what you are asking.
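A minimal sketch of what this looks like (my own hypothetical example, not anyone's production code): a Q1.15 fixed-point multiplier with truncation, emulated in doubles. The trick is that every intermediate value is an integer well below 2^53, so the double-based path reproduces the integer hardware path exactly.

```python
import math

def q15_mul_hw(a: int, b: int) -> int:
    """Reference: what a Q1.15 multiplier with truncation outputs."""
    return (a * b) >> 15  # integer hardware path (arithmetic shift)

def q15_mul_double(a: int, b: int) -> int:
    """The same operation carried out entirely in doubles."""
    prod = float(a) * float(b)          # exact: |a*b| <= 2^30 << 2^53
    return math.floor(prod / 32768.0)   # dividing by 2^15 is exact; floor
                                        # matches the arithmetic >> 15

for a, b in [(12345, 6789), (-20000, 31000), (32767, 32767)]:
    assert q15_mul_hw(a, b) == q15_mul_double(a, b)
```

The same scheme extends to adds, accumulates, and saturation, as long as no intermediate integer exceeds 2^53 in magnitude.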
 

nismotigerwvu

Golden Member
May 13, 2004
1,568
33
91
Yeah, to be honest, in an old lab program (it ran in IGOR) that I was porting from 16-bit to 64-bit, I ended up doing just this. I wasn't really familiar with the syntax (or the changes from something like 15 years' worth of revisions), so as a last-ditch effort I just did a search-and-replace from int8s and int16s to all floats. Considering the program was incredibly simple and designed to run on a 33 MHz 386SX, I doubt the lack of optimization really mattered. My graduate adviser never really did stop bragging about how her biochemist was every bit as talented as a CS major (and my ego wouldn't let me correct her on just how wrong she was).
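For what it's worth, that blanket replacement is safe on correctness grounds, not just luck: every int16 value is exactly representable in a float (even IEEE single precision's 24-bit significand comfortably covers ±32767, and a double's 53-bit significand covers all integers up to 2^53). A quick sanity check in Python:

```python
# Verify that every int16 value survives the round trip through a double.
# (A double holds every integer with magnitude up to 2^53 exactly, so the
# int8/int16 -> float substitution cannot change any stored value.)
all_exact = all(float(n) == n for n in range(-32768, 32768))
print(all_exact)  # True
```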
 

nismotigerwvu

Golden Member
May 13, 2004
1,568
33
91
Not to completely thread-jack, but IDC, what would you estimate the die size of a 386 would be now on a modern process? Something in the back of my mind tells me that after the initial die shrink (I'm guessing down to 1 µm) it was something like ~40 mm², but again, this is pretty hazy. I know they were the rad-hard chips of choice long after their desktop usefulness had passed, so I wasn't sure if they had been shrunk further.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Not to completely thread-jack, but IDC, what would you estimate the die size of a 386 would be now on a modern process? Something in the back of my mind tells me that after the initial die shrink (I'm guessing down to 1 µm) it was something like ~40 mm², but again, this is pretty hazy. I know they were the rad-hard chips of choice long after their desktop usefulness had passed, so I wasn't sure if they had been shrunk further.



A 386 with its paltry 275k xtors on Intel's 32nm would occupy somewhere around 0.07 mm^2 :eek:
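The arithmetic behind a figure like that is straightforward. The 275k transistor count is from the post; the ~4 million transistors/mm² logic density for Intel's 32nm node is my own assumption, picked purely to illustrate how the estimate falls out:

```python
# Back-of-the-envelope check of the ~0.07 mm^2 figure.
transistors = 275_000          # 386 transistor count (from the post)
density_per_mm2 = 4_000_000    # assumed 32nm logic density (illustrative)
area = transistors / density_per_mm2
print(f"{area:.3f} mm^2")      # ~0.069 mm^2
```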
 

Magic Carpet

Diamond Member
Oct 2, 2011
3,477
234
106
A 386 with its paltry 275k xtors on Intel's 32nm would occupy somewhere around 0.07 mm^2 :eek:
How difficult/expensive would that be to manufacture?

Say, I have $10M in my bank account. Would that be enough for a few samples?
 

WhoBeDaPlaya

Diamond Member
Sep 15, 2000
7,415
404
126
For some reason, this thread brings Carmack's 2-iteration Newton-Raphson (NR) inverse-square-root computation to mind.
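For anyone who hasn't seen it: the famous Quake III Arena routine computes 1/sqrt(x) by reinterpreting a 32-bit float's bits as an integer, applying the magic constant 0x5F3759DF for an initial guess, then refining with Newton-Raphson (the shipped C code ran one iteration, with the second commented out). A rough Python sketch of the same bit trick, using `struct` to do the float/int reinterpretation:

```python
import struct

def fast_inv_sqrt(x: float, iterations: int = 2) -> float:
    """Approximate 1/sqrt(x) via the Quake III bit-level trick."""
    # Reinterpret the 32-bit float's bits as an unsigned integer.
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5F3759DF - (i >> 1)               # magic-constant initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    for _ in range(iterations):             # Newton-Raphson refinement
        y = y * (1.5 - 0.5 * x * y * y)
    return y
```

With two iterations the result lands within a small fraction of a percent of the true value, e.g. `fast_inv_sqrt(4.0)` is very close to 0.5.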
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
How difficult/expensive would that be to manufacture?

Say, I have $10M in my bank account. Would that be enough for a few samples?

If you were Intel and wanted to do it, given that they'd already have the IP and pre-existing designs to build on for the shrink, as well as having already put 32nm into production, it would cost them maybe $10M-$20M to get the first lot of wafers out of the fab (maybe even more, depending on how they internally account for the overall operating expenses of the business, plus the apportioned depreciation of assets and so on).

The majority of the cost would be in the masksets at that point, and the die of course won't be 0.07mm^2 because the IO stuff would dominate the size of the chip.

Question is - what would you do with it?
 

Absolution75

Senior member
Dec 3, 2007
983
3
81
You won't ever be able to represent all numbers in any particular finite format, be it fixed or floating point. You'd need an infinite number of bits, since there are infinitely many possible numbers.
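A quick illustration of that quantization with doubles (a hypothetical example of mine, just to make the point concrete): 0.1 has no finite binary representation, so the value actually stored is slightly off, and the error shows up in plain arithmetic.

```python
from decimal import Decimal

# The exact value a double stores for the literal 0.1:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# That stored error is visible in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)  # False
```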
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
You won't ever be able to represent all numbers in any particular finite format, be it fixed or floating point. You'd need an infinite number of bits, since there are infinitely many possible numbers.

I don't think that's what the OP was asking.

Neither fixed-point hardware nor doubles (floating point) are analog, so of course both are limited to machine precision and the quantization error will nearly always be non-zero.

But the question was more whether you can use doubles math to emulate the end result of using fixed-point hardware, to which the answer is yes.
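To make the distinction concrete (my own toy example): raw doubles drift when you repeatedly add an inexact value like 0.01, but a fixed-point emulation, i.e. scaled integers that merely happen to be stored in doubles, stays exact, because every intermediate is an exactly representable integer.

```python
# Naive doubles: summing the inexact 0.01 a thousand times drifts.
naive = sum(0.01 for _ in range(1000))

# Fixed-point style: work in scaled integer units (cents), rescale once.
cents = sum(1.0 for _ in range(1000)) / 100.0  # 1000.0 / 100.0 is exact

print(naive == 10.0)  # False: accumulated rounding error
print(cents == 10.0)  # True: fixed-point emulation is exact
```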
