How are floating points stored in memory?

Discussion in 'Highly Technical' started by xtknight, Feb 15, 2006.

  1. xtknight

    xtknight Elite Member

    Joined:
    Oct 15, 2004
    Messages:
    12,974
    Likes Received:
    0
    Well with integers, if you have 8 bits, then the number 52 would look like the following:

    00110100

    But what would 5.23 look like in memory as an 8-bit floating point (are 8 bits even enough for that)? How does binary accommodate places after the decimal point (tenths, hundredths, thousandths, etc.)?
     
  2. Bassyhead

    Bassyhead Diamond Member

    Joined:
    Nov 19, 2001
    Messages:
    4,545
    Likes Received:
    0
  3. Matthias99

    Matthias99 Diamond Member

    Joined:
    Oct 7, 2003
    Messages:
    8,808
    Likes Received:
    0
    By using a floating-point number format. :confused:

    It's actually simpler to consider fixed-point binary numbers first; in this case, some of the bits are used for the part before the decimal point, and some of them are used for the part after the decimal point.

    In any case, you have to treat the part after the decimal point as a sum of binary fractions. It's treated the same way as any other number base; the first digit after the decimal point is 2^(-1), then 2^(-2), then 2^(-3), etc.

    So, with a fixed-point format with four bits for each 'side':

    0100.0101

    would be:

    0*2^3 + 1*2^2 + 0*2^1 + 0*2^0 + 0*2^(-1) + 1*2^(-2) + 0*2^(-3) + 1*2^(-4)

    or:

    4 + 1/4 + 1/16

    or 4.3125 in base 10.
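
    To make that concrete, here's a small C sketch (just an illustration; the 4.4 layout and the function name are made up for this example) that decodes an 8-bit value laid out as four integer bits and four fraction bits:

    #include <stdio.h>

    /* Decode an 8-bit value in a 4.4 fixed-point layout: the high nibble is the
       integer part, the low nibble counts sixteenths. */
    double fixed44_to_double(unsigned char x)
    {
        return (x >> 4) + (x & 0x0F) / 16.0;
    }

    int main(void)
    {
        printf("%f\n", fixed44_to_double(0x45));  /* 0100.0101 -> prints 4.312500 */
        return 0;
    }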

    A so-called 'floating-point' format uses some of the bits of the number to specify how many bits are on each side of the decimal point. The IEEE floating-point formats used in most of today's computers work like this, although it is usually interpreted as a fixed-point real number 'mantissa' and an 'exponent' that is used to shift the decimal point (as in scientific notation).
     
  4. blahblah99

    blahblah99 Platinum Member

    Joined:
    Oct 10, 2000
    Messages:
    2,689
    Likes Received:
    0
    There's an IEEE specification for storing floating-point numbers; a double-precision value usually takes 8 bytes of memory.
     
  5. interchange

    interchange Platinum Member

    Joined:
    Oct 10, 1999
    Messages:
    2,096
    Likes Received:
    4
    There are single precision floating points (float data type in C/C++) and double precision floating points (double data type in C/C++). Single precision requires 32 bits, and double precision requires 64 bits.

    Floating point numbers are stored in this format:
    m x b^e

    Where m is the mantissa (the significand), b is the base (2 in our case), and e is the exponent.

    For single precision floating point, we have the following:
    1 sign bit
    8 exponent bits (biased by 127)
    23 mantissa bits

    For double precision floating point, we have:
    1 sign bit
    11 exponent bits (biased by 1023)
    52 mantissa bits

    To turn a number, say -143.40625, into single precision floating point, first let's write its binary representation with the decimal point in there: 10001111.01101 (note: each bit after the decimal point is worth 1/2, 1/4, 1/8, etc.).

    Then we normalize it so that it's 1.xxxxx * 2^e, which means moving the decimal point over 7 places: 1.000111101101 * 2^7.

    So our exponent is 7, but it needs to be biased by 127. (This is done so that we can support really tiny numbers; e.g., if we had a fraction with an exponent of -104, we couldn't store -104 in an unsigned field, so we bias it by 127 and store 23 instead.) In our case, the exponent becomes 134, or 10000110 in binary.

    Our mantissa is 000111101101, filled out with zeroes to a length of 23 bits, it's 00011110110100000000000.

    Our sign bit is 1 because it's a negative number, so we get:
    1 10000110 00011110110100000000000
    or 0xC30F6800 in hexadecimal.
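
    If you want to sanity-check that result on a real machine (assuming it uses IEEE 754 single precision, which nearly every current platform does, though C itself doesn't strictly require it), a quick sketch like this should print the same fields:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float f = -143.40625f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);   /* reinterpret the 32 bits of the float */

        printf("bits     = 0x%08X\n", (unsigned)bits);                        /* 0xC30F6800 */
        printf("sign     = %u\n", (unsigned)(bits >> 31));                    /* 1 */
        printf("exponent = %u (biased)\n", (unsigned)((bits >> 23) & 0xFF));  /* 134 */
        printf("mantissa = 0x%06X\n", (unsigned)(bits & 0x7FFFFF));           /* 0x0F6800 */
        return 0;
    }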

    Hope that made sense...
     
  6. xtknight

    xtknight Elite Member

    Joined:
    Oct 15, 2004
    Messages:
    12,974
    Likes Received:
    0
    Thanks for the explanations guys, especially interchange! I understand it now for the most part. But what is bias?
     
  7. Born2bwire

    Born2bwire Diamond Member

    Joined:
    Oct 28, 2005
    Messages:
    9,843
    Likes Received:
    0
    Normally, negative numbers are represented by designating the MSb as the sign bit. This solves the problem of differentiating between positive and negative numbers, but halves the range of possible values. To avoid this, the exponent is biased by 127 or 1023 so that the stored value is always positive. So if the exponent stored in memory is 0, the actual exponent is -127 or -1023, depending upon the precision.
     
  8. smack Down

    smack Down Diamond Member

    Joined:
    Sep 10, 2005
    Messages:
    4,507
    Likes Received:
    0
    Most of the time, negative numbers are represented in 2's complement format because it allows the same hardware to do addition and subtraction. Using a sign bit only wastes one value: you get both a plus and a minus zero. The bias is used for no good reason.
     
  9. xtknight

    xtknight Elite Member

    Joined:
    Oct 15, 2004
    Messages:
    12,974
    Likes Received:
    0
    Sorry, I'm not exceptionally strong in mathematics and I'm having a hard time finding out what the term 'bias' means in this context. Is it just a multiplier or divisor or what?
     
  10. Born2bwire

    Born2bwire Diamond Member

    Joined:
    Oct 28, 2005
    Messages:
    9,843
    Likes Received:
    0
    Did I write that crap? Geez, that's it. New rule. No more posting when I've had only four hours of sleep. Obviously, while the maximum value of a number with a sign bit is half of what it was originally, you have both the positive and negative range, which effectively doubles the number of values. Thus, the sign bit decreases the maximum absolute value that can be represented, but does not decrease the range of values. Go college education. :roll:

    Bias is just the term they use to mean offset. The exponent stored in memory is the sum of the actual exponent and the bias. Hence, 2^10 would have 137 (for single precision) stored as its exponent field in memory.
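
    A little sketch of that (assuming IEEE 754 single precision, as on pretty much any current machine): pulling the exponent field out of 2^10 and removing the bias gives back 10.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float f = 1024.0f;   /* 2^10 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);

        unsigned stored = (unsigned)((bits >> 23) & 0xFF);   /* biased exponent field */
        printf("stored = %u, actual = %d\n", stored, (int)stored - 127);
        /* prints: stored = 137, actual = 10 */
        return 0;
    }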
     
  11. interchange

    interchange Platinum Member

    Joined:
    Oct 10, 1999
    Messages:
    2,096
    Likes Received:
    4
    I'm not sure why they don't use 2's complement for the exponent. It works as well as having an offset. The range would be -128 to 127...whereas the range with a 127 offset is -127 to 128. Oh well...it's just the method they went with to ensure that you can have negative exponents to represent very tiny numbers.
     
  12. CTho9305

    CTho9305 Elite Member

    Joined:
    Jul 26, 2000
    Messages:
    9,214
    Likes Received:
    0
    The nice thing about the way they do it is that you can check for > and < with standard integer comparison logic: for two positive floats, the larger float always has the larger bit pattern when interpreted as an integer (negative floats are stored sign-magnitude, so they need a little extra handling).
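
    For example, something like this (a rough sketch; it only covers non-negative, non-NaN values) compares two floats purely through their bit patterns:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    /* Compare two non-negative IEEE 754 floats by their bit patterns alone.
       Negative values and NaNs need extra handling. */
    int less_by_bits(float a, float b)
    {
        uint32_t ua, ub;
        memcpy(&ua, &a, sizeof ua);
        memcpy(&ub, &b, sizeof ub);
        return ua < ub;
    }

    int main(void)
    {
        printf("%d\n", less_by_bits(1.5f, 2.25f));   /* 1, same answer as 1.5f < 2.25f */
        printf("%d\n", less_by_bits(2.25f, 1.5f));   /* 0 */
        return 0;
    }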
     
  13. smack Down

    smack Down Diamond Member

    Joined:
    Sep 10, 2005
    Messages:
    4,507
    Likes Received:
    0
    I don't think that is true, but I haven't looked in detail at how floating point defines < and >. Wouldn't plus and minus zero be defined as equal? They have different bit patterns, so an integer comparison would give a not-equal result.
     
  14. CTho9305

    CTho9305 Elite Member

    Joined:
    Jul 26, 2000
    Messages:
    9,214
    Likes Received:
    0
    Will a correct implementation produce negative zero?
     
  15. smack Down

    smack Down Diamond Member

    Joined:
    Sep 10, 2005
    Messages:
    4,507
    Likes Received:
    0
    It is a valid floating point number, so a correct implementation can produce negative zero. It is also valid to turn any negative zero into a positive zero, so an implementation could choose to never produce negative zero, but it would still need the hardware to ensure -0 = +0.
     
  16. CTho9305

    CTho9305 Elite Member

    Joined:
    Jul 26, 2000
    Messages:
    9,214
    Likes Received:
    0
    I guess you have to special-case negative 0.
     
  17. Born2bwire

    Born2bwire Diamond Member

    Joined:
    Oct 28, 2005
    Messages:
    9,843
    Likes Received:
    0
    Not really. The IEEE floating point standard, which is what is generally used, requires the leading bit of the mantissa to be an implicit 1. That is, if the number to be represented in binary is 101.01001..., then the stored exponent is 129 (base 10) and the mantissa is stored as .0101001... The leading one is not written out in the mantissa because it is assumed to always be there. This way we save a digit and increase the precision of our numbers. That is a small detail I do not think anyone has stated here previously.

    Hence, there is no way to express zero with an implied leading 1. What they do is specify a special bit pattern to represent zero, just as they use special patterns to represent infinity and NaN (not a number), and I recall there is a third error value that comes up as well (IND, I think it means indefinite; it comes up for me if I multiply an infinite number by 0, I believe). Anyway, the point is, there is no positive or negative zero, just a special case that is 0.
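
    As a rough illustration of that implicit leading 1 (assuming IEEE 754 single precision and a normalized, positive value), you can rebuild a number by hand from the stored fields:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void)
    {
        float f = 5.25f;   /* 101.01 in binary = 1.0101 x 2^2 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);

        int exponent  = (int)((bits >> 23) & 0xFF) - 127;   /* remove the bias */
        uint32_t frac = bits & 0x7FFFFF;                     /* the 23 stored fraction bits */

        /* put the implied leading 1 back in front of the stored fraction */
        double mantissa = 1.0 + frac / (double)(1 << 23);

        printf("%f\n", ldexp(mantissa, exponent));   /* mantissa * 2^exponent = 5.250000 */
        return 0;
    }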
     
  18. smack Down

    smack Down Diamond Member

    Joined:
    Sep 10, 2005
    Messages:
    4,507
    Likes Received:
    0
    The IEEE standard defines two zeros. Positive zero is defined as all bits in the number being zero, and negative zero is defined as all zeros except that the sign bit is set. They are both special cases of zero defined by the IEEE standard.

    Having two zeros makes calculating the sign bit quicker. For multiplication it is always just the XOR of the two sign bits.
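
    A quick way to see both zeros (again assuming IEEE 754 single precision on the machine):

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        float pz = 0.0f, nz = -0.0f;
        uint32_t pb, nb;
        memcpy(&pb, &pz, sizeof pb);
        memcpy(&nb, &nz, sizeof nb);

        printf("+0 bits = 0x%08X, -0 bits = 0x%08X\n", (unsigned)pb, (unsigned)nb);
        /* prints: +0 bits = 0x00000000, -0 bits = 0x80000000 */
        printf("equal? %d\n", pz == nz);   /* 1: the two zeros compare equal */
        return 0;
    }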
     
  19. TuxDave

    TuxDave Lifer

    Joined:
    Oct 8, 2002
    Messages:
    10,576
    Likes Received:
    0
    Then again, there are cases of denormalized numbers where the leading one is not present because the number is too small. For example, assuming the exponent can only take values from +127 to -127, the number 0.00001 x 2^-127 will not have the leading one, but at the same time it will not be rounded to zero, since we still have enough precision to represent it. But you are correct that the typical implementation assumes a leading 1 and needs a special case to detect denormalized numbers like the one above.
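
    Here's a small sketch of that (assuming the platform supports subnormals and doesn't flush them to zero): dividing the smallest normalized float by 2 gives a denormalized number, and its exponent field reads as 0, which is the cue that the leading 1 is gone.

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <float.h>

    int main(void)
    {
        float denorm = FLT_MIN / 2.0f;   /* below the smallest normalized float */
        uint32_t bits;
        memcpy(&bits, &denorm, sizeof bits);

        /* a zero exponent field marks a denormalized number (no implicit leading 1) */
        printf("exponent field = %u\n", (unsigned)((bits >> 23) & 0xFF));   /* prints 0 */
        printf("value          = %g\n", denorm);                            /* about 5.88e-39 */
        return 0;
    }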