Do modern video cards calculate everything in 128-bit floating-point?

Discussion in 'Video Cards and Graphics' started by VirtualLarry, Aug 6, 2013.

  1. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    33,390
    Likes Received:
    36
    I was reading a specification document from AMD about my HD4850, and it claimed that everything was done in 128-bit floating point, which is a lot of precision.

    I use my GPU for distributed computing, where I assume that sort of precision is useful.

    Are modern (less power-hungry) GPUs just as high in precision, or more? Or have they gone more gaming-oriented, with lower FP precision?

    Is GCN considered 128-bit floating point? More? Less?
     
  2. SOFTengCOMPelec

    SOFTengCOMPelec Platinum Member

    Joined:
    May 9, 2013
    Messages:
    2,123
    Likes Received:
    1
    It goes up to 64-bit precision.

    Multiples of 64 bits (e.g. 128, 256) refer to floating-point operations where ONE instruction has multiple floating-point values operated on in ONE go (SIMD: single instruction, multiple data).
    Possibly other methods of parallelism too, BUT NOT precision >64 bits.
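
    To illustrate the SIMD idea, here is a rough C sketch using x86 SSE intrinsics (a PC example, NOT actual GPU code): one instruction operates on four 32-bit floats packed into one 128-bit register.

    Code:
        #include <stdio.h>
        #include <xmmintrin.h>  /* SSE: 128-bit registers holding 4 x 32-bit floats */

        int main(void)
        {
            /* Four single-precision floats packed into ONE 128-bit register each. */
            __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
            __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

            /* ONE instruction (addps) adds all four pairs in one go -
               128 bits of DATA, but still only 32-bit PRECISION per element. */
            __m128 sum = _mm_add_ps(a, b);

            float out[4];
            _mm_storeu_ps(out, sum);
            printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
            return 0;
        }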

    80-bit precision is available in the old x86/x87 mode (on x86 CPUs, usually NOT on graphics cards).

    No hardware support for floating-point PRECISION greater than 80 bits exists in the x86/PC graphics card world, as far as I am aware.

    Floating-point precision >80 bits is available in software (libraries and emulation, e.g. GMP) and/or on FPGAs.
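
    For example, a quick C sketch using the GMP library (software arithmetic, so much slower than hardware floats; link with -lgmp):

    Code:
        #include <stdio.h>
        #include <gmp.h>   /* GNU MP: arbitrary-precision arithmetic, done in software */

        int main(void)
        {
            mpf_t x;
            mpf_init2(x, 256);   /* ask for 256 bits of mantissa - beyond ANY x86/GPU hardware format */
            mpf_set_ui(x, 2);
            mpf_sqrt(x, x);      /* sqrt(2), computed to roughly 77 decimal digits */
            gmp_printf("%.70Ff\n", x);
            mpf_clear(x);
            return 0;
        }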

    More information, and confirmation

    I can't quickly find links about 128+ bit graphics processors, but when I researched CPUs (and probably graphics CPUs too) fairly extensively a while ago, it came up that it is NOT 128-bit floating-point PRECISION.
    I.e. the 128 bits split into smaller parts, which are processed in one go (vector operations or similar).
     
    #2 SOFTengCOMPelec, Aug 6, 2013
    Last edited: Aug 7, 2013
  3. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    33,390
    Likes Received:
    36
  4. Rok125

    Rok125 Junior Member

    Joined:
    Jul 13, 2012
    Messages:
    5
    Likes Received:
    0
    Source

    Hope this helps!
     
  5. SOFTengCOMPelec

    SOFTengCOMPelec Platinum Member

    Joined:
    May 9, 2013
    Messages:
    2,123
    Likes Received:
    1

    They are probably trying to word things in a way which is "arguably" technically correct (but NOT really), yet misleading to many people.

    128-bit SIMD units are quoted as "128-bit floating point" ALL over the place; I've seen it lots of times. But it is NOT referring to 128-bit PRECISION, sadly!

    If you want to counter this with links, please ideally use NON-AMD ones, as AMD is usually the source of these confusing claims in the first place.

    First 5 GHz CPUs, anyone?

    Source
     
  6. SOFTengCOMPelec

    SOFTengCOMPelec Platinum Member

    Joined:
    May 9, 2013
    Messages:
    2,123
    Likes Received:
    1
    But double precision is 64-bit.
     
  7. SOFTengCOMPelec

    SOFTengCOMPelec Platinum Member

    Joined:
    May 9, 2013
    Messages:
    2,123
    Likes Received:
    1
    There are places on the internet which perhaps explain it better than I have been explaining it.

    But I will have another quick go, in VERY rough detail.

    Graphics cards (normally) process pixels.

    A pixel can have multiple pieces of information defining it.

    E.g. RGB levels (Red, Green, Blue) (probably a bad example, as integers would do, I guess),
    HDR values,
    or whatever else you want associated with the pixel.

    To make it compute as fast as possible, this information is bunched together, making e.g. 128-bit floating-point "precision".

    But it actually consists of 4 lots of 32-bit single-precision floating-point numbers, defining the RGB intensity levels, or whatever else you are doing with your graphics card.

    The "128 bit floating point precision", seems to come into existence, because they chopped the fuller definition.

    Which would be something like ...

    "128 bit floating point precision, which consists of 4 single precision 32 bit floating point values, making up the complete pixel definition"

    So they dropped the latter words, in all likelihood.
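
    In C terms, it would be something like this (a hypothetical illustration on my part, NOT actual driver code):

    Code:
        #include <stdio.h>

        /* One "128-bit" pixel is really just four independent
           32-bit single-precision floats, side by side. */
        typedef struct {
            float r, g, b, a;   /* 4 x 32 bits = 128 bits in total */
        } Pixel128;

        int main(void)
        {
            printf("sizeof(Pixel128) = %zu bits\n", sizeof(Pixel128) * 8);  /* 128 */
            /* But each channel still only has ~7 decimal digits of precision. */
            return 0;
        }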

    There are a number of discussions about this, which can be linked here, if you want.

    But if you can prove/indicate that it really is 128-bit floating-point precision, please go ahead.

    EDIT: DISCLAIMER: I am NOT a graphics card programmer, and my examples may be TERRIBLE. But they are trying to explain a concept.
     
    #7 SOFTengCOMPelec, Aug 7, 2013
    Last edited: Aug 7, 2013
  8. ViRGE

    ViRGE Elite Member, Moderator Emeritus

    Joined:
    Oct 9, 1999
    Messages:
    31,078
    Likes Received:
    8
    They are, however, correct. The 128-bit number we've seen thrown around for the better part of 10 years now is exactly as you state: it's based on the ability to work with FP32-per-channel color, which in the standard RGBA format adds up to 128 bits per pixel.

    Note that this doesn't really have anything to do with SIMD (as someone else posted). RV770 was a VLIW5 architecture, meaning it actually processes up to 5 32-bit operations per SP block. So if this were based on the width of the execution units, you'd actually have a "160-bit" processor.
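
    A hypothetical C sketch of the difference (NOT real shader code): SIMD applies ONE operation to several values, while a VLIW5 bundle issues up to five independent operations side by side.

    Code:
        #include <stdio.h>

        /* Hypothetical sketch of one VLIW5 "bundle": up to five INDEPENDENT
           32-bit operations issue together (unlike SIMD, where one operation
           is applied to several values). 5 x 32 bits = 160 bits of width. */
        int main(void)
        {
            float x = 2.0f, y = 3.0f;

            /* One bundle: five unrelated single-precision operations. */
            float t0 = x + y;
            float t1 = x - y;
            float t2 = x * y;
            float t3 = x / y;
            float t4 = -x;

            printf("%g %g %g %g %g\n", t0, t1, t2, t3, t4);
            return 0;
        }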
     
  9. VirtualLarry

    VirtualLarry Lifer

    Joined:
    Aug 25, 2001
    Messages:
    33,390
    Likes Received:
    36
    So, AMD is lying, basically? BTW, the CPU in question does run at 5 GHz; I don't see the issue.
     
  10. SOFTengCOMPelec

    SOFTengCOMPelec Platinum Member

    Joined:
    May 9, 2013
    Messages:
    2,123
    Likes Received:
    1
    I've found a link which might help it make more sense.

    But I will try to flesh it out myself first, then bring in the link.

    The colour information (for the pixel) IS "128-bit floating point precision". (But this means 4 lots of 32-bit floating point, combined into one 128-bit register/memory location.)

    BUT the colour information consists of individual values, such as red, green, blue, etc., each of which is actually a 32-bit single-precision floating-point value.

    So the spec sheets say stuff like this (the final link is where all these quotes come from): first, that the combined colour information is 128-bit; then, that each piece of combined colour information (for a single pixel) is actually made up of four 32-bit components.

    I.e. each component is 32 bits of floating-point precision; when all 4 colour values/attributes are combined, they make a 128-bit floating-point value (which is precise to a maximum of 32 bits per component).

    All quotes are from this file

    Later it says much the same thing: the 4 individual (32-bit) RGB etc. attributes combine to make one 128-bit value.

    -------------------------------

    The 5 GHz controversy is because it IS (sort of) a 5 GHz processor (AMD's), but it does not normally (without overclocking) run at 5 GHz on ALL cores, because 5 GHz is the turbo-mode value rather than the all-cores-running value.


    -----------------------------

    Quick and nasty explanation (tl;dr):

    Immediately after the spec sheet says "128-bit precision", it says that this splits into 4 individual 32-bit values.

    Which I take to mean that the 128 bits are split into 4 lots of 32 bits.
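
    A quick C sketch to show what "precise to a maximum of 32 bits per component" means in practice (assuming IEEE-754 floats):

    Code:
        #include <stdio.h>

        int main(void)
        {
            /* IEEE-754 single precision has a 24-bit significand, so above
               2^24 = 16777216 a float can no longer hold every integer. */
            float f = 16777216.0f;
            printf("%.1f\n", f + 1.0f);   /* prints 16777216.0 - the +1 is lost */

            /* A 64-bit double copes fine; packing FOUR 32-bit floats into
               128 bits does NOT buy you this kind of extra precision. */
            double d = 16777216.0;
            printf("%.1f\n", d + 1.0);    /* prints 16777217.0 */
            return 0;
        }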
     
  11. Cerb

    Cerb Elite Member

    Joined:
    Aug 26, 2000
    Messages:
    17,409
    Likes Received:
    0
    15-bit color: 5b R, G, B
    16-bit color: 5b R, G, B, and 1b luminance
    24-bit color: 8b R, G, B
    32-bit color: 8b R, G, B, A
    128-bit color: 32b R, G, B, A

    Replace RGBA with whatever else is used by some given map.

    It's a lot like SIMD, but it defines API compatibility for a higher-level interface. Whether the underlying hardware actually does 128 bits at a time doesn't really matter, as long as the driver can work with calls based on packed 128-bit values going to and from the buffers. A rough C sketch of that packing idea follows (hypothetical layout, just for illustration).
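
    Code:
        #include <stdint.h>
        #include <stdio.h>

        /* 15-bit colour: three 5-bit channels packed into one 16-bit word. */
        static void unpack_rgb555(uint16_t p, unsigned *r, unsigned *g, unsigned *b)
        {
            *r = (p >> 10) & 0x1F;
            *g = (p >>  5) & 0x1F;
            *b =  p        & 0x1F;
        }

        int main(void)
        {
            unsigned r, g, b;
            unpack_rgb555(0x7FFF, &r, &g, &b);   /* every channel at maximum */
            printf("RGB555 white: %u %u %u\n", r, g, b);

            /* "128-bit colour": the buffer is simply 4 consecutive FP32 values
               per pixel; the API treats each 128-bit group as one pixel. */
            float framebuffer[2 * 4] = {     /* two pixels in RGBA order */
                1.0f, 0.0f, 0.0f, 1.0f,      /* opaque red */
                0.0f, 0.0f, 1.0f, 0.5f,      /* half-transparent blue */
            };
            printf("pixel 1 alpha = %.1f\n", framebuffer[1 * 4 + 3]);
            return 0;
        }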