Old 08-06-2013, 10:43 PM   #1
VirtualLarry
Lifer
Join Date: Aug 2001
Posts: 26,400

Do modern video cards calculate everything in 128-bit floating-point?

I was reading a specification document from AMD about my HD4850, and it claimed that everything was done in 128-bit floating point. That is a lot of precision.

I use my GPU for distributed computing, where I assume that precision is useful.

Are modern (less power-hungry) GPUs just as high in precision, or more? Or have they gone more gaming-oriented, with lower FP precision?

Is GCN considered 128-bit floating point? More? Less?
__________________
Rig(s) not listed, because I change computers, like some people change their socks.
ATX is for poor people. And 'gamers.' - phucheneh
haswell is bulldozer... - aigomorla
"DON'T BUY INTEL, they will send secret signals down the internet, which
will considerably slow down your computer". - SOFTengCOMPelec
Old 08-06-2013, 11:03 PM   #2
SOFTengCOMPelec
Golden Member
Join Date: May 2013
Location: UK
Posts: 1,140

It's up to 64-bit precision.

Multiples of 64 bits (e.g. 128, 256) refer to floating point operations where one instruction operates on multiple floating point values in ONE go (SIMD, single instruction, multiple data).
Possibly other methods of parallelism too, BUT NOT precision >64 bits.

80 bits is available in the old x86/x87 mode (usually on x86 CPUs, NOT graphics cards).

No hardware support for floating point PRECISION greater than 80 bits exists in the x86/PC graphics card world, as far as I am aware.

Floating point precision >80 bits is available in software (libraries and emulation, e.g. GMP) and/or FPGAs.
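For example, here is a minimal sketch using GMP's mpf floating point type (my own illustration, not from any spec sheet; it assumes GMP is installed and you compile with -lgmp):

Code:
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpf_t x;
    mpf_init2(x, 256);           /* ask for roughly 256 bits of mantissa precision */
    mpf_set_ui(x, 2);
    mpf_sqrt(x, x);              /* sqrt(2), far more precise than any FP64/FP80 hardware result */
    gmp_printf("%.50Ff\n", x);   /* print 50 decimal digits */
    mpf_clear(x);
    return 0;
}

The extra precision is all done in software, so it is much slower than the hardware's native 32/64-bit floating point.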

More information, and confirmation:

Quote:
However, these processors do not operate on individual numbers that are 128 binary digits in length, only their registers have the size of 128-bits.
Quote:
where 128-bit vector registers are used to store several smaller numbers, such as four 32-bit floating-point numbers
Quote:
the x86 architecture supports 80-bit floating points that store and process 64-bit signed integers (-2^63 ... 2^63 - 1) accurately.
I can't quickly find links for 128+ bit graphics processors, but when I researched CPUs (and probably graphics processors) fairly extensively a while ago, it came up that it is NOT 128-bit floating point PRECISION.
I.e. the data splits into smaller parts, which are processed in one go (vector or similar).
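As a rough illustration of that idea on the CPU side (my own sketch, using SSE intrinsics, nothing to do with AMD's spec sheet): one 128-bit register holds four separate 32-bit floats, and a single instruction operates on all four at once.

Code:
#include <stdio.h>
#include <xmmintrin.h>   /* SSE: 128-bit registers holding four 32-bit floats */

int main(void)
{
    /* One 128-bit register = four independent 32-bit single precision values. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(40.0f, 30.0f, 20.0f, 10.0f);

    /* A single instruction (addps) adds all four lanes in one go (SIMD),
       NOT one 128-bit-precision addition. */
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}

Each lane is still only 32-bit precision; the "128 bits" is just the width of the register.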

Last edited by SOFTengCOMPelec; 08-07-2013 at 01:51 AM. Reason: More information link added
Old 08-07-2013, 05:42 PM   #3
VirtualLarry
Lifer
Join Date: Aug 2001
Posts: 26,400

http://www.amd.com/us/products/deskt...fications.aspx

Quote:
128-bit floating point precision for all operations
They clearly state 128-bit precision, not 128-bit FP vector registers.
__________________
Rig(s) not listed, because I change computers, like some people change their socks.
ATX is for poor people. And 'gamers.' - phucheneh
haswell is bulldozer... - aigomorla
"DON'T BUY INTEL, they will send secret signals down the internet, which
will considerably slow down your computer". - SOFTengCOMPelec
VirtualLarry is online now   Reply With Quote
Old 08-07-2013, 07:23 PM   #4
Rok125
Junior Member
 
Rok125's Avatar
 
Join Date: Jul 2012
Location: Atlanta, Georgia
Posts: 5
Default

Quote:
Double precision is fairly common on newer GPUs. For instance I own a NVIDIA GTX560 Ti (fairly low end when it comes to computing) that has no issue running ViennaCL in double precision. From here (section 4) it appears all NVIDIA cards from GTX4xx onward support double precision natively.
Source

Hope this helps!
Old 08-07-2013, 07:53 PM   #5
SOFTengCOMPelec
Golden Member
Join Date: May 2013
Location: UK
Posts: 1,140

Quote:
Originally Posted by VirtualLarry View Post
http://www.amd.com/us/products/deskt...fications.aspx


They clearly state 128-bit precision, not 128-bit FP vector registers.

They are probably trying to write things in a way which is "arguably" technically correct (but NOT really), yet misleading to many people.

128-bit SIMD is quoted as "128-bit floating point" ALL over the place; I've seen it lots of times. But it is NOT referring to 128-bit PRECISION, sadly!

If you want to counter this with links, ideally please use NON-AMD ones, as AMD are usually the source of these confusing claims in the first place.

1st 5 GHz CPUs, anyone?

Quote:
Quadruple-precision (128-bit) hardware implementation should not be confused with "128-bit FPUs" that implement SIMD instructions, such as Streaming SIMD Extensions or AltiVec, which refers to 128-bit vectors of four 32-bit single-precision or two 64-bit double-precision values that are operated on simultaneously.
Source
Old 08-07-2013, 07:53 PM   #6
SOFTengCOMPelec
Golden Member
Join Date: May 2013
Location: UK
Posts: 1,140

Quote:
Originally Posted by Rok125 View Post
Source

Hope this helps!
But double precision is 64-bit, not 128.
Old 08-07-2013, 08:21 PM   #7
SOFTengCOMPelec
Golden Member
Join Date: May 2013
Location: UK
Posts: 1,140

Quote:
Originally Posted by VirtualLarry View Post
http://www.amd.com/us/products/deskt...fications.aspx


They clearly state 128-bit precision, not 128-bit FP vector registers.
There are places on the internet which explain it perhaps better than I have been explaining it.

But I will have a quick go, again.
In VERY rough detail.

Graphics cards (normally) process pixels.

A pixel can have multiple pieces of information defining it.

E.g. RGB levels (Red, Green, Blue) (probably a bad example, as integers would do, I guess), HDR, or whatever you want associated with the pixel.

To make it compute as fast as possible, this information is bunched together, to make e.g. 128-bit floating point "precision".

But it actually consists of 4 lots of 32-bit single precision floating point numbers, defining the RGB intensity levels, or whatever you are doing with your graphics card.

The "128-bit floating point precision" seems to come into existence because they chopped down the fuller definition.

Which would be something like ...

"128-bit floating point precision, which consists of 4 single precision 32-bit floating point values, making up the complete pixel definition"

So they dropped the later words, in all likelihood.

There are a number of discussions about this, which can be linked here, if you want.

But if you can prove/indicate that it really is 128-bit floating point precision, please go ahead.

EDIT: DISCLAIMER: I AM NOT a graphics card programmer; my examples may be TERRIBLE. But they are trying to explain a concept.
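A minimal sketch of that idea in C (my own illustration; the struct name PixelRGBA32F is made up): four 32-bit floats side by side occupy 128 bits per pixel, but each channel is still only FP32 precision.

Code:
#include <assert.h>
#include <stdio.h>

/* One pixel in a floating point render target: four 32-bit single precision
   components. Together they take up 128 bits, but each channel on its own
   is still only FP32. */
typedef struct {
    float r, g, b, a;
} PixelRGBA32F;

int main(void)
{
    assert(sizeof(PixelRGBA32F) * 8 == 128);   /* 4 x 32 bits = 128 bits */

    PixelRGBA32F p = { 0.25f, 0.5f, 1.0f, 1.0f };
    printf("pixel is %zu bits wide, each channel is %zu bits\n",
           sizeof(p) * 8, sizeof(p.r) * 8);
    return 0;
}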

Last edited by SOFTengCOMPelec; 08-07-2013 at 08:43 PM. Reason: Disclaimer
Old 08-08-2013, 11:25 AM   #8
ViRGE
Super Moderator
Elite Member
Join Date: Oct 1999
Posts: 30,267

Quote:
Originally Posted by SOFTengCOMPelec View Post
EDIT: DISCLAIMER: I AM NOT a graphics card programmer, my examples may be TERRIBLE. But they are trying to explain a concept.
They are, however, correct. The 128-bit number we've seen thrown around for the better part of 10 years now is exactly as you state: it's based on the ability to work with FP32-per-channel color, which in standard RGBA format adds up to 128 bits per pixel.

Note that this doesn't really have anything to do with SIMDs (as someone else posted). RV770 was a VLIW5 architecture, meaning it actually processes up to five 32-bit operations per SP block. So if this were based on the width of the execution units, you'd actually have a "160-bit" processor.
__________________
ViRGE
Team Anandtech: Assimilating a computer near you!
GameStop - An upscale specialized pawnshop that happens to sell new games on the side
Todd the Wraith: On Fruit Bowls - I hope they prove [to be] as delicious as the farmers who grew them
Old 08-08-2013, 05:04 PM   #9
VirtualLarry
Lifer
Join Date: Aug 2001
Posts: 26,400

Quote:
Originally Posted by SOFTengCOMPelec View Post
If you want to counterclaim this via links, ideally please use NON-AMD ones, as AMD are usually the source of these confusing ones in the first place.

1st 5GHz cpus anyone ?
So, AMD is lying, basically? BTW, the CPU in question does run at 5 GHz; I don't see the issue.
__________________
Rig(s) not listed, because I change computers, like some people change their socks.
ATX is for poor people. And 'gamers.' - phucheneh
haswell is bulldozer... - aigomorla
"DON'T BUY INTEL, they will send secret signals down the internet, which
will considerably slow down your computer". - SOFTengCOMPelec
VirtualLarry is online now   Reply With Quote
Old 08-08-2013, 05:40 PM   #10
SOFTengCOMPelec
Golden Member
 
SOFTengCOMPelec's Avatar
 
Join Date: May 2013
Location: UK
Posts: 1,140
Default

Quote:
Originally Posted by VirtualLarry View Post
So, AMD is lying, basically? Btw, the CPU in question does run at 5Ghz, I don't see the issue.
I've found a link which might help it make more sense.

But I will try to pad it out myself, then bring in the link.

The colour information (for the pixel) IS "128-bit floating point precision" (but this means 4 lots of 32-bit floating point, combined into one 128-bit register/memory location).

BUT the colour information consists of individual values, such as red, green, blue etc., each of which is actually a 32-bit single precision floating point value.

So the spec sheets say stuff like (the link at the end is where all these quotes come from):

The combined colour information is:
Quote:
128-bit IEEE floating-point precision graphics pipeline
But the combined colour information (for a single pixel) is actually made up of:
Quote:
32-bit floating point color precision per component
I.e. each component is 32 bits of floating point precision; when all 4 colour values/attributes are combined, they make a 128-bit value (which is precise to a maximum of 32 bits per component).

All quotes are from this file

Later it says:

Quote:
128-bit color precision
Because the 4 individual (32-bit) RGB etc. attributes combine to make a 128-bit value.

-------------------------------

The 5 GHz controversy is because it IS (sort of) a 5 GHz processor (AMD), but it does not normally (without overclocking) run at 5 GHz on ALL cores, because 5 GHz is the turbo mode value rather than the all-cores value.


-----------------------------

Quick and nasty explanation: tl;dr

Immediately after the spec sheet says "128-bit precision", it says that this splits into 4 individual 32-bit values.

Quote:
128-bit IEEE floating-point precision graphics pipeline
32-bit floating point color precision per component
Which I take to mean that the 128 bits are split into 4 lots of 32 bits.
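A quick way to see what "32 bits per component" means for precision (again my own sketch, not from the spec sheet): FP32 has a 24-bit significand, so it cannot even tell 16777216 and 16777217 apart, whereas FP64 can. True quadruple (128-bit) precision would go far beyond either.

Code:
#include <stdio.h>

int main(void)
{
    /* FP32 has a 24-bit significand, so 16777217 (2^24 + 1) is not
       representable and rounds back to 16777216. FP64 handles it fine. */
    float  f = 16777216.0f + 1.0f;
    double d = 16777216.0  + 1.0;

    printf("float : %.1f\n", f);   /* prints 16777216.0 */
    printf("double: %.1f\n", d);   /* prints 16777217.0 */
    return 0;
}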
Old 08-08-2013, 06:14 PM   #11
Cerb
Elite Member
Join Date: Aug 2000
Posts: 15,961

15-bit color: 5b R, G, B
16-bit color: 5b R, 6b G, 5b B (or 5b R, G, B plus 1b alpha)
24-bit color: 8b R, G, B
32-bit color: 8b R, G, B, A
128-bit color: 32b R, G, B, A

Replace RGBA with whatever else is used by some given map.

It's a lot like SIMD, but defines a higher-level interface's API compatibility. Whether the underlying hardware actually does 128 bits at a time doesn't really matter, as long as the driver can work with calls based on packed 128-bit values to and from the buffers.
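For instance, a rough sketch of that (my own illustration; the helper name rgba8_to_rgba32f is made up): unpacking an ordinary packed 32-bit color into the 128-bit, four-float form that an API might hand around, regardless of what the hardware does underneath.

Code:
#include <stdint.h>
#include <stdio.h>

/* Unpack a 32-bit color (8 bits per R, G, B, A channel) into a "128-bit"
   color: one 32-bit float per channel, in the 0.0 to 1.0 range. */
static void rgba8_to_rgba32f(uint32_t packed, float out[4])
{
    out[0] = ((packed >> 24) & 0xFF) / 255.0f;  /* R */
    out[1] = ((packed >> 16) & 0xFF) / 255.0f;  /* G */
    out[2] = ((packed >>  8) & 0xFF) / 255.0f;  /* B */
    out[3] = ( packed        & 0xFF) / 255.0f;  /* A */
}

int main(void)
{
    float c[4];
    rgba8_to_rgba32f(0xFF8000FFu, c);   /* R=255, G=128, B=0, A=255 */
    printf("R=%.3f G=%.3f B=%.3f A=%.3f\n", c[0], c[1], c[2], c[3]);
    return 0;
}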
__________________
Quote:
Originally Posted by Crono View Post
I'm 90% certain the hipster movement was started by aliens from another galaxy who have an exaggerated interpretation of earth culture(s).