Understanding FP8/16/32/64

z.o.r.g

Junior Member
Jul 16, 2017
I need good source links (articles, journals, etc.). I'm self-taught on GPU functions, and especially since Volta came out I'm curious what it is and how it works. I've already learned IEEE 754 floating point, but I still don't get how it works in practice and what it's for: FP8/16/32/64 on GPUs, what each format is specifically for, and which programs benefit from each one. Also, what's the difference between Tesla and GTX cards, and what different resources do gaming and computing need on a GPU? Any sources for understanding programmable Tensor Cores and the new CUDA 9 (and how it differs from its predecessor) would be very helpful.

serpretetsky

Senior Member
Jan 7, 2012
What exactly do you not understand about floating point: how it's implemented, or why it exists in the first place? Do you know what limitations fixed-point arithmetic has?

FP8/16/32/64 just offer different precisions and different ranges of values. In some calculations you don't need much precision: when you measure a bookshelf, you probably don't care that it's 43.752", you just round up to 44". Similarly, you don't need a ruler that can measure anything from 0.001" up to 1000 miles; you grab a 10' measuring tape and that's good enough. The tradeoff is cost: FP32/64 arithmetic requires more computing hardware than FP8/FP16.
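
To make that concrete, here's a quick CPU-side sketch using NumPy, which exposes the same IEEE 754 float16/float32/float64 formats the GPU uses (FP8 has no standard NumPy type, so it's left out):

Code:
import numpy as np

# Precision: the same division carried out at three widths.
# More significand bits means the result lands closer to the true 1/3.
for dtype in (np.float16, np.float32, np.float64):
    third = dtype(1) / dtype(3)
    print(f"{dtype.__name__:8s} 1/3 = {float(third):.20f}")

# Range: each format also covers a different span of magnitudes,
# like picking between a 10' tape measure and a surveyor's wheel.
for dtype in (np.float16, np.float32, np.float64):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:8s} max = {info.max:.3e}  eps = {info.eps:.3e}")

float16 tops out around 6.5e4 and resolves roughly 3 decimal digits, while float64 reaches about 1.8e308 with close to 16 digits. That's the whole precision/range tradeoff in two loops.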