
Understanding FP8/16/32/64

z.o.r.g

Junior Member
I'm looking for a good source link, article, journal, etc. I'm self-taught and learning about GPU functions on my own, especially since Volta came out, and I'm curious what they are and how they work. I've already learned IEEE 754 floating point, but I still don't get how FP8/16/32/64 work on a GPU and what each is for: which programs benefit from each precision? Also, what's the difference between Tesla and GTX cards, and what different resources does a GPU need for gaming versus computing? Any source for understanding programmable Tensor Cores and the new CUDA 9 (and how it differs from its predecessor) would be very helpful.
 
What exactly do you not understand about floating point — how it's implemented, or why it exists in the first place?

Do you understand why floating point exists at all? Do you know what limitations fixed-point arithmetic has?

FP8/16/32/64 all just allow different precisions and different ranges of values. In some calculations you might not need very much precision. When you measure a bookshelf you probably don't care that it is 43.752", you just round up to 44". Similarly, when you measure a bookshelf you probably don't need a ruler that can measure anywhere from 0.001" up to 1000 miles; you just grab a 10' measuring tape and that is good enough. The trade-off is that FP32/FP64 require more computing hardware (and memory bandwidth) than FP8/FP16.
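To make the precision trade-off concrete, here's a small sketch (assuming NumPy is available; note NumPy has no FP8 type, so only 16/32/64 are shown) that stores the same value at each precision and reports roughly how many decimal digits each format holds:

```python
import numpy as np

x = 1 / 3  # a value that has no exact binary representation

for dtype in (np.float16, np.float32, np.float64):
    v = dtype(x)                  # round x to this format's precision
    info = np.finfo(dtype)        # format metadata from NumPy
    print(f"{dtype.__name__}: {v!r}  "
          f"(~{info.precision} decimal digits, max ~{info.max:.3g})")
```

FP16 keeps only about 3 decimal digits of 1/3, FP32 about 6, and FP64 about 15 — the same idea as choosing a tape measure that's "good enough" for the bookshelf instead of a lab instrument.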
 