It has to do with the combination of both. 1.2 in a fixed-point can be exactly represented, or not, depending (usually, it is, because the fixed point number exists for handling things like money, which are typically in decimal).

The inexactness of 1.2 has to do with the base 2 representation, not the floatingness of the point. And most people use a base 2 representation, both for floats and for fixed point numbers.
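To make the base-2 vs. base-10 point concrete, here's a minimal Python sketch (my own illustration, not from any post in this thread) comparing a binary float, a decimal fixed-point value, and a binary Q16.16 fixed-point value:

```python
from decimal import Decimal

# Binary floating point: 1.2 is stored as the nearest representable base-2 fraction.
print(f"{1.2:.20f}")                  # 1.19999999999999995559...
print(0.1 + 1.1 == 1.2)               # False: different rounding on each side

# Decimal fixed point: 1.2 is just "120 hundredths", stored exactly.
price = Decimal("1.20")
print(price * 3 == Decimal("3.60"))   # True: exact, which is why money types use it

# Binary (Q16.16) fixed point: same base-2 problem as the float,
# because 0.2 is a repeating fraction in base 2.
q = round(1.2 * 2**16)                # scaled integer, 16 fractional bits
print(q / 2**16)                      # 1.1999969482421875, still not exact
```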
Lua, as well. I've had to fight it with both of them. FP numbers are great, but they just aren't ints, and a programming language really should have ints available.

Javascript, you were the chosen one!
They need significant digits, rather than X places before/after the decimal. That's really the difference. Integers of a given size have equal or better accuracy, for their value range, but that range always includes a ones digit as the least significant. Great for managing data structures, inventories, text values, and money, but bad for... really, most everything else.

Scientific computations that need a lot of accuracy do use floats.
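A rough sketch of that trade-off (my own example, emulating IEEE 754 single precision in Python via struct): a 32-bit float keeps ~24 significant bits wherever the value sits, while a 32-bit integer always resolves down to a ones digit.

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE 754 binary32 (single precision) value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# The float's absolute resolution shrinks as the magnitude grows...
print(f32(16_777_216.0) == f32(16_777_217.0))         # True: 2**24 + 1 is not representable
print(f32(1_000_000_000.0) == f32(1_000_000_030.0))   # True: spacing near 1e9 is 64
print(1_000_000_000 == 1_000_000_030)                 # False: the integers stay distinct

# ...but near zero it resolves far finer than any integer ones digit.
print(f32(1.23e-06))                                  # still carries ~7 significant digits
```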
And 1.2 in floating point can be represented, or not, depending. The thing both of those depend on is what the base is, not the fixedness of the radix point.

It has to do with the combination of both. 1.2 in a fixed-point can be exactly represented, or not, depending (usually, it is, because the fixed point number exists for handling things like money, which are typically in decimal).
There are all sorts of ways they fall on their face. (Especially if you try to implement them yourself, then you're doomed.)

It's when you have IIIII.FFF, but then want to move to III.FFFFF, for smaller sets of values, or IIIIIIII00000, for larger sets of values, that fixed point falls on its face, just like its integer ancestors, regardless of base.
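A toy illustration of that range problem (my own sketch, using base 10 for readability): a scaled-integer fixed-point format with an IIIII.FFF digit budget handles its intended range fine, but values that need more fractional or more integer digits fall outside it.

```python
# Toy decimal fixed point: values stored as integers scaled by 1000
# (i.e. 3 fractional digit slots, "IIIII.FFF"-style). Illustration only.
SCALE = 1000

def to_fixed(x: float) -> int:
    return round(x * SCALE)

print(to_fixed(12345.678) / SCALE)   # 12345.678   -- fits the intended range
print(to_fixed(0.0004) / SCALE)      # 0.0         -- too small; would need III.FFFFF
print(to_fixed(12345678.0))          # 12345678000 -- needs far more integer slots than IIIII allows
```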
It doesn't take thousands. 1/(a - b), if a and b are close -- oops!
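A quick sketch of that failure (my own example, emulating single precision via struct): one rounding on the input, one subtraction, and the reciprocal is already off by about 16%.

```python
import struct

def f32(x):
    """Round to the nearest IEEE 754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

a = f32(1.0000001)     # stored as 1 + 2**-23 ~= 1.00000011920928955
b = f32(1.0)

diff = f32(a - b)      # nearly all significant bits cancel
print(diff)            # 1.1920928955078125e-07 instead of 1e-07
print(f32(1.0 / diff)) # 8388608.0 instead of 10000000.0 -- ~16% error after one subtraction
```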
I think a point could be made that the worst that could happen is visual artifacts, and probably only for a few pixels, and probably only for a single frame. Pretty harmless.
DP and ECC address totally different types of errors, so it's strange to say one is useless without the other. There are certainly classes of problems that don't require ECC but simply aren't practical to solve using single precision. Not that it's been well adapted to GPUs yet, but linear programming, for example.
Both are irrelevant for gaming though yeah.
This algorithm returns a very very high number if a and b are close. Can you think of a case in a gaming graphics shader scenario where:
1) this is a useful calculation to perform?
and
2) it is very important to know exactly which very high number is returned?
In practice, fixed point numbers are still quite varied, though Decimal-like, BCD (just for its historical use), and bignum types dominate non-embedded use; while floating point basically means IEEE754 or a bastardization of it.

And 1.2 in floating point can be represented, or not, depending. The thing both of those depend on is what the base is, not the fixedness of the radix point.
I did just that, going through some of Project Euler (a fixed-width number with ANSI characters for digits, though variable total digits... the epitome of efficiency!). I never could get the hang of an efficient divide, but I was able to implement enough shortcuts to make it fast enough in non-JITed JS and Lua on a PIII that even the problems I didn't know the right clever tricks for only took some seconds.

(Especially if you try to implement them yourself, then you're doomed.)
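For context, a hand-rolled bignum along those lines is basically schoolbook arithmetic over a digit array; a minimal sketch of the addition case (my own illustration in Python, not the poster's JS/Lua code):

```python
def big_add(a: str, b: str) -> str:
    """Schoolbook addition of two non-negative decimal integers stored as digit strings."""
    a, b = a[::-1], b[::-1]                  # work least-significant digit first
    digits, carry = [], 0
    for i in range(max(len(a), len(b))):
        da = int(a[i]) if i < len(a) else 0
        db = int(b[i]) if i < len(b) else 0
        carry, d = divmod(da + db + carry, 10)
        digits.append(str(d))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(big_add("99999999999999999999", "1"))  # 100000000000000000000 -- no float rounding anywhere
```

Division is indeed the painful one; efficient long division over digit arrays is where most hand-rolled bignums bog down.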
It's not about "exactly". The number will be very wrong. In fact, the division doesn't matter for that; the error was introduced by the addition operation. Division just made it more noticeable.
good reply. i actually bought a 780 a few months ago (and i would've returned it, except i would have had nothing until i could've saved up $480 more for a titan) thinking it didn't have a fuse in it that cripples DP... that's just low class and not really necessary of nvidia to do that... i know it helped them make more money, but considering they won't open up their drivers much due to IP and that they have no competition (except maybe AMD, but AMD has always neglected image quality and extra features ever since R300 if not before then), it was rather amoral.
additionally, i am worried about price ceilings on processors of all kinds, and that would be more favorable to intel, nv, and amd than IP repeal. corruption goes up as power is more consolidated, and to get around that one needs to not be offered any IP, other regulations, or subsidies, and then these things could be designed, made, and shipped out of a place no larger than an 8-car garage. not saying that would work best for me if i was a businessman, but then i am not trying to be a businessman yet.
[mods: i don't mind this thread being closed now since i got good answers and may have just derailed it; although i wish i had the self-control to never make another DP thread again lol]
No. The number isn't very wrong after the addition/subtraction. If the base units for a and b are in miles, then the rounding after the addition/subtraction representing these distances in single precision floats will be in inches. In a given calculation where the units of the original a and b values being compared are significant, the error after a - b will not be significant, because it is many orders of magnitude smaller. The exponent will become so small that any addition, subtraction, or multiplication of the results will practically treat a - b as zero.
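To put rough numbers on that scale argument (my own back-of-the-envelope sketch, not from the post): the worst-case rounding when storing a mile-scale distance as an IEEE 754 single is half a unit in the last place, which works out to fractions of an inch up to a couple of inches.

```python
import math

INCHES_PER_MILE = 63_360
SIGNIFICAND_BITS = 24            # IEEE 754 binary32

# Worst-case rounding error when a distance (in miles) is stored in single precision
# is half a ULP at that magnitude: 2**(exponent - 24) miles.
for miles in (1.0, 10.0, 100.0, 1_000.0):
    exponent = math.floor(math.log2(miles))
    half_ulp_miles = 2.0 ** (exponent - SIGNIFICAND_BITS)
    print(f"{miles:>7.0f} mi stored as float32 -> worst-case rounding ~{half_ulp_miles * INCHES_PER_MILE:.3f} in")
```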
why is 32bit single precision? why not 16bit or something? does this have to do with IEEE setting some kind of standard?
Sounds like someone is trying to do some soul searching on whether they should buy a GTX Titan or a GTX 780ti.... double the vram with double precision, or more cores and better memory bandwidth (but half the vram) and higher "today" performance.
Using a 64-bit z-buffer (or 48Z/16S) would be an easy way out of most z-fighting errors, but it would be costly and I'm pretty sure no GPU currently supports the format. (although coming 'dx12?' parts might fix this.)
It would also mean quite a big increase in needed bandwidth.
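As a rough sense of scale (my own back-of-the-envelope numbers, with assumed resolution and overdraw, not from the post): doubling the per-sample depth size doubles the raw depth traffic.

```python
# Back-of-the-envelope depth-buffer bandwidth, ignoring compression, caching,
# and anything beyond plain depth reads/writes -- all big simplifications.
width, height, fps = 1920, 1080, 60
overdraw = 4                                  # assumed average depth accesses per pixel

for bytes_per_sample, label in ((4, "32-bit depth/stencil"), (8, "hypothetical 64-bit depth/stencil")):
    gb_per_s = width * height * overdraw * bytes_per_sample * fps / 1e9
    print(f"{label}: ~{gb_per_s:.1f} GB/s of raw depth traffic")
```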
The Titan and K6000 offer several times the DP performance of the GTX 780 Ti, yet offer about the same SP performance, and they use the same chip, with similar clocks and core counts. Why do you think it's not a "switch"?

There's no switch inside the GTX 780 chip that makes it slower at double precision math. That's like saying the GTX 580 is slower than the 780 because of a switch.
Buying a Titan won't make a difference because games are still programmed using 32-bit floats.
No, it's not. They even admit that much: they make the 780 Ti, Titan, K6000, K20, K20X, and K40 from the GK110. Every spec but DP performance matches that.

It's a completely different chip.
It's the same cost, being the same die. There may be added costs after cutting the die out for the Quadro and Tesla versions, but it's the same until then.

Die space isn't free
Sure they would, to make more money from that 1%, by serving them specially-tailored products using the same core designs as the cheaper consumer versions. It's common practice. What do you think makes a Xeon E3, compared to a Core i5? Same thing. They have a vested interest in potential customers of those not opting for gaming versions of the same chip, should they not need the entirety of the Quadro or Tesla feature set. The Titan itself was an anomaly.

and they aren't going to put all that R&D in designing a product that 99% of their customers won't use.
Furthermore, single precision only carries about 7 significant decimal digits of precision (a 24-bit significand, roughly one part in 10^7).
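A quick check of that digit count (my own example, emulating single precision via struct):

```python
import struct

def f32(x):
    """Round to the nearest IEEE 754 single-precision value."""
    return struct.unpack("f", struct.pack("f", x))[0]

# binary32: 24 significand bits -> about 7 significant decimal digits.
print(2 ** -23)                                    # machine epsilon, ~1.19e-07
print(f32(9_999_999.0) == f32(10_000_000.0))       # False: 7 digits are still distinguishable
print(f32(100_000_000.0) == f32(100_000_001.0))    # True: the 9th digit is already gone
```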
Cerb said: In practice, fixed point numbers are still quite varied, though Decimal-like, BCD (just for its historical use), and bignum types dominate non-embedded use; while floating point basically means IEEE754 or a bastardization of it.
It used to be 16-bit back in the day, like the Geforce 2 era, but 32-bit is more standardized.
Z-buffers use int, not float.
It's always been included with the likes of Decimal and currency types since way back, starting in CS classes ages ago, IME.

I've never before seen someone use the term fixed point by itself to mean anything other than a plain old integer with an implied decimal point at some base-2 bit position.
It's a completely different chip. Die space isn't free and they aren't going to put all that R&D in designing a product that 99% of their customers won't use.
Are you sure about this?
I don't believe the Titan and 780ti have completely different chips... I'm pretty sure it's the same foundation with different features enabled/disabled.
Financially it wouldn't make sense to design two entirely different GPUs that mirror each other in nearly every way possible.
In fact, there's no way Nvidia did this; it would cost them hundreds of millions of dollars.
Rather, they design one GPU with many features and have several variations fabbed for them.
The Titan and 780 Ti use the same chip. With the 780 Ti, they just enabled the previously disabled SMX, which now gives it another 192 cores and (I think) 19 more shaders, cut the VRAM in half, upped the clock speed thanks to a slightly new cooler (though it resembles the Titan cooler), added DX11.2 support, and reduced DP to 1/24 from 1/3.
For all intents and purposes, they could have just kept everything intact, enabled the extra SMX, added DX11.2 support, and called it a Titan 2. Though I believe they saw the Titan was too expensive and outside the reach of most gamers' wallets, so while cranking up the horsepower, they had to neuter it in some areas of less interest to a gamer to get the price down, and to still leave relevant value in the current Titan lineup for those still wanting to drop $1200 per card.
Which is why I said the Titan should never have been labelled as a gaming card, period, and just a value Quadro-series card. Though it'd have been a tough sell for those wanting to do compute on the Titan's non-ECC RAM (and if so, risky despite its DP performance), it'd still be great for media/video/modeling/graphics design, where ECC doesn't play as significant a role as the quantity of VRAM compared to Quadro-line pricing.