
Decimal vs Binary Prefixes

kevinsbane

Senior member
Do you use a decimal megabyte (10^6 bytes), or a binary megabyte (2^20 bytes)? Why?

So for those who insist on binary prefixes, if Windows switched to a decimal kilobyte/megabyte/gigabyte/terabyte, would you still care to use traditional JEDEC binary prefixes for your bytes? (I was going to say bits and bytes, but no one uses binary prefixes for bits! A gigabit connection = 10^9 bits/second, SATA II = 3x10^9 bits/sec, etc.)

One more thing: the official binary prefixes, mebibyte, kibibyte, gibibyte; I think they sound terrible. If we switch to using a decimal megabyte, then when would we ever need to use a binary mebibyte?
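To make the gap between the two conventions concrete, here is a small Python sketch (the "500 GB" drive size is just an illustrative example, not from the thread):

```python
# Decimal (SI) vs binary interpretations of the same byte count.
MB = 10**6    # decimal megabyte
MiB = 2**20   # binary mebibyte

# A drive marketed as "500 GB" uses decimal gigabytes.
drive_bytes = 500 * 10**9

print(drive_bytes / 10**9)  # 500.0  decimal gigabytes
print(drive_bytes / 2**30)  # ~465.66 binary gibibytes (what Windows reports as "GB")
```

The ~7% discrepancy at the giga scale is exactly why people notice: the same number of bytes looks smaller when divided by 2^30 instead of 10^9.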
 
I am a programmer; my world is powers of 2. There are many, many things that only work properly, or work well, near those boundaries. The computer is fundamentally based on binary math, and that's what I need. To make things easy for users, they can throw out decimal megabytes and change the standard to mebibytes and other rubbish all they want, but it's not going to change how an expert uses the machine one tiny bit.

From a practical perspective, I don't think people without knowledge of computers understand it one way or the other anyway. I have family members who don't know the difference between RAM and drive space, so the finer points of decimal versus binary megabytes are completely irrelevant to them. But to me, it's a number in the format of the machine itself.
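A minimal illustration of the "things only work near power-of-two boundaries" point above: rounding a size up to an aligned boundary with bit masks only works when the boundary is a power of two (the 4096-byte page size here is a typical but illustrative choice, not from the thread):

```python
# Round n up to the next multiple of a power-of-two boundary.
# The bit trick relies on (boundary - 1) being an all-ones mask,
# which is only true when boundary is a power of two.
PAGE = 4096  # 2**12, a common page size

def align_up(n: int, boundary: int = PAGE) -> int:
    return (n + boundary - 1) & ~(boundary - 1)

print(align_up(5000))  # 8192
print(align_up(4096))  # 4096 (already aligned)
```

This is the kind of arithmetic that makes powers of two the natural unit for a programmer, regardless of which prefix convention the OS displays.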
 