Not really on topic, but when thinking about hexadecimal and the powers of 2, the most disgusting thing is the MiB vs. MB business. It really hurts my eyes and my technical heart when I see MiB in an article where MB is the proper way. Byte counts are powers of 2, therefore M is a power of 2.
The prefix M, or mega, is an SI prefix meaning 10^6. The colloquial use of M or mega to mean 2^20 came after the SI prefix was established. I guess this might be "disgusting" to you as it lacks new-age flakery and exceeds a code munkey's understanding.
Therein lies the issue. My take on it is that M means mega which is a million (bytes), end of story. The prefix of course is defined for the decimal system as 10^6. But everybody who understands binary logic knows that M here means 2^20.
Therein lies the issue. My take on it is that M means mega which is a million (bytes), end of story.
What are you driving at? That means that before the whole issue became apparent and widespread in the news a few years ago, your computer did not have 1024*1024 bytes but 1000*1000 bytes? Or that your old home computer only had 8,000 bytes instead of 8,192 bytes?
SI prefixes were initially used for binary quantities only as a shortcut of convenience when measuring RAM size (because RAM, by its nature, tended to come in power-of-2 sizes, due to manufacturing techniques). At the time this was a satisfactory shortcut, as taking 'k' to mean 1024 was only 2.4% inaccurate - which was tolerated as an acceptable deviation.
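For what it's worth, the 2.4% figure falls straight out of the ratio; a quick sketch (Python, purely illustrative arithmetic):

    # Deviation introduced by reading the SI prefix 'k' (10^3) as 2^10.
    binary_k = 2 ** 10      # 1024
    decimal_k = 10 ** 3     # 1000
    print(f"{(binary_k - decimal_k) / decimal_k:.1%}")   # 2.4%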
It's worth noting that for communications channels (e.g. serial, parallel, modems, networking) the binary equivalents have never been used. Measures such as 'baud', bps, etc. have always used the SI prefixes to mean decimal numbers - e.g. a 14.4 kbps modem really did offer 14400 bits per second.
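As a side note, the quoted line rate and the usable byte rate are different things once framing is counted. A rough sketch, assuming plain asynchronous 8-N-1 framing (1 start + 8 data + 1 stop = 10 line bits per byte); the framing choice is an assumption for illustration, not something stated above:

    line_rate_bps = 14_400        # nominal modem rate, decimal SI 'k'
    bits_per_byte_on_wire = 10    # assumed 8-N-1 async framing
    print(line_rate_bps / bits_per_byte_on_wire)   # 1440.0 payload bytes/s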
Similarly, early mass storage, such as magnetic tapes and early floppy disks, also only ever used decimal numbers (at least for 'unformatted' capacity). E.g. tapes were marketed as 10 Mbits per reel - meaning 10 megabits, some of which would be used for sync/sector markers, etc., giving a reduced formatted capacity.
Once floppy disks and hard disks started to become mainstream, everything went crazy, with every company interpreting the units in a different way (driven in part by software reporting capacities in binary units). I've seen hard drives with capacity stated in MB where MB has been defined as 1,048,576; 1,024,000; and 1,000,000 bytes.
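To make that concrete, here is a hypothetical example (the drive size is made up for illustration; only the three divisors come from the post above) showing how the very same drive gets three different "MB" figures:

    # Hypothetical 540,000,000-byte drive, reported under the three
    # different "MB" definitions mentioned above.
    capacity_bytes = 540_000_000
    for label, divisor in [
        ("MB = 1,048,576 (2^20)",      1_048_576),
        ("MB = 1,024,000 (1024*1000)", 1_024_000),
        ("MB = 1,000,000 (10^6)",      1_000_000),
    ]:
        print(f"{label}: {capacity_bytes / divisor:.1f} MB")
    # About 515.0, 527.3 and 540.0 "MB" for the same disk.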
Now that we have GB and TB capacities, the discrepancy between binary and decimal units is becoming intolerable (nearly 10% for TB/TiB) - hence the effort made to try to restandardise the units.
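The growth of that gap is easy to see by sweeping the prefixes; a minimal sketch (just arithmetic):

    # Binary vs. decimal prefix discrepancy, kilo through tera.
    for n, prefix in enumerate(["k", "M", "G", "T"], start=1):
        discrepancy = 2 ** (10 * n) / 10 ** (3 * n) - 1
        print(f"{prefix}: {discrepancy:.2%}")
    # k: 2.40%   M: 4.86%   G: 7.37%   T: 9.95%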
I agree fully. Because of the confusion created in the past between people always thinking in decimal and people thinking sometimes binary, sometimes decimal, a separate system has now been created where there is no longer any possibility of confusion. But even then, confusion will always arise when care is not taken. Because serial data is physically easier to transfer than parallel, techniques such as error correction, encoding, packets and protocol layers will always use up some of the bandwidth. Snip...
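Since that point is worth quantifying: a minimal sketch of how line encoding and packet overhead stack multiplicatively to eat into a raw serial line rate. The 1 Gbit/s figure, the 8b/10b-style encoding and the packet geometry below are assumed examples for illustration, not taken from the posts above:

    # How layered overheads eat a raw serial line rate (illustrative numbers).
    raw_line_rate_bps = 1_000_000_000       # assumed 1 Gbit/s raw line rate
    encoding_efficiency = 8 / 10            # assumed 8b/10b-style line encoding
    payload, header = 1500, 40              # assumed packet geometry, in bytes
    packet_efficiency = payload / (payload + header)
    usable_bps = raw_line_rate_bps * encoding_efficiency * packet_efficiency
    print(f"{usable_bps / 1e6:.0f} Mbit/s of payload")   # 779 Mbit/s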
Not sure I understand, or agree with you. Serializing data requires ECC, packetization (e.g. TCP/IP) and more, whereas parallel data transfer, although it has multiple bit lines, requires only a clock to synchronize the transfer. 8-bit parallel can transmit one byte in one cycle, vastly simpler than serial. This is why, perhaps, CPU-to-memory buses are always parallel. Mark
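A back-of-the-envelope sketch of Mark's point, counting clock cycles to move a payload over an 8-bit parallel bus versus a single serial line (the payload size is an arbitrary assumption; serial framing/encoding overhead is ignored here, which would only widen the gap):

    # Cycles to move a payload: 8-bit parallel bus (one byte per clock)
    # vs. a single serial line (one bit per clock), overheads ignored.
    payload_bytes = 4096                    # arbitrary example size
    parallel_cycles = payload_bytes         # 1 byte per cycle
    serial_cycles = payload_bytes * 8       # 1 bit per cycle
    print(parallel_cycles, serial_cycles)   # 4096 vs 32768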