Posted by surprisetalk 3 days ago
So the "sane" options would be either not using SI prefixes for digital, or, as was chosen, changing the colloquial prefixes in the digital world. The former would have been easier in the short term.
But really!?
I'll keep calling it in nice round powers of two, thank you very much.
I would argue fruit and fruit are two words, one created semasiologically and the other created onomasiologically. Had we chosen a different pronunciation for one of those words, there would be no confusion about what fruits are.
[0] - https://en.wikipedia.org/wiki/Fruit#Botanical_vs._culinary
This is such a basic and universal part of language, it is a mystery to me why something so transparently clueless as "actually, tomato is a fruit" persists.
Or...
Knowledge is understanding that ketchup is tomato jelly. Wisdom is refraining from putting it on your peanut butter and jelly sandwich.
How is it a jelly? It lacks any defining feature of jelly.
Ketchup has essentially all the key defining features of a jelly, technically, just is more fibrous / opaque and savoury than most typical jellies.
But, of course, calling ketchup "jelly" on the strength of such technical arguments is exactly as dumb as saying "ayktually, tomato is a fruit": both are utterly clueless about how these words are actually used in culinary contexts.
All they had to say was that KiB et al. were introduced in 1998, and that adoption has been slow.
And not “but a kilobyte can be 1000,” as if it’s an effort issue.
In my mind base 10 only became relevant when disk drive manufacturers came up with disks with "weird" disk sizes (maybe they needed to reserve some space for internals, or it's just that the disk platters didn't like powers of two) and realised that a base 10 system gave them better looking marketing numbers. Who wants a 2.9TB drive when you can get a 3TB* drive for the same price?
Three binary terabytes i.e. 3 * 2^40 is 3298534883328, or 298534883328 more bytes than 3 decimal terabytes. The latter is 298.5 decimal gigabytes, or 278 binary gigabytes.
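In Python, for anyone who wants to check the arithmetic:

```python
# Sanity-checking the binary-vs-decimal terabyte arithmetic above.
TIB = 2**40            # one binary terabyte (TiB)
TB = 10**12            # one decimal terabyte

three_tib = 3 * TIB    # 3298534883328 bytes
diff = three_tib - 3 * TB

print(three_tib)       # 3298534883328
print(diff)            # 298534883328
print(diff / 10**9)    # ~298.5 decimal gigabytes
print(diff / 2**30)    # ~278 binary gigabytes
```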
Indeed, early hard drives had slightly more than even the binary size --- the famous 10MB IBM disk, for example, had 10653696 bytes, which was 167936 bytes more than 10MB --- more than an entire 160KB floppy's worth of data.
That is to say, all the (high-end/“gamer”) consumer SSDs that I’ve checked use 10% overprovisioning and achieve that by exposing a given number of binary TB of physical flash (e.g. a “2TB” SSD will have 2×1024⁴ bytes’ worth of flash chips) as the same number of decimal TB of logical addresses (e.g. that same SSD will appear to the OS as 2×1000⁴ bytes of storage space). And this makes sense: you want a round number on your sticker to make the marketing people happy, you aren’t going to make non-binary-sized chips, and 10% overprovisioning is OK-ish (in reality, probably too low, but consumers don’t shop based on the endurance metrics even if they should).
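A quick back-of-the-envelope check of that figure, using the "2TB" example above:

```python
# 2 TiB of physical flash exposed as 2 TB of logical addresses.
physical = 2 * 1024**4    # 2199023255552 bytes of flash chips
logical = 2 * 1000**4     # 2000000000000 bytes visible to the OS

spare_fraction = (physical - logical) / logical
print(f"{spare_fraction:.2%}")   # ~9.95%, i.e. roughly 10% overprovisioning
```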
It's been well over a decade now and neither I nor anyone I know has ever had an SSD endurance issue. So it seems like the type of problem where you should just go enterprise if you have it.
TLC flash actually has a total number of bits that's a multiple of 3, but it and QLC are so unreliable that there's a significant amount of extra bits used for error correction and such.
SSDs haven't been real binary sizes since the early days of SLC flash which didn't need more than basic ECC. (I have an old 16MB USB drive, which actually has a user-accessible capacity of 16,777,216 bytes. The NAND flash itself actually stores 17,301,504 bytes.)
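For what it's worth, those two numbers line up exactly with the classic small-page NAND layout (512 data bytes plus 16 spare bytes for ECC/metadata per page). That layout is an assumption about this particular drive, but the arithmetic is suggestive:

```python
# Assuming classic small-page NAND: 512 data bytes + 16 spare bytes per page.
user_bytes = 16 * 1024**2        # 16777216 user-accessible bytes
pages = user_bytes // 512        # 32768 pages
raw_bytes = pages * (512 + 16)   # physical bytes including spare area

print(raw_bytes)                 # 17301504, matching the figure above
```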
They communicate via the network, right? And telephony has always been in base 10 bits as opposed to base two eight bit bytes IIUC. So these two schemes have always been in tension.
So at some point the Ki, Mi, etc prefixes were introduced along with b vs B suffixes and that solved the issue 3+ decades ago so why is this on the HN front page?!
A better question might be, why do we privilege the 8 bit byte? Shouldn't KiB officially have a subscript 8 on the end?
I found some search results about Texas Instruments' digital signal processors using 16-bit bytes, and came across this blogpost from 2017 talking about implementing 16-bit bytes in LLVM: https://embecosm.com/2017/04/18/non-8-bit-char-support-in-cl.... Not sure if they actually implemented it, but it was surprising to me that non-octet bytes still exist, albeit in a very limited manner.
Do you know of any other uses for bytes that are not 8 bits?
For "bytes" as the term-of-art itself? Probably not. For "codes" or "words"? 5 bits are the standard in Baudot transmission (in teletype though). 6- and 7-bit words were the standards of the day for very old computers (ASCII is in itself a 7-bit code), especially on DEC-produced ones (https://rabbit.eng.miami.edu/info/decchars.html).
NXP makes a number of audio DSPs with a native 24 bit width.
Microchip still ships chips in the PIC family with instructions of various widths, including 12 and 14 bits; however, I believe the data memory on those chips is either 8 or 16 bits. I have no idea how to classify a machine where the instruction and data memory widths don't match.
Unlike POSIX, C merely requires that char be at least 8 bits wide. Although I assume lots of real world code would break if challenged on that particular detail.
First, you implicitly assumed a decimal number base in your comment.
Second: of course it's meaningful. It's also relevant, since humans use binary computers and numeric input and output in text is almost always decimal.
Okay, but what do you mean by “10”?
A little late to lawyer that...
Elsewhere you write
> They are definitely denying the importance of 2-fold partitioning in computing architectures.
No, they definitely aren't. There are no words in the article that deny anything at all.
Before the patent on Phillips screws & tools expired, Pozidriv was launched which was different enough to be capable of a bit more torque.
Phillips was for mass production, Pozidriv for mass production with a little more torque.
Lots of people who wanted that still waited until the Pozidriv patent expired before considering it.
The screws themselves are marked on the head with little ticks so you can tell the difference, but not necessarily the screwdrivers :\
It's good to have the right tool for the job; HP instruments used Pozidriv in a number of places.
Also, if you open major Linux distros' task managers, you'll be surprised to see that they often show decimal units when the "i" is missing from the prefix. Many utilities avoid the confusing prefixes "KB", "MB"... and use "KiB", "MiB"...
Why do you keep insisting the author is denying something when the author clearly acknowledges every single thing you're complaining about?
So please don't mischaracterize articles in the future simply because you disagree with their conclusions. That's misrepresentation, and essentially straight-up lying.
It's really not all that crazy of a situation. What bothers me is when some applications call KiB KB, because they are old or lazy.
I keep using "K" for kilobyte because it makes the children angry since they lack the ability to judge meaning from context.
It should be "kelvin" here. ;)
Unit names are always lower-case[1] (watt, joule, newton, pascal, hertz), except at the start of a sentence. When referring to the scientists the names are capitalized of course, and the unit symbols are also capitalized (W, J, N, Pa, Hz).
[1] SI Brochure, Section 5.3 "Unit Names" https://www.bipm.org/documents/20126/41483022/SI-Brochure-9-...
I think the author had it just right. There's a lot of inertia, but the traditional way can cause confusion.
* Yeah, I read the article. Regardless of the IEC's noble attempt, in all my years of working with people and computers I've never heard anyone actually pronounce MiB (or write it out in full) as "mebibyte".
Sectors per track or tracks per side is subject to change. Moreover a different filesystem may have non-linear growth of the MFT/superblock that'll have a different overhead.
It's even more of a downer when there's a complete failure to make further sense like that, but I'll try to do something.
Of course one chart does not an expert make, and I don't understand half of it, but at least I've worked with 3.5" floppies since they first came out.
3.5" floppies are "soft sectored" media, and usually the drives were capable of handling non-standard arrangements too. What made non-standard numbers of sectors uncommon was that it would require software most people were not using. DOS and Windows simply prepared virgin magnetic media with 2880 sectors, or reformatted them that way, and that was about it.
PCs were already popular when the 3.5" size came out, and most of the time they were not virgin magnetic media; they were purchased pre-formatted with 2880 sectors (of 512 bytes per sector) already on the entire floppy, of which fewer sectors were available for user data because a number of sectors are used up by the FAT filesystem overhead.
On the chart you see the 1440kb designation since each sector is considered 1/2 "kilobyte".
512 bytes is pretty close to half a kilobyte ain't it?
(The oddball 1680kb and 1720kb were slightly higher-density sectors, with more of them squeezed into the same size media, most people couldn't easily copy them without using an alternative to DOS or Windows. Sometimes used for games or installation media.)
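The sector math, for anyone playing along at home (the 3360 and 3440 sector counts below are back-computed from the stated 1680kb and 1720kb capacities):

```python
# DOS-style floppy capacities: 512-byte sectors, each counted as
# half a "kilobyte" (KiB).
SECTOR = 512

def capacity_kib(sectors):
    return sectors * SECTOR // 1024

print(capacity_kib(2880))   # 1440 -> the standard format
print(capacity_kib(3360))   # 1680 -> oversized format
print(capacity_kib(3440))   # 1720 -> oversized format
```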
With Windows when partitioning your drive if you want a 64 GB volume you would likely choose 64000 MB in either the GUI or Diskpart. Each of these GB is exactly 2880000 sectors for some reason ;)
But that's the size of the whole physical partition whether it contains only zeros or a file system. Then when you format it the NTFS filesystem has its own overhead.
It doesn't matter. "kilo" means 1000. People are free to use it wrong if they wish.
“Kilo” can mean what we want in different contexts and it’s really no more or less correct as long as both parties understand and are consistent in their usage to each other.
It's also stupid because it's rare that anyone outside of programming even needs to care exactly how many bytes something is. At the scales at which each of kilobyte, megabyte, gigabyte, terabyte etc. are used, the smaller values are pretty much insignificant details.
If you ask for a kilogram of rice, you probably care more that this 1kg of rice is the same as the last 1kg of rice you got; you probably wouldn't even care how many grams that is. Similarly, if you order 1 ton of rice, do you care exactly how many grams it is, or do you just care that this 1 ton is the same as that 1 ton?
This whole stupidity started because hard disk manufacturers wanted to make their drives sound bigger than they actually were. At the time, everybody buying hard disks knew about this deception and just put up with it. We'd buy their 2GB drive and think to ourselves, "OK so we have 1.86 real GB". And that was the end of it.
Can you just imagine if manufacturers started advertising computers as having 34.3GB of RAM? Everybody would know it was nonsense and call it 32GB anyway.
Before all that nonsense, it was crystal clear: a megabyte in storage was unambiguously 1024 x 1024 bytes --- with the exception of crooked mass storage manufacturers.
There was some confusion, to be sure, but the partial success of the attempt to redefine the prefixes to their power-of-ten meanings has caused more confusion.
We agree on meaning so we can communicate and progress without endless debate and confusion.
SI is pretty clear for a reason.
We decidedly do not do that. There's a whole term for new terms that arbitrarily get injected or redefined by new people: "slang". I don't understand a lot of the terms teenagers say now, because there's lots of slang that I don't know because I don't use TikTok and I'm thirty-something without kids so I don't hang out with teenagers.
I'm sure it was the same when I was a teenager, and I suspect this has been going on since antiquity.
New terms are made up all the time, but there's plenty of times existing words get redefined. An easy one, I say "cool" all the time, but generally I'm not talking about temperature when I say it. If I said "cool" to refer to something that I like in 1920's America, they would say that's not the correct use of the word.
SI units are useful, but ultimately colloquialisms exist and will always exist. If I say kilobyte and mean 1024 bytes, and if the person on the other end knows that I mean 1024 bytes, that's fine and I don't think it's "nihilistic".
https://en.wikipedia.org/wiki/Language_planning
(Then you could decide what you think about language planning.)
I'm pretty sure any linguist will agree with this definition. All language normalisation is an afterthought.
Fair enough.
1000 watts is a kilowatt
1000 hertz is a kilohertz
1000 metres is a kilometre
1000 litres is a kilolitre
1000 joules is a kilojoule
1000 volts is a kilovolt
1000 newtons is a kilonewton
1000 pascals is a kilopascal
1024 bytes is a kilobyte, because that's what we're used to and we don't want to change to a new prefix
> All words are made up.
Yes, and the made up words of kilo and kibi were given specific definitions by the people who made them up:
* https://en.wikipedia.org/wiki/Metric_prefix
* https://en.wikipedia.org/wiki/Binary_prefix
> […] as long as both parties understand and are consistent in their usage to each other.
And if they don't? What happens then?
Perhaps it would be easier to use the definitions of words as they are set out in standards and regulations, so context is less of an issue.
Kilo was generally understood to mean one thousand long before it was adopted by a standards committee. I know the French love to try and prescribe the use of language, but in most of the world words just mean what people generally understand them to mean; and that meaning can change.
Good for them. People make up their own definitions for words all the time. Some of those people even try to get others to adopt their definition. Very few are ever successful. Because language is about communicating shared meaning. And there is a great deal of cultural inertia behind the kilo = 2^10 definition in computer science and adjacent fields.
Can’t use a dictionary, those bastards try to get us to adopt their definitions.
Inability to communicate isn't what we observe because as I already stated, meaning is shared. Dictionaries are one way shared meaning can be developed, as are textbooks, software source codes, circuits, documentation, and any other artifact which links the observable with language. All of that being collectively labeled culture. The mass of which I analogized with inertia so as to avoid oversimplifications like yours.
My point is that one person's definition does not a culture make. And that adoption of new word definitions is inherently a group cultural activity which requires time, effort, and the willingness of the group to participate. People must be convinced the change is an improvement on some axis. Dictation of a definition from on high is as likely to result in the word meaning the exact opposite in popular usage as not. Your comment seems to miss any understanding or acknowledgement that a language is a living thing, owned by the people who speak it, and useful for speaking about the things which matter most to them, and that credible dictionaries generally don't accept words or definitions until widespread use can be demonstrated.
It seems like some of us really want human language to work like rule-based computer languages. Or think they already do. But all human languages come free with a human in the loop, not a rules engine.
(And by that I mean "what the fuck, no...")
If you're talking loosely, then you can get away with it.
That being said, I think the difference between MiB and MB is niche for most people
90 mm floppy disks. https://jdebp.uk/FGA/floppy-discs-are-90mm-not-3-and-a-half-...
Which I have taken to calling 1440 KiB – accurate and pretty recognizable at the same time.
That page is part right and part wrong.
It is right in claiming that "3.5-inch" floppies are actually 90 mm.
It is wrong in claiming that the earlier "5.25-inch" floppies weren't metric
"5.25-inch" floppies are actually 130 mm as standardised in ECMA-78 [0]
"8-inch" floppies are actually 200 mm as standardised in ECMA-69 [1]
Actually there's a few different ECMA standards for 130 and 200 mm floppies – the physical dimensions are the same, but using different recording mechanisms (FM vs MFM–those of a certain age may remember MFM as "double density", and those even older may remember FM as "single density"), and single-sided versus double-sided.
[0] ECMA-78: Data interchange on 130 mm flexible disk cartridges using MFM recording at 7 958 ftprad on 80 tracks on each side), June 1986: https://ecma-international.org/publications-and-standards/st...
[1] ECMA-69: Data interchange on 200 mm flexible disk cartridges using MFM recording at 13 262 ftprad on both sides, January 1981: https://ecma-international.org/publications-and-standards/st...
Donald Knuth himself said[1]:
> The members of those committees deserve credit for raising an important issue, but when I heard their proposal it seemed dead on arrival --- who would voluntarily want to use MiB for a maybe-byte?! So I came up with the suggestion above, and mentioned it on page 94 of my Introduction to MMIX. Now to my astonishment, I learn that the committee proposals have actually become an international standard. Still, I am extremely reluctant to adopt such funny-sounding terms; Jeffrey Harrow says "we're going to have to learn to love (and pronounce)" the new coinages, but he seems to assume that standards are automatically adopted just because they are there.
If Gordon Bell and Gene Amdahl used binary sizes -- and they did -- and Knuth thinks the new terms from the pre-existing units sound funny -- and they do -- then I feel like I'm in good company on this one.
0: https://honeypot.net/2017/06/11/introducing-metric-quantity....
> I'm a big fan of binary numbers, but I have to admit that this convention flouts the widely accepted international standards for scientific prefixes.
He also calls it “an important issue” and had written “1000 MB = 1 gigabyte (GB), 1000 GB = 1 terabyte (TB), 1000 TB = 1 petabyte (PB), 1000 PB = 1 exabyte (EB), 1000 EB = 1 zettabyte (ZB), 1000 ZB = 1 yottabyte (YB)” in his MMIX book even before the new binary prefixes became an international standard.
He is merely complaining that the new names for the binary prefixes sound funny (and has his own proposal like “large megabyte” and notation MMB etc), but he's still using the kilo/mega/etc prefixes with decimal meanings.
Edit: Disregard the metric bit but I think the rest still stands.
Ummm, what? https://en.wikipedia.org/wiki/Metric_prefix
No, they already did the opposite with KiB, MiB.
Because most metric decimal units are used for non-computing things. Kilometers, etc. Are you seriously proposing that kilometers should be renamed kitrimeters because you think computing prefixes should take priority over every other domain of science and life?
It would be annoying if one frequently found oneself calculating gigabytes per hectare. I don't think I've ever done that. The closest I've seen is measuring magnetic tape density, where you get weird units like "characters per inch", where neither "character" nor "inch" is the common unit for its respective metric.
E.g. Macs measure file sizes in powers of 10 and call them KB, MB, GB. Windows measures file sizes in powers of 2 and calls them KB, MB, GB instead of KiB, MiB, GiB. Advertised hard drives come in powers of 10. Advertised memory chips come in powers of 2.
When you've got a large amount of data or are allocating an amount of space, are you measuring its size in memory or on disk? On a Mac or on Windows?
Especially that it was only partially successful.
Which is not to say that there had been zero confusion; but it was only made worse.
Things like hard drives often used decimal/metric sizing from the start. Because their capacity has always been based on physical platter size and density, not powers of two the way memory is.
So this confusion has been with computing since the beginning. The attempt to introduce units like KiB isn't revisionism, it's an attempt at clarity around something that has always been ambiguous.
And obviously, if you need two separate prefixes, you're going to change the one whose unit of measurement differs from all the rest of science and technology.
Yes it is; it is literally asking people who call 1024 bytes "kilobyte" to stop doing that and say "kibibyte" instead, and to revise the meaning of "kilobyte" to 1000 bytes.
Some people have not stopped doing that, so there is more confusion now. You no longer know whether a fellow engineer is using powers of 1000 or powers of 1024 when using kilobyte, megabyte or gigabyte; it depends on whether they took the red pill or the blue pill.
> You no longer know whether a fellow engineer is using powers of 1000 or powers of 1024 when using kilobyte, megabyte or gigabyte
You never knew this, that's the point. You didn't know it in e.g. 1990, before KiB was introduced in 1998. People didn't only start using powers of 10 once KiB was formally introduced. They'd always used them, but conventions around powers of 10 vs 2 depended greatly on the computing context, and were frequently confusing.
There isn't more confusion now. Fortunately, places that explicitly state KiB result in less confusion because, at least in that case, you know for sure what it is.
Unfortunately, a lot of people won't get on board with it, so the confusion persists.
And frankly, I don't care what you call it when you're speaking, as long as you just use the right label in software and in tech specs.
False: source, I was there. Kilobyte and megabyte were powers of 1024, except in well-delineated circumstances (mass storage devices).
The size labeling of mass storage devices was widely reviled for using a weaselly definition of terms that everyone normally understood to be powers of 1024.
> a lot of people won't get on board with it, so the confusion persists.
The idea that people refusing to change their behavior according to someone's wishes are causing confusion is fallacious.
Of course it's those introducing change that are introducing confusion.
The kibi-mebi people failed to predict human behavior; that they cannot just roll out a vocabulary change to all of humanity the way you roll out a new kernel throughout a machine cluster.
The irony is that you can even find people who were not born at the time, who are using kilobyte to mean 1024 bytes.
Why don't you take a look at Wikipedia which clearly describes the many, many, many places in which powers-of-10 is used, and then also has a section on powers-of-2:
https://en.wikipedia.org/wiki/Byte#Units_based_on_powers_of_...
Remember, it wasn't just hard drives either. It's been data transfer speeds, network speeds, tape capacities, etc. There's an awful lot of stuff in computing that doesn't inherently depend on powers of 2 for its scaling.
And so as long as we have both units and will always have both units, it makes sense to give them different names. And, obviously, the one that matches the SI system should have the same name as it. Can you seriously disagree? Again, I don't care what you say in conversation. But in labels and specifications, how can you argue against it?
And in most cases, using 1024 is more convenient because the sizes of page sizes, disk sectors, etc. are powers of 2.
That doesn't conform to SI. It should be written as kB mB gB. Ambiguity will only arise when speaking.
> Advertised hard drives come in powers of 10.
Mass storage (kB) has its own context at this point, distinct from networking (kb/s) and general computing (KB).
> When you've got a large amount of data or are allocating an amount of space, ...
You aren't speaking but rather working in writing. kb, kB, Kb, and KB refer to four different bit counts, and there is absolutely zero ambiguity. The only question that might arise (depending on who you ask) is how to properly verbalize them.
Little m is milli, big M is mega. Little g doesn’t exist, only big G.
Note that no one is going to confuse mB for millibytes because what would that even mean? But also in practice MB versus Mb aren't ambiguous because except for mass storage no one mixes bytes with powers of ten AFAIK.
And let's take a minute to appreciate the inconsistency of (SI) km vs Mm. KB to GB is more consistent.
Data compression. For example, look at http://prize.hutter1.net/ , heading "Contestants and Winners for enwik8". On 23.May'09, Alex's program achieved 1.278 bits per character. On 4.Nov'17, Alex achieved 1.225 bits per character. That is an improvement of 0.053 b/char, or 53 millibits per character. Similarly, we can talk about how many millibits per pixel JPEG-XL is better than classic JPEG for the same perceptual visual quality. (I'm using bits as the example, but you can use bytes and reach the same conclusion.)
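In code, the improvement works out to:

```python
# The enwik8 improvement, expressed in millibits per character.
bpc_2009 = 1.278   # bits/char, 23 May 2009
bpc_2017 = 1.225   # bits/char, 4 Nov 2017

improvement_millibits = (bpc_2009 - bpc_2017) * 1000
print(round(improvement_millibits))   # 53
```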
Just because you don't see a use for mB doesn't mean it's open for use as a synonym of MB. Lowercase m means milli-, as already demonstrated in countless frequently used units - millilitre, millimetre, milliwatt, milliampere, and so on.
In case you're wondering, mHz is not a theoretical concept either. If you're generating a tone at say 440 Hz, you can talk about the frequency stability in millihertz of deviation.
> Just because you don't see a use for mB doesn't mean it's open for use as a synonym of MB.
At the end of the day it's all down to convention. We've never needed approval from a standards body to do something. Standards are useful to follow when they provide a tangible benefit; following them for their own sake to the detriment of something immediately practical is generally a waste of time and effort.
I don't believe I hallucinated unit notations such as mB and gB. Unfortunately I don't immediately recall where I encountered their use.
> In case you're wondering, mHz is not a theoretical concept either.
Just to be clear, I was not meaning to suggest that non-SI prefixes be used for quantifying anything other than bits. SI standardized prefixes are great for most things.
I gave some examples in my post https://blog.zorinaq.com/decimal-prefixes-are-more-common-th...
Storage capacity also uses binary prefixes. The distinction here isn't that file sizes are reported in binary numbers and storage capacity is reported in decimal numbers. It's that software uses binary numbers and hard drive manufacturers use decimal numbers. You don't see df reporting files in binary units and capacities in decimal units.
Of that large list of measurements, only bandwidth is measured in bytes, making the argument mostly an exercise in sophistry. You can't convince anyone that KB means 1000 bytes by arguing that kHz means 1000 Hz.
https://en.wikipedia.org/wiki/Mile#Roman
https://en.wikipedia.org/wiki/Ancient_Roman_units_of_measure...
I disagreed strongly - I think X-per-second should be decimal, to correspond to Hertz. But for quantity, binary seems better. (modern CS papers tend to use MiB, GiB etc. as abbreviations for the binary units)
Fun fact - for a long time consumer SSDs had roughly 7.37% over-provisioning, because that's what you get when you put X GB (binary) of raw flash into a box, and advertise it as X GB (decimal) of usable storage. (probably a bit less, as a few blocks of the X binary GB of flash would probably be DOA) With TLC, QLC, and SLC-mode caching in modern drives the numbers aren't as simple anymore, though.
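Where the 7.37% comes from:

```python
# Selling X GiB of raw flash as X GB of usable storage leaves:
raw = 1024**3      # one binary GB (GiB) of flash
usable = 1000**3   # one decimal GB exposed to the host

overprovisioning = (raw - usable) / usable
print(f"{overprovisioning:.2%}")   # 7.37%
```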
There are probably cases where corresponding to Hz is useful, but for most users I think 119MiB/s is more useful than 1Gbit/s.
Interestingly, from 10GBit/s, we now also have binary divisions, so 5GBit/s and 2.5GBit/s.
Even at slower speeds, these were traditionally always decimal based - we call it 50bps, 100bps, 150bps, 300bps, 1200bps, 2400bps, 9600bps, 19200bps and then we had the odd one out - 56k (actually 57600bps) where the k means 1024 (approximately), and the first and last common speed to use base 2 kilo. Once you get into MBps it's back to decimal.
56000 BPS was the bitrate you could get out of a DS0 channel, which is the digital version of a normal phone line. A DS0 is actually 64000 BPS, but 1 bit out of 8 is "robbed" for overhead/signalling. An analog phone line got sampled to 56000 BPS, but lines were very noisy, which was fine for voice, but not data.
7 bits per sample * 8000 samples per second = 56000, not 57600. And that was the theoretical maximum bandwidth! The FCC also capped modems at 53K or something, so you couldn't even get 56000, not even on a good day.
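The DS0 arithmetic in one place:

```python
# DS0 channel arithmetic behind the "56k" modem.
samples_per_second = 8000
bits_per_sample = 8

ds0 = samples_per_second * bits_per_sample   # 64000 bps raw
usable = ds0 - samples_per_second            # 1 bit per sample is "robbed"
print(ds0, usable)                           # 64000 56000
```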
I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000 byte pages.
Now ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is octets! Starting with the preamble. But the payload is up to an annoying 1500 (decimal) octets. The _minimum_ frame length is defined for CSMA/CD to work, but the max could have been anything.
RAM had binary sizing for perfectly practical reasons. Nothing else did (until SSDs inherited RAM's architecture).
We apply it to all the wrong things mostly because the first home computers had nothing but RAM, so binary sizing was the only explanation that was ever needed. And 50 years later we're sticking to that story.
- magnetic media
- optical media
- radio waves
- time
There's good reasons for having power-of-2 sectors (they need to get loaded into RAM), but there's really no compelling reason to have a power-of-2 number of sectors. If you can fit 397 sectors, only putting in 256 is wasteful.
The choice would be effectively arbitrary, the number of actual bits or bytes is the same regardless of the multiplier that you use. But since it's for a computer, it makes sense to use units that are comparable (e.g. RAM and HD).
Just later, some marketing assholes thought they could better sell their hard drives when they lie about the size and weasel out of legal issues with redefining the units.
The decimal-vs-binary discrepancy is used more as slack space to cope with the inconvenience of having to erase whole 16MB blocks at a time while allowing the host to send write commands as small as 512 bytes. Given the limited number of program/erase cycles that any flash memory cell can withstand, and the enormous performance penalty that would result from doing 16MB read-modify-write cycles for any smaller host writes, you need way more spare area than just a small multiple of the erase block size. A small portion of the spare area is also necessary to store the logical to physical address mappings, typically on the order of 1GB per 1TB when tracking allocations at 4kB granularity.
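A rough sketch of where the "1GB per 1TB" figure comes from; the 4-byte entry size is my assumption, since it isn't stated above:

```python
# Flat logical-to-physical mapping table for a 1TB drive,
# tracking allocations at 4kB granularity.
capacity = 10**12            # 1 TB (decimal)
granularity = 4 * 1024       # 4 kB mapping granularity
entry_bytes = 4              # assumed bytes per table entry

table_bytes = capacity // granularity * entry_bytes
print(table_bytes / 10**9)   # ~0.98 GB, i.e. on the order of 1GB per 1TB
```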
(The old excuse was that networks are serial but they haven't been serial for decades.)
The same and even more confusion is engendered when talking about "fifths" etc.
Call me recalcitrant, reactionary, or whatever, but I will not say kibibyte out loud. It's a dumb word and I'm not using it. It was a horrible choice.
https://en.wikipedia.org/wiki/Byte#Multiple-byte_units
"the C64 took its name from its 64 kilobytes (65,536 bytes) of RAM"
"I bought a two tib SSD."
"I just want to serve five pibs."
no you didn't, that doesn't exist, you bought 2 trillion bytes, 199 billion bytes short
Perhaps we can simplify this compromise and have a kilobyte as 1024 bytes, a megabyte as 1024000 bytes, a gigabyte as 1048576000 bytes and a terabyte as 1048576000000 bytes.
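Amusingly, the middle one isn't hypothetical: the "1.44 MB" floppy already uses the 1,024,000-byte megabyte. The proposed units just alternate multiplying by 1024 and 1000:

```python
# The compromise units alternate between x1024 and x1000 steps.
KB = 1024
MB = 1000 * KB   # 1024000: the floppy-disk "megabyte"
GB = 1024 * MB   # 1048576000
TB = 1000 * GB   # 1048576000000

# The "1.44 MB" floppy really does hold 1.44 of these megabytes:
print(round(1.44 * MB))   # 1474560 = 2880 sectors * 512 bytes
```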
Sometimes, other systems just make more sense.
For example, for time, or angles, or bytes. There are properties of certain numbers (or bases) that make everything descending from them easier to deal with.
for angles and time (and feet): https://en.wikipedia.org/wiki/Superior_highly_composite_numb...
For other problems we use base 2, 3, 8, 16, or 10.
Must we treat metric as a hammer, and every possible problem as a nail?
The ancient Sumerians used multiples of 60, as we continue to do for time and angles (which are related) today. It makes perfect sense. 60 is divisible by 2, 3, 4, 5, and 6, which makes it easy to use in calculations. Even the metric people are not so crazy as to propose replacing these with powers of 10.
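The divisibility claim is easy to verify with a small sketch:

```python
def divisors(n: int) -> list[int]:
    """Return all positive divisors of n, in ascending order."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60] -- 12 divisors
print(divisors(10))  # [1, 2, 5, 10] -- only 4; thirds and quarters need fractions
```

Sixty has twelve divisors; ten has four, which is why halving, thirding, and quartering an hour all land on whole minutes.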
Same with pounds, for example. A pound is 16 ounces, which can be halved 4 times without involving any fractions. Try that with metric.
Then there's temperature. Fahrenheit just works more naturally over the human-scale temperature range without involving fractions. Celsius kind of sucks by comparison.
Not sure if you're actually serious... 1 kg is 1000 g, dividing with 4 gets you 250 g, no fractions. And no need to remember arbitrary names or numbers for conversions.
> Then there's temperature. Fahrenheit just works more naturally over the human-scale temperature range without involving fractions. Celsius kind of sucks by comparison.
Again, I'm not sure I get it. With celsius, 0°C is freezing temperature of water and 100°C is boiling point of water. For fahrenheit it was something like 32 and 212? And in every day use, people don't need fractions, only full degrees. Celsius also aligns well with Kelvins without fractions (unlike fahrenheit).
But Fahrenheit aligns well with Rankine without fractions (unlike Celsius). [Imagine some symbol here indicating humour.]
IOW each Celsius degree is bigger than each Fahrenheit degree.
Even though the F numbers are so much higher and it seems unbearably hot :)
So for a thermostat that only can be set in 1 degree increments (without a decimal point), you have finer control when using F than using C.
Anybody can memorize the conversion more easily by throwing out the math, using table lookup -- made easier by throwing out most of the table too.
Just remember that every multiple of 5 C lands on a whole F.
And every 5 C step equals 9 F.
If all you are interested in is comfort level it's like this:
C F
0 32
5 41
10 50
15 59
20 68
25 77
30 86
35 95
40 104
Least significant digit of F drops by 1 every time without fail. Looks like it increases by 1 each time in the tens column, but the step is only 9, so 50 & 59 are the outliers, which most people have memorized already.
If you are a Celsius native and you think in terms of 10, 15, 20, 25, 30 -- you only need to remember 5 different F numbers, 50, 59, 68, 77 & 86 and that will get you far.
Good luck using these as your lottery numbers ;)
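The whole table falls out of the exact conversion F = C × 9/5 + 32, which lands on a whole number at every 5 °C step (a small sketch regenerating the table above):

```python
# Every 5 C step adds exactly 9 F, so 9*c is always divisible by 5 here
# and integer division is exact.
for c in range(0, 45, 5):
    f = c * 9 // 5 + 32
    print(f"{c:>3} C = {f:>3} F")
```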
So, I appreciate your rendition of things I have tables for already, but any actual need is sadly nonexistent.
Whereas in C, 0 is fine and 100 means you died 50 degrees ago.
However, C is much more useful in industry, where boiling and freezing points are more important.
In the end, it's probably what one is used to. Temperatures here are typically between -20'C and +30'C.
Dividing by four is not the same as dividing four times.
I'm pretty sure that you don't
1000 g, 500 g, 250 g, 125 g
I also don't understand the fear around fractions - we deal with halves, quarters and fifths all the time in the natural world.
Yes, and a certain fast food company found that their 1/3 lb burgers weren't selling well, because their idiot customers can't maff too good and thought 1/4 was bigger than 1/3.
No, they were absolutely that crazy [1]. Luckily the proposal fell through.
And you can go with 120 or, better, 210 so you get 7 in.
Pure madness.
Well I guess we already basically have this in practice, since Ki can be shortened to K: the metric kilo prefix is always a lower-case k, and we clearly aren't talking about kelvin-bytes.
The author doesn’t actually answer their question, unless I missed something?
They go on to make a few more observations, and say finally only that the current different definitions are sometimes confusing, to non experts.
I don’t see much of an argument here for changing anything. Some non-experts experience minor confusion about two things that are different. Did I miss something bigger in this?