I often find myself explaining the same things in real life and online, so I recently started writing technical blog posts.
This one is about why it was a mistake to call 1024 bytes a kilobyte. It's about a 20-minute read, so thank you very much in advance if you find the time to read it.
Feedback is very much welcome. Thank you.
Weird numbering system? Things are still stored in blocks of 8 bits in the end, so it doesn't matter.
When it comes down to what matters on hard drives, every byte still uses 8 bits, and all the other numbers that matter to people actually working in computer science are powers of two, not ten.
And because all internal systems work in binary, base 10 is "slower" (not that it matters any longer).
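To make the powers-of-two point concrete, here is a minimal Python sketch of my own (the 500 GB drive size is hypothetical, not taken from the post): scaling by a power of two is a plain bit shift, scaling by a power of ten needs ordinary division, and the two readings of the same prefix drift apart.

```python
# Rough sketch: binary vs. decimal size prefixes (hypothetical 500 GB drive).
size_bytes = 500_000_000_000

# Powers of two can be applied with a bit shift...
gib = size_bytes >> 30        # divide by 2**30 (1 GiB)

# ...while powers of ten need ordinary integer division.
gb = size_bytes // 10**9      # divide by 10**9 (1 GB)

print(f"{size_bytes} bytes = {gb} GB (decimal) but only {gib} GiB (binary)")
# -> 500000000000 bytes = 500 GB (decimal) but only 465 GiB (binary)
```

On modern hardware the shift-vs-divide difference is negligible, as the comment itself concedes; the gap between GB and GiB in the output, on the other hand, is exactly the confusion the linked post is about.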