Posted by vishnuharidas 9/12/2025
I don't know if you have ever had to use White-Out to correct typing errors on a typewriter that couldn't correct them on its own, but before White-Out, the only option was to start typing the letter again from the beginning.
0x7f was White-Out for punched paper tape: it allowed you to strike out an incorrectly punched character so that the message, when it was sent, would print correctly. ASCII inherited it from the Baudot–Murray code.
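To make the mechanics concrete (a toy sketch of my own, not anything from a real Teletype driver): a punched hole is a 1 bit, and holes can only be added, never filled in, so overpunching any 7-bit character with all seven holes turns it into 0x7f, which the printing equipment simply skips.

```python
# Why DEL (0x7f) works as tape "White-Out": a punched hole is a 1 bit,
# and you can only add holes, never fill them in. 0x7f is all seven
# bits set, so overpunching any character yields DEL.
def overpunch_to_del(char_code: int) -> int:
    """Punch out every remaining hole in a 7-bit frame."""
    return char_code | 0x7f  # adding holes can only set bits

for c in "ERROR":
    assert overpunch_to_del(ord(c)) == 0x7f

# A reader that skips DEL reproduces the corrected text:
tape = [ord(c) for c in "HELLP"]   # mispunched last character
tape[4] |= 0x7f                    # strike it out
tape.append(ord("O"))              # punch the correction
print("".join(chr(b) for b in tape if b != 0x7f))  # -> HELLO
```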
It's been obsolete since people started punching their tapes on computers instead of Teletypes and Flexowriters, so around 01975, and maybe before; I don't know if there was a paper-tape equivalent of a duplicating keypunch, but if there was, it would seem to eliminate the need for the delete character. TECO and cheap microcomputers certainly did.
This means that frame numbers in a FLAC file can go up to 2^36-1, so a FLAC file can have up to 68,719,476,735 frames. If it was recorded at a 48kHz sample rate, there will be 48,000 frames per second, meaning a FLAC file at 48kHz can (in theory) be about 1.43 million seconds long, or roughly 16.6 days.
So if Unicode ever needs to encode 68.7 billion characters, well, extended seven-byte UTF-8 will be ready and waiting. :-D
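For the curious, a toy sketch of that generalized scheme (the function and names are mine, not from the FLAC or Unicode specs): the lead byte's run of 1 bits gives the total sequence length and every continuation byte carries six payload bits, so a seven-byte sequence with lead byte 0xFE carries exactly 6 × 6 = 36 bits.

```python
# A sketch of UTF-8's length scheme generalized past 4 bytes: the lead
# byte's run of 1 bits gives the sequence length, and every
# continuation byte is 10xxxxxx with 6 payload bits.

def encode_extended_utf8(n: int) -> bytes:
    if n < 0x80:                       # 1 byte, 7 payload bits
        return bytes([n])
    for length in range(2, 8):         # 2..7 byte sequences
        payload_bits = 6 * (length - 1) + (7 - length)
        if n < (1 << payload_bits):
            break
    else:
        raise ValueError("needs more than 36 bits")
    out = []
    for _ in range(length - 1):        # continuation bytes, low bits first
        out.append(0x80 | (n & 0x3F))
        n >>= 6
    lead_mask = (0xFF << (8 - length)) & 0xFF   # e.g. 0xFE for 7 bytes
    out.append(lead_mask | n)
    return bytes(reversed(out))

# 2**36 - 1 needs the full 7-byte form: lead byte 0xFE, six 0xBF bytes.
print(encode_extended_utf8(2**36 - 1).hex())   # -> febfbfbfbfbfbf
# Ordinary code points still come out as standard UTF-8:
print(encode_extended_utf8(0x20AC).hex())      # -> e282ac ('€')
```

Standard UTF-8 as specified in RFC 3629 stops at four bytes and U+10FFFF; the five-, six- and seven-byte forms above are the "extended" part.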
Now, in hindsight, it makes more sense to use UTF-8, but 20 years ago it wasn't clear it was worth it.
Once enough people accepted that this approach was impractical, UCS-2 was replaced with UTF-16 and surrogate pairs. At that point it was clear that UTF-8 was better in almost every scenario, because neither had an advantage for random access and UTF-8 was usually substantially smaller.
Storage-wise, UTF-8 is usually better since so much data is ASCII with maybe the occasional accented character. The speed issue only really mattered to Windows NT, since that was UCS-2 inside, but it wasn't a problem for most others.
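A quick way to get a feel for that size difference, using Python's built-in codecs (UTF-16 measured without a BOM); the strings are just my own throwaway samples:

```python
# Rough size comparison for a few strings.
samples = {
    "ASCII-heavy": "The quick brown fox jumps over the lazy dog.",
    "Accented":    "naïve café résumé jalapeño déjà vu",
    "Greek":       "Η γρήγορη καφέ αλεπού",
    "CJK":         "春眠不覚暁 処処聞啼鳥",
}

for name, text in samples.items():
    u8  = len(text.encode("utf-8"))
    u16 = len(text.encode("utf-16-le"))   # -le: no BOM
    print(f"{name:12s} utf-8: {u8:3d} bytes   utf-16: {u16:3d} bytes")
```

For ASCII-heavy text UTF-16 roughly doubles the size, while for CJK-heavy text it can actually come out smaller; the "usually" holds because real-world data is so often ASCII markup and identifiers around the occasional non-ASCII run.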
So, it won't fill up during our lifetime I guess.
Imagine the code points we'll need to represent an alien culture :).
If we ever needed that many characters, yes the most obvious solution would be a fifth byte. The standard would need to be explicitly extended though.
But that would probably require having encountered literate extraterrestrial species to collect enough new alphabets to fill up all the available code points first. So it seems like it would be a pretty cool problem to have.
So what would need to happen first is that Unicode decides to include larger code points. Then UTF-8 would need to be extended to encode them. (But I don't think that will happen.)
It seems like Unicode code points are less than 30% allocated, roughly. So there's more than 70% free space.
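Back-of-the-envelope check, with approximate counts (the assigned-character figure drifts a bit between Unicode versions, so treat it as a rough number):

```python
# Rough check of the "less than 30% allocated" claim, with approximate
# counts for a recent Unicode version.
TOTAL       = 0x110000             # 1,114,112 code points (U+0000..U+10FFFF)
ASSIGNED    = 155_000              # approx. characters with assigned meanings
PRIVATE_USE = 6_400 + 2 * 65_534   # BMP PUA + planes 15 and 16
SURROGATES  = 2_048                # U+D800..U+DFFF, reserved for UTF-16

designated = ASSIGNED + PRIVATE_USE + SURROGATES
print(f"{designated:,} of {TOTAL:,} = {designated / TOTAL:.0%} spoken for")
# -> 294,516 of 1,114,112 = 26% spoken for
```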
---
Think of these as three separate concepts to keep it clear. We are effectively dealing with two translations: one from the abstract symbol to a defined Unicode code point, and then from that code point to bytes via UTF-8 (there's a short example after the list).
1. The glyph or symbol ("A")
2. The Unicode code point for the symbol (U+0041 Latin Capital Letter A)
3. The UTF-8 encoding of the code point, as bytes (0x41)
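A quick Python sketch of the same three layers, adding "€" to show a case where the encoded bytes stop matching the code point value:

```python
# The three layers for "A" and "€": symbol, code point, encoded bytes.
for ch in ("A", "€"):
    cp = ord(ch)                    # Unicode code point (layer 2)
    encoded = ch.encode("utf-8")    # UTF-8 bytes (layer 3)
    print(f"{ch!r}  U+{cp:04X}  utf-8 bytes: {encoded.hex(' ')}")

# 'A'  U+0041  utf-8 bytes: 41         <- ASCII: bytes equal the code point
# '€'  U+20AC  utf-8 bytes: e2 82 ac   <- multi-byte: bytes != code point
```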
Because if so: I don't really like that. It would mean that "equal sequence of code points" does not imply "equal sequence of encoded bytes" (the converse continues to hold, of course), while offering no advantage that I can see.
I realize that hindsight is 20/20 and times were different, but let's face it: "how to use an unused top bit to best encode larger numbers representing Unicode" is not that much of a challenge, and the space of practical solutions isn't even all that large.
UTF-8 is the best kind of brilliant. After you've seen it, you (and I) think of it as obvious, and clearly the solution any reasonable engineer would come up with. Except that it took a long time for it to be created.