Comment by shawnz 6 hours ago

I don't think I see the vision here. If you want to maximize the number of tokens in a prefix code while still being able to output any sequence of characters, how could you possibly pick anything other than the one-character tokens?

Are you saying you'd intentionally make some output sequences impossible, on the basis that they're not likely enough to be worth violating the prefix code for? Surely there are enough common short words like "a", "the", etc. that this would be impractical?

And even excluding the cases that are trivially impossible due to having short words as a prefix, surely even the longer words share prefixes commonly enough that you'd never get tokens longer than, say, two characters in the best case? Like, so many words start with "st" or "wh" or "re" or whatever, how could you possibly have a prefix code that captures all of them, or even the most common ones, without it being uselessly short?
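
As a concrete check of that intuition, here's a tiny sketch (Python; the word list is an arbitrary sample of mine, purely for illustration):

    words = ["a", "an", "and", "the", "then", "there", "they",
             "in", "into", "is", "it", "of", "on", "one", "or", "to"]
    # Pairs where one common word is a prefix of another: a prefix code
    # over whole words could keep at most one token from each such pair.
    clashes = [(w, v) for w in words for v in words
               if w != v and v.startswith(w)]
    print(clashes)
    # [('a', 'an'), ('a', 'and'), ('an', 'and'), ('the', 'then'),
    #  ('the', 'there'), ('the', 'they'), ('in', 'into'), ('on', 'one')]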

akoboldfrying 5 hours ago

> Surely there are enough common short words like "a", "the", etc. that this would be impractical?

Tokens don't have to correspond to words. The 2-character tokens "a " and " a" will cover all practical uses of the lowercase word "a". Yes, this does make some strings unrepresentable, such as the single-character string "a", but provided you have tokens "ab", "ba", "ac", "ca", etc., all other strings can be represented. In practice you won't have all such tokens, but this doesn't materially worsen the output provided the substrings that you cannot represent are all low-probability.
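
A minimal sketch of that scheme (Python; the vocabulary and helper names are mine, and I keep only the "a"-initial pair tokens, since a token like "ba" would no longer be prefix-free alongside the single-character token "b"):

    import string

    # Drop the standalone token "a", keep every other single character,
    # and add the two-character tokens "a" + c for each character c.
    chars = string.ascii_lowercase + " "
    vocab = {c for c in chars if c != "a"} | {"a" + c for c in chars}

    def is_prefix_free(vocab):
        # After sorting, any prefix relation shows up between neighbours.
        toks = sorted(vocab)
        return all(not b.startswith(a) for a, b in zip(toks, toks[1:]))

    def tokenize(s, vocab):
        # Prefix-free means at most one token matches at each position,
        # so greedy left-to-right decoding is the only possible decoding.
        out, i = [], 0
        while i < len(s):
            tok = next((t for t in vocab if s.startswith(t, i)), None)
            if tok is None:
                return None  # unrepresentable
            out.append(tok)
            i += len(tok)
        return out

    assert is_prefix_free(vocab)
    for s in ["a word", "banana ", "a", "banana"]:
        print(repr(s), tokenize(s, vocab))
    # 'a word'  -> ['a ', 'w', 'o', 'r', 'd']
    # 'banana ' -> ['b', 'an', 'an', 'a ']
    # 'a'       -> None  (the string called out above)
    # 'banana'  -> None  (any string ending in a bare "a" is lost)

Under this toy vocabulary the unrepresentable strings are exactly those ending in a bare "a"; the bet is that such strings are rare enough in practice not to matter.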

  • shawnz 5 hours ago

    Ah yeah, factoring in the whitespace might make this a bit more practical