Comment by baq a day ago

ASCII is very convenient when it fits in the solution space (it'd better be, it was designed for a reason), but in the global, international, connected computing world it doesn't fit at all. The problem is that all the tutorials, especially low-level ones, assume ASCII, 1) so you can print something to the console, and 2) to avoid mentioning that strings are hard, so folks don't get discouraged.

Notably Rust did the correct thing by defining multiple slightly incompatible string types for different purposes in the standard library and regularly gets flak for it.

craftkiller a day ago

> Notably Rust did the correct thing

In addition to separate string types, they have separate iterator types that let you explicitly get the value you want. So:

  String.len() == number of bytes
  String.bytes().count() == number of bytes
  String.chars().count() == number of Unicode scalar values
  String.graphemes().count() == number of graphemes (requires the unicode-segmentation crate, which is not in the stdlib)
  String.lines().count() == number of lines
Really my only complaint is that I don't think String.len() should exist; it's too ambiguous. We should have to state explicitly what we want/mean via the iterators (see the runnable sketch below).
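
A runnable version of those counts (graphemes() assumes the unicode-segmentation crate mentioned above; the sample string is an emoji ZWJ sequence, where the counts diverge sharply):

  use unicode_segmentation::UnicodeSegmentation;

  fn main() {
      let s = "🤦🏼‍♂️"; // facepalm + skin tone + ZWJ + male sign + variation selector
      assert_eq!(s.len(), 17);                  // bytes
      assert_eq!(s.bytes().count(), 17);        // same as len()
      assert_eq!(s.chars().count(), 5);         // Unicode scalar values
      assert_eq!(s.graphemes(true).count(), 1); // one grapheme cluster
      assert_eq!(s.lines().count(), 1);         // one line
  }
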
  • pron a day ago

    Similar to Java:

      String.chars().count(), String.codePoints().count(), and, for historical reasons, String.getBytes(StandardCharsets.UTF_8).length
  • westurner a day ago

      String.graphemes().count()
    
    That's a real nice API. (Similarly, Python has @ for matmul, but there is no matmul implementation in the stdlib. NumPy has a matmul implementation, so the `@` operator works.)

    ugrapheme and ucwidth are one way to get the grapheme count from a string in Python.

    It's probably possible to get the grapheme cluster count from a string containing emoji characters with ICU?

    • dhosek a day ago

      Any correctly designed grapheme cluster implementation handles emoji characters. It's part of the spec (says the guy who wrote a Unicode segmentation library for Rust).

account42 a day ago

> in the global international connected computing world it doesn’t fit at all.

I disagree. Not all text is human prose. For example, there is nothing wrong with a programming language that only allows ASCII in the source code, and there are many downsides to allowing non-ASCII characters outside string constants or comments.

  • andriamanitra 11 hours ago

    > For example, there is nothing wrong with a programming language that only allows ASCII in the source code, and there are many downsides to allowing non-ASCII characters outside string constants or comments.

    That's a tradeoff you should carefully consider because there are also downsides to disallowing non-ASCII characters. The downsides of allowing non-ASCII mostly stem from assigning semantic significance to upper/lowercase (which is itself a tradeoff you should consider when designing a language). The other issue I can think of is homographs but it seems to be more of a theoretical concern than a problem you'd run into in practice.

    When I first learned programming I used my native language (Finnish, which uses 3 non-ASCII letters: åäö) not only for strings and comments but also for identifiers. Back then UTF-8 was not yet universally adopted (the ISO 8859-1 character set was still relatively common), so I occasionally encountered issues that I had no means to understand at the time. As programming is taught to younger and younger audiences, it's not reasonable to expect kids from (insert your favorite non-English-speaking country) to know enough English to use it for naming. Naming and, to an extent, thinking in English requires a vocabulary orders of magnitude larger than knowing the keywords.

    By restricting source code to ASCII only, you also lose the ability to use domain-specific notation like mathematical symbols/operators and Greek letters. For example, in Julia you may use some mathematical operators (e.g. ÷ for Euclidean division, ⊻ for exclusive or, ∈/∉/∋ for checking set membership), and I find it really makes code more pleasant to read.

  • eviks 13 hours ago

    The "nothing wrong" is, of course, this huge issue of not being able to use your native language, especially important when learning something by avoiding the extra language barrier on top of another language barrier

    Now list anything as important from your list of downsides that's just as unfixable

  • simonask a day ago

    This is American imperialism at its worst. I'm serious.

    Lots of people around the world learn programming from sources in their native language, especially early in their career, or when software development is not their actual job.

    Enforcing ASCII is the same as enforcing English. How would you feel if all cooking recipes were written in French? If all music theory was in Italian? If all industrial specifications were in German?

    It's fine to have a dominant language in a field, but ASCII is a product of technical limitations that we no longer have. UTF-8 has been an absolute godsend for human civilization, despite its flaws.

    • 0x000xca0xfe a day ago

      Well I'm not American and I can tell you that we do not see English source code as imperialism.

      In fact it's awesome that we have one common very simple character set and language that works everywhere and can do everything.

      I have only encountered source code using my native language (German) in comments or variable names in highly unprofessional or awful software and it is looked down upon. You will always get an ugly mix and have to mentally stop to figure out which language a name is in. It's simply not worth it.

      Please stop pushing this UTF-8 everywhere nonsense. Make it work great on interactive/UI/user facing elements but stop putting UTF-8-only restrictions in low-level software. Example: Copied a bunch of ebooks to my phone, including one with a mangled non-UTF-8 name. It was ridiculously hard to delete the file as most Android graphical and console tools either didn't recognize it or crashed.
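
      A side note on that last point: byte-oriented path APIs sidestep it. A minimal Unix-only Rust sketch (the filename bytes here are made up) showing that a file whose name is not valid UTF-8 can still be addressed and deleted:

        use std::ffi::OsString;
        use std::os::unix::ffi::OsStringExt; // Unix-only extension trait

        fn main() -> std::io::Result<()> {
            // A filename containing the invalid-UTF-8 byte 0xE9 (a stray Latin-1 'é').
            let name = OsString::from_vec(b"book\xE9.epub".to_vec());
            std::fs::remove_file(&name) // works even though `name` is not UTF-8
        }
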

      • flohofwoe a day ago

        > Please stop pushing this UTF-8 everywhere nonsense.

        I was with you until this sentence. UTF-8 everywhere is great exactly because it is ASCII-compatible: every ASCII string is automatically also a valid UTF-8 string, so UTF-8 is a natural upgrade path from ASCII. Both are just encodings for the same Unicode code points; ASCII simply cannot go beyond the first 128 code points, and that's where UTF-8 comes in, in a way that is backward compatible with ASCII - which is the one ingenious feature of the UTF-8 encoding.
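
        A quick check of that claim in plain Rust (std only): ASCII bytes always pass UTF-8 validation unchanged, and only code points past the ASCII range switch to multi-byte sequences.

          fn main() {
              let ascii = b"hello world".to_vec();
              let s = String::from_utf8(ascii).unwrap(); // always Ok for pure ASCII
              assert_eq!(s, "hello world");
              // Beyond the ASCII range, UTF-8 uses multi-byte sequences:
              assert_eq!("é".len(), 2);           // two bytes...
              assert_eq!("é".chars().count(), 1); // ...one code point
          }
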

      • sussmannbaka 9 hours ago

        You say this because your native language broadly fits into ASCII; you would sing a different tune if it didn't.

    • jibal a day ago

      It's neither American nor imperialism -- those are both category mistakes.

      Andreas Rumpf, the designer of Nim, is Austrian. All the keywords of Nim are in English, the library function names are in English, the documentation is in English, Rumpf's book Mastering Nim is in English, and the other major book for the language, Nim In Action (written by Dominik Picheta, nationality unknown but not American), is in English. This is not "American imperialism" (which is a real thing that I don't defend); it's for easily understandable pragmatic reasons. And the language parser doesn't disallow non-ASCII characters, but it doesn't treat them linguistically, and it has special rules for casefolding identifiers that recognize only ASCII letters, hobbling the use of non-ASCII identifiers because case distinguishes types from other identifiers. The reason for this lack of linguistic Unicode handling is simply to make the lexer smaller and faster.

      • rurban 7 hours ago

        > The reason for this lack of handling of Unicode linguistically is simply to make the lexer smaller and faster.

        No, it is actually for security reasons. Once you allow non-ASCII identifiers, identifiers become non-identifiable. Only Zig recognized that. Nim allows insecure identifiers. https://github.com/rurban/libu8ident/blob/master/doc/c11.md#...

        • jibal 4 hours ago

          Reading is fundamental. I was referring to the Nim lexer. Obviously the reason that it "allows insecure identifiers" is not "actually for security reasons". It is, as I stated, for reasons of performance ... I know this from reading the code and the author's statements.

      • simonask 7 hours ago

        I mean, the keywords of a programming language have to be in some language (unless you go the cursed route of Excel). I'm arguing against the position that non-ASCII identifiers should be disallowed.

    • account42 a day ago

      Actually, it would be great to have a lingua franca in every field that all participants can understand. Are you also going to complain that biologists and doctors are expected to learn some rudimentary Latin? English being dominant in computing is absolutely a strength, and we gain nothing by trying to combat that. Having support for writing your code in other languages is not going to change the fact that most libraries will use English, most documentation will be in English, and most people you can ask for help will understand English. If you want to participate and refuse to learn English, you are only shooting yourself in the foot - and if you are going to learn English, you may as well do it from the beginning. Also, due to the dominance of English and ASCII in computing history, most languages already have ASCII alternatives for their writing, so even if you need to refer to non-English names you can do that using only ASCII.

      • simonask a day ago

        Well, the problem is that what you are advocating would also make knowing Latin a prerequisite for studying medicine, which it isn't anywhere. That's the equivalent. Doctors learn a (very limited) Latin vocabulary as they study and work.

        You severely underestimate how far you can get without any real command of the English language. I agree that you can't become really good without it, just like you can't do haute cuisine without some French, but the English language is a huge and unnecessary barrier to entry that you would put in front of everyone in the world who isn't immersed in the language from an early age.

        Imagine learning programming using only your high school Spanish. Good luck.

    • flohofwoe a day ago

      Calm down. ASCII is a Unicode-compatible encoding for the first 128 Unicode code points (which map directly onto the entire ASCII range). If you need to go beyond that, just 'upgrade' to the UTF-8 encoding.

      Unicode is essentially a superset of ASCII, and the UTF-8 encoding also contains ASCII as a compatible subset (e.g. for the first 128 Unicode code points, a UTF-8 encoded string is byte-for-byte identical to the same string encoded in ASCII).

      Just don't use any of the Extended ASCII flavours (e.g. "8-bit ASCII with codepages") or any of the legacy 'national' multibyte encodings (Shift-JIS etc.), because that's how you get the infamous `?????` or `♥♥♥♥♥` mismatches commonly associated with 'ASCII' (but that is not ASCII; it's some flavour of Extended ASCII decoded with the wrong codepage).
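
      A tiny illustration of that codepage failure mode, assuming the encoding_rs crate (not in the stdlib): the same byte decodes to different characters under different legacy encodings.

        use encoding_rs::{ISO_8859_7, WINDOWS_1252};

        fn main() {
            let byte = [0xE4u8];
            let (win, _, _) = WINDOWS_1252.decode(&byte);
            let (greek, _, _) = ISO_8859_7.decode(&byte);
            assert_eq!(win, "ä");   // Windows-1252: Latin a-umlaut
            assert_eq!(greek, "δ"); // ISO-8859-7: Greek delta
        }
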

    • ksenzee a day ago

      I don’t see much difference between the amount of Italian you need for music and the amount of English you need for programming. You can have a conversation about it in your native language, but you’ll be using a bunch of domain-specific terms that may not be in your native language.

      • simonask 7 hours ago

        I agree, but we're talking about identifiers in code you write yourself here. Not the limited vocabulary of keywords, which are easy to memorize in any language. Standard libraries may trip you up, but documentation for those may be available in your native language.

    • nkrisc a day ago

      There was a time when most scientific literature was written in French. People learned French. Before that it was Latin. People learned Latin.

      • tehjoker a day ago

        This is true, but it's important to recognize that this was because of the French (Napoleonic) and Roman empires, and Christianity, just as the brutal American and UK empires created these circumstances today.

        • wredcoll 20 hours ago

          The Napoleonic empire lasted about 15 years, so that's a bit of a stretch.

          More relevantly though, good things can come from people who also did bad things; this isn't to justify doing bad things in hopes something good also happens, but it doesn't mean we need to ideologically purge good things based on their creators.

bigstrat2003 a day ago

> in the global international connected computing world it doesn’t fit at all.

Most people aren't living in that world. If you're working at Amazon or some business that needs to interact with many countries around the globe, sure, you have to worry about text encoding quite a bit. But the majority of software is being written for a much narrower audience, probably for one single language in one single country. There is simply no reason for most programmers to obsess over text encoding the way so many people here like to.

  • arp242 21 hours ago

    No one is "obsessing" over anything. The reality is that there are very few cases where you can use a single 8-bit character set and not run into problems sooner or later. Say your software is used only in Greece, so you use ISO-8859-7 for Greek. That works fine, but now you want to talk to your customer Günther from Germany who has been living in Greece for the last five years, or Clément from France, or Seán from Ireland, and oops, you can't.

    Even plain English text can't always be represented with plain ASCII (think café or naïve), although ISO-8859-1 goes a long way.

    There are some cases where just plain ASCII is okay, but there are quite few of them (and even those are somewhat controversial).

    The solution is to just use UTF-8 everywhere. Or maybe UTF-16 if you really have to.

  • rileymat2 a day ago

    Except this is a response to emoji support, which has encoding issues even if your user base is in the US and only speaks English. Additionally, it is easy to run into issues with data that your users bring in from other sources via copy and paste.

  • wat10000 20 hours ago

    Which audience makes it so you don’t have to worry about text encodings?

  • raverbashing a day ago

    This is naive at best

    Here's a better analogy: in the '70s "nobody planned" for names with 's in them. SQL injections, separators, "not in the alphabet", whatever. In the US. Where a lot of people with 's in their names live... or with double-barrelled names.

    It's a much simpler problem, and it still tripped up a lot of people.

    And then you have to support a user with a "funny name" or a business with "weird characters", or you expand your startup to Canada/Mexico and lo and behold...

    • ryandrake a day ago

      Yea, I cringe when I hear the phrase "special characters." They're only special because you, the developer, decided to treat them as special, and that's almost surely going to come back to haunt you at some point in the form of a bug.

flohofwoe a day ago

ASCII is totally fine as an encoding for the first 128 Unicode code points. If you need to go above those 128 code points, use a different encoding like UTF-8.

Just never ever use Extended ASCII (8-bits with codepages).

eru a day ago

Python 3 deals with this reasonably sensibly, too, I think. It uses UTF-8 by default but allows you to specify other encodings.

  • ynik a day ago

    Python 3 internally uses UTF-32. When exchanging data with the outside world, it uses the "default encoding" which it derives from various system settings. This usually ends up being UTF-8 on non-Windows systems, but on weird enough systems (and almost always on Windows), you can end up with a default encoding other than UTF-8. "UTF-8 mode" (https://peps.python.org/pep-0540/) fixes this but it's not yet enabled by default (this is planned for Python 3.15).

    • arcticbull a day ago

      Apparently Python uses a variety of internal representations depending on the string itself. I looked it up because I saw UTF-32 and thought there's no way that's what they do -- it's pretty much always the wrong answer.

      It uses Latin-1 for strings whose code points all fit in one byte, UCS-2 for strings confined to the basic multilingual plane (BMP), and UCS-4 only for strings containing code points outside the BMP.

      It would be pretty silly for them to explode all strings to 4-byte characters.

      • jibal a day ago

        You are correct. Discussions of this topic tend to be full of unvalidated but confidently stated assertions, like "Python 3 internally uses UTF-32." Also unjustified assertions, like the OP's claim that len("🤦🏼‍♂️") == 5 is "rather useless" and that "Python 3's approach is unambiguously the worst one". Unlike in many other languages, the code points in Python's strings are always directly O(1) indexable--which can be useful--and the subject string has 5 indexable code points. That may not be the semantics that someone is looking for in a particular application, but it certainly isn't useless. And given the Python implementation of strings, the only other number that would be useful would be the number of grapheme clusters, which in this case is 1, and that count can be obtained via the grapheme or regex modules.

      • account42 a day ago

        It conceptually uses arrays of code points, which need up to 21 bits each. Optimizing the storage to use smaller integers when possible is an implementation detail.

  • xigoi a day ago

    I prefer languages where strings are simply sequences of bytes and you get to decide how to interpret them.

    • zahlman a day ago

      Such languages do not have strings. Definitionally a string is a sequence of characters, and more than 256 characters exist. A byte sequence is just an encoding; if you are working with that encoding directly and have to do the interpretation yourself, you are not using a string.

      But if you do want a sequence of bytes for whatever reason, you can trivially obtain that in any version of Python.

      • capitainenemo 21 hours ago

        My experience personally with Python 3 (and repeated interactions with about a dozen Python programmers, including core contributors) is that Python 3 does not let you trivially work with streams of bytes, especially if you need to do character set conversions: a tiny Python 2 script that I have used for decades for conversion of character streams in terminals has proved repeatedly unportable to Python 3. The last attempt was much larger, still failed, and they thought they could probably do it, but it would require far more code and was not worth their effort.

        I'll probably just use Rust for that script if Python 2 ever gets dropped by my distro. Reminds me of https://gregoryszorc.com/blog/2020/01/13/mercurial%27s-journ...

    • afiori a day ago

      I would like a UTF-8-optimized bag of bytes where arbitrary byte operations are possible but the buffer keeps track of whether it is valid UTF-8 or not (for every edit of n bytes it should be enough to check about n+8 bytes to revalidate). Then UTF-8 encoding/decoding becomes a no-op, and UTF-8-specific APIs can quickly check whether the string is malformed or not. (A sketch of this idea follows below.)
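
      A minimal Rust sketch of that idea (TrackedBuf is hypothetical, not a real crate): a byte buffer that caches a validity flag and, because UTF-8 sequences are at most 4 bytes long, revalidates only about patch-length + 8 bytes around each edit.

        struct TrackedBuf {
            bytes: Vec<u8>,
            valid: bool, // cached: is `bytes` currently well-formed UTF-8?
        }

        fn is_continuation(b: u8) -> bool {
            b & 0b1100_0000 == 0b1000_0000
        }

        impl TrackedBuf {
            fn new() -> Self {
                TrackedBuf { bytes: Vec::new(), valid: true }
            }

            // Overwrite `len` bytes at `at` with `patch`, then revalidate a
            // window widened by up to 3 bytes on each side, enough to reach
            // plausible sequence boundaries in 4-byte-max UTF-8.
            fn splice(&mut self, at: usize, len: usize, patch: &[u8]) {
                self.bytes.splice(at..at + len, patch.iter().copied());

                let mut start = at.saturating_sub(3);
                while start < at && is_continuation(self.bytes[start]) {
                    start += 1; // move forward to a non-continuation byte
                }
                let mut end = (at + patch.len()).min(self.bytes.len());
                let cap = (end + 3).min(self.bytes.len());
                while end < cap && is_continuation(self.bytes[end]) {
                    end += 1; // absorb trailing continuation bytes
                }

                // If the buffer was valid before the edit, this small window
                // decides validity of the whole buffer; otherwise full scan.
                self.valid = if self.valid {
                    std::str::from_utf8(&self.bytes[start..end]).is_ok()
                } else {
                    std::str::from_utf8(&self.bytes).is_ok()
                };
            }

            // "Decoding" is now just a flag check (a real implementation
            // could hand out the &str without re-scanning).
            fn as_str(&self) -> Option<&str> {
                if self.valid { std::str::from_utf8(&self.bytes).ok() } else { None }
            }
        }

        fn main() {
            let mut buf = TrackedBuf::new();
            buf.splice(0, 0, "héllo".as_bytes());
            assert!(buf.as_str().is_some());
            buf.splice(1, 1, &[0xFF]); // stomp on é's lead byte
            assert!(buf.as_str().is_none());
        }
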

      • account42 a day ago

        But why care if it's malformed UTF-8? And specifically, what do you want to happen when you get a malformed UTF-8 string. Keep in mind that UTF-8 is self-synchronizing so even if you encode strings into a larger text-based format without verifying them it will still be possible to decode the document. As a user I normally want my programs to pass on the string without mangling it further. Some tool throwing fatal errors because some string I don't actually care about contains an invalid UTF-8 byte sequence is the last thing I want. With strings being an arbitrary bag of bytes many programs can support arbitrary encodings or at least arbitrary ASCII-supersets without any additional effort.
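
        For what it's worth, Rust's lossy decoding in std illustrates the self-synchronizing point: one bad byte becomes a single U+FFFD and decoding recovers immediately afterwards.

          fn main() {
              let bytes = b"caf\xE9 latte"; // stray Latin-1 'é' byte in ASCII text
              let s = String::from_utf8_lossy(bytes);
              assert_eq!(s, "caf\u{FFFD} latte"); // the rest survives intact
          }
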

        • afiori a day ago

          The main issue I can see is not garbage bytes in text but the mixing of incompatible encodings, e.g. splicing Latin-1 bytes into a UTF-8 string.

          My understanding of the current "always and only UTF-8/Unicode" zeitgeist is that it comes mostly from encoding issues, among which is the complexity of detecting encodings.

          I think that the current status quo is better than what came before, but I also think it could be improved.

    • bawolff a day ago

      Me too.

      The languages that I really don't get are those that force valid UTF-8 everywhere but don't enforce NFC. That's most of them, but it seems like the worst of both worlds.

      Non-normalized Unicode is just as problematic as non-validated Unicode, IMO.
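
      For example (a sketch assuming the unicode-normalization crate, not in the stdlib): two canonically equivalent strings compare unequal until both are normalized to NFC.

        use unicode_normalization::UnicodeNormalization;

        fn main() {
            let composed = "é";           // U+00E9
            let decomposed = "e\u{0301}"; // U+0065 + U+0301 (combining acute)
            assert_ne!(composed, decomposed); // both valid UTF-8, yet byte-wise different
            let a: String = composed.nfc().collect();
            let b: String = decomposed.nfc().collect();
            assert_eq!(a, b); // equal once both are NFC-normalized
        }
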

    • jibal a day ago

      Python has byte arrays that allow for that, in addition to strings consisting of arrays of Unicode code points.

    • account42 a day ago

      Yes, I always roll my eyes when people complain that C strings or C++'s std::string/string_view don't have Unicode support. They are bags of bytes with support for concatenation. Any other transformation isn't going to have a single "correct" way to do it, so you need to be aware of what you want anyway.

      • astrange a day ago

        C strings are not bags of bytes because they can't contain 0x00.
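
        Rust's std::ffi::CString makes the same limitation explicit: construction fails on interior NUL bytes.

          use std::ffi::CString;

          fn main() {
              assert!(CString::new("hello").is_ok());
              assert!(CString::new(&b"he\0llo"[..]).is_err()); // interior NUL rejected
          }
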