jasonwatkinspdx 8 hours ago


Not an expert, but I happened to read about some of the history of this a while back.

ASCII has its roots in teletype codes, which were a development from telegraph codes like Morse.

Morse code is variable length, which made automatic telegraph machines and teletypes awkward to implement. The solution was the 5-bit Baudot code: using a fixed-length code simplified the devices. Operators could type Baudot code with one hand on a five-key keyboard, and part of the code's design was to minimize operator fatigue.

Baudot code is why we refer to the symbol rate of modems and the like in baud, btw.

Anyhow, the next change came when, instead of a telegraph machine signaling directly on the wire, a typewriter-style keyboard was used to create a punched tape of codepoints, which would then be loaded into the telegraph machine for transmission. Since the keyboard was now decoupled from the wire code, there was more flexibility to add additional code points. This is where stuff like "Carriage Return" and "Line Feed" originates. The scheme was standardized by Western Union and then internationally.

By the time we get to ASCII, teleprinters are common, and the early computer industry adopted punched cards pervasively as an input format. And they initially did the straightforward thing of just using the telegraph codes. But then someone at IBM came up with a new scheme that would be faster when using punch cards in sorting machines. And that became ASCII eventually.

So, zooming out, the story is that we started with binary codes and then adopted new schemes as technology developed. All of this happened long before the digital computing world settled on 8-bit bytes as a convention. ASCII as bytes is just a practical compromise between the older teletype codes and the newer convention.

pcthrowaway 7 hours ago

> But then someone at IBM came up with a new scheme that would be faster when using punch cards in sorting machines. And that became ASCII eventually.

Technically, the punch card processing technology was patented by inventor Herman Hollerith in 1884, and the company he founded wouldn't become IBM until 40 years later (though it was folded with three other companies into the Computing-Tabulating-Recording Company in 1911, which would then become IBM in 1924).

To be honest though, I'm not clear how ASCII came from anything used by the punch card sorting machines, since it wasn't proposed until 1961 (by an IBM engineer, but 32 years after Hollerith's death). Do you know where I can read more about the progression here?

  • zokier 7 hours ago

    IBM also notably used EBCDIC instead of ASCII for most of its systems.

    • timsneath 4 hours ago

      And just for fun, they also support what must be the weirdest encoding system of all -- UTF-EBCDIC (https://www.ibm.com/docs/en/i/7.5.0?topic=unicode-utf-ebcdic).

      • kstrauser 3 hours ago

        Post that stuff with a content warning, would you?

        > The base EBCDIC characters and control characters in UTF-EBCDIC are the same single byte codepoint as EBCDIC CCSID 1047 while all other characters are represented by multiple bytes where each byte is not one of the invariant EBCDIC characters. Therefore, legacy applications could simply ignore codepoints that are not recognized.

        Dear god.

        • necovek 2 hours ago

          That says roughly the following when applied to UTF-8:

          "The base ASCII characters and control characters in UTF-8 are the same single byte codepoint as ISO-8859-1 while all other characters are represented by multiple bytes where each byte is not one of the invariant ASCII characters. Therefore, legacy applications could simply ignore codepoints that are not recognized."

          (I know nothing of EBCDIC, but this does seem to mirror the UTF-8 design.)
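
          To make the parallel concrete, here's a minimal Python sketch (just an illustration, not from the IBM doc) of the UTF-8 property being described: ASCII characters keep their single-byte values, every byte of a multi-byte sequence has the high bit set, and so a legacy ASCII-only application can skip bytes it doesn't recognize without corrupting the ASCII content.

              # ASCII characters encode to their single-byte codepoints, while
              # every byte of a multi-byte sequence is >= 0x80, so no byte of
              # such a sequence can be mistaken for an ASCII character.
              for s in ["A", "é", "€", "A€B"]:
                  print(f"{s!r:8} -> {s.encode('utf-8').hex(' ')}")
              # 'A'      -> 41
              # 'é'      -> c3 a9
              # '€'      -> e2 82 ac
              # 'A€B'    -> 41 e2 82 ac 42

              # A legacy ASCII-only application can simply drop the bytes it
              # doesn't recognize (anything >= 0x80) and still see the ASCII:
              data = "A€B".encode("utf-8")
              print(bytes(b for b in data if b < 0x80).decode("ascii"))  # "AB"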