Comment by eloisant 3 days ago

102 replies

It's baffling that after 30 years, Windows is still stuck in a weird directory naming structure inherited from the '80s that no longer makes sense when nobody has floppy drives.

Octoth0rpe 3 days ago

> Windows is still stuck in a weird directory naming structure inherited from the 80's that no longer make sense when nobody has floppy drives.

I think you could make this same statement about *nix, except it's 10 years _worse_ (1970s). I strongly prefer the FHS over whatever MS thinks it's doing, but let's not pretend that the FHS isn't a pile of cruft (/usr/bin vs /bin, /etc for config, /media vs /mnt, etc.)

  • gerdesj 3 days ago

    Unix starts at root, which is how nature intended. It does not change characteristics based on media - you can mount a floppy at root if you want.

    Why get upset over /media vs /mnt? You do you, I know I do.

    For example, the Step CA docs encourage using /etc/step-ca/ (https://smallstep.com/docs/step-ca/certificate-authority-ser...) for their product's configuration. Normally I would agree, but as I am manually installing this thing myself and not following any of the usual docs, I've gone for /srv/step-ca.

    I think we get enough direction from the ... "standards" ... for Unix file system layouts that any reasonably incompetent admin can find out which one is being mildly abused today and get a job done. On Windows ... good luck. I've been a sysadmin for both platforms for roughly 30 years and Windows is even odder than Unix.

    • Wowfunhappy 3 days ago

      > Unix starts at root, which is how nature intended. It does not change characteristics based on media - you can mount a floppy at root if you want.

      Why is the root of one of my drives `/` while the roots of my other drives are subdirectories of that first drive?

      • eichin 3 days ago

        Thinking of it in terms of namespaces might help. It's not that the drive is special; it's that there's a view that starts from /, one disk filesystem happens to be dropped there, and others are dropped elsewhere. With something like initramfs there aren't any drives on /, just a chunk of RAM, though you usually pivot to a physical one later. (Many Linux-based embedded systems don't, because your one "drive" is an SD card that can't handle real use, so you just keep the "skeleton" in memory and drop various bits of eMMC or SD or whatever into the tree as convenient.)
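        A quick illustration of the "one tree, many filesystems" point (mine, not the commenter's): on Linux, `st_dev` identifies the filesystem backing a path, so sibling directories in the single / tree can sit on entirely different mounts. The specific paths below are just common examples.

```python
import os

# One tree, many filesystems: st_dev identifies the device/filesystem
# backing each path. Paths with different st_dev values live on
# different mounted filesystems, all visible under the single / tree.
for path in ("/", "/proc", "/dev"):
    st = os.stat(path)
    print(f"{path:6} device id: {st.st_dev}")

# On a typical Linux system, /proc (procfs) and /dev (devtmpfs) report
# device ids different from /, even though they appear as ordinary
# subdirectories of the root.
```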

        • Wowfunhappy 3 days ago

          I do get it, I just don't think that the UNIX way is necessarily more natural than the Windows way.

      • Dylan16807 3 days ago

        In multiple ways, / doesn't have to be one of your drives.

      • Hendrikto 2 days ago

        Because you (or your distro) configured it like that. You don’t have to do it that way.

      • TimeBearingDown 3 days ago

        Only the root of the root filesystem is /

        The point is that any filesystem can be chosen as the OS’s root.

        The root of any other filesystem - there can be multiple per drive - is wherever you tell that filesystem to be mounted, or in your automounter's special directory, usually /run/media, where it creates a directory named after a unique serial number or device path.


  • kazinator 3 days ago

    There is more pliability in the Linux ecosystem to change some of these things.

    And anyway, there has to be a naming scheme; the naming scheme is abstracted from the storage scheme.

    It's not necessarily the case that your /var and /usr are on different drives, though they can be in a given installation.

  • sli 2 days ago

    All of those are optional restrictions, not mandatory. On Windows, it's (practically) mandatory.

    Maybe some Windows wizards could get around the mandatory restrictions, but an average Linux user can get around the optional ones.

    • rusk 2 days ago

      Streaming as the de facto metaphor for file access goes back to tape drives. Random-access patterns make more sense with today's media, yet we're all still fscanf-ing.

      Of course there are alternatives but the resource-as-stream metaphor is so ubiquitous in Unix, it’s hard to avoid.

    • GoblinSlayer 2 days ago

      Drive letters are just /mnt, you can get around that, even with GUI.

      • darkwater 2 days ago

        So why does a default Windows install still use and show C:?

  • akdev1l 3 days ago

    The /usr/bin vs /bin distinction is no longer relevant, as all major distros have been usrmerged for years now, so /bin == /usr/bin (usually /bin is a symlink).
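    You can check this on your own box (editor's sketch; what it prints depends on whether your distro is usr-merged):

```python
import os.path

# On a usr-merged distro, /bin is a symlink to /usr/bin, so both names
# resolve to the same directory. On older layouts they are separate.
resolved = os.path.realpath("/bin")
print("/bin resolves to:", resolved)
print("/bin is a symlink:", os.path.islink("/bin"))
```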

notepad0x90 3 days ago

I like being able to run games from the early 2000s. Being able to write software that will still run long after you're gone used to be a thing. But here we are with Linux abandoning things like 'a.out'. Microsoft doesn't have the luxury to presume that its users can recompile software, fork it, patch it, etc. When your software doesn't work on the latest Windows, most people blame Microsoft, not the software author.

  • Gud 3 days ago

    Ok, I prefer to use software which is future compatible, like ZFS, which is 128-bit.

    “The file system itself is 128 bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so no need exists to preallocate inodes or otherwise limit the scalability of the file system when it is first created. All the algorithms have been written with scalability in mind. Directories can have up to 248 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.”

    https://docs.oracle.com/cd/E19253-01/819-5461/6n7ht6qth/inde...

    Don’t want to hit the quadrillion zettabyte limit..

    • bhaney 3 days ago

      > Directories can have up to 248 (256 trillion) entries

      It took me a minute to figure out that this was supposed to be 2^48, but even then that's ~281 trillion. What a weird time for the tera/tibi binary prefix confusion to show up, when there aren't even any units being used.
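      The slip is easy to check directly (editor's arithmetic): 2^48 is ~281.5 trillion in decimal, and "256 trillion" only works if you treat 2^48 as 256 × 2^40 and call 2^40 a "trillion".

```python
entries = 2 ** 48
print(f"{entries:,}")        # 281,474,976,710,656 -> ~281 trillion (decimal)
print(entries / 10 ** 12)    # decimal trillions: 281.474976710656
print(entries / 2 ** 40)     # "256 trillion" only if a trillion means 2^40
```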

    • chuckadams 2 days ago

      Someone did some back-of-the-napkin math and calculated that to populate every byte in a 128 bit storage pool, you'd need to use enough energy to literally boil the oceans. There was a blog post on oracle.com that went into more detail, but no link into Oracle survives more than 10 years.

  • amarant 3 days ago

    Wait, are you saying Linux broke user-space? I've completely missed this and would like to know more. May I be so bold as to request a link?

    • cesarb 3 days ago

      > > But here we are with linux abandoning things like 'a.out'.

      > I've completely missed this and would like to know more, may I be so bold as to request a link?

      "A way out for a.out" https://lwn.net/Articles/888741/

      "Linux 6.1 Finishes Gutting Out The Old a.out Code" https://www.phoronix.com/news/Linux-6.1-Gutting-Out-a.out (with links to two earlier articles)

    • PunchyHamster 3 days ago

      Linux does occasionally remove stuff that seems to have no users, and there's been no good reason to have a.out binaries since... the late '90s?

      • notepad0x90 3 days ago

        I was playing with some asm code and generating a.out with nasm, and got stuck on why it wouldn't load... turns out Linux stopped supporting it. When they say "no one uses it," they mean packages and such; they don't care about private code you have lying around and other use cases. With a widely deployed platform like Windows, they can't assume things like that. There are certainly very valid business applications that go back decades. There are literally systems out there with 20+ years of uptime.

  • simondotau 3 days ago

    I don’t like running games from the early 2000s outside of a sandbox of some description. If you disagree, it's because we don't have sandboxes which don't suck. Ideally, running old software in a sandbox on a modern OS should be borderline transparent — not like installing XP in a virtual machine.

    While I understand the appeal of software longevity, and I think it's a noble and worthy pursuit, I also think there is an under-appreciated benefit in having unmaintained software less likely to function on modern operating systems. Especially right now, where the concept of serious personal computer security for normal consumers is arguably less than two decades old.

exidy 3 days ago

Inherited from the '80s? Microsoft effectively inherited drive letters via an 8086 semi-clone of CP/M called QDOS[0], which was the basis for PC-DOS and later MS-DOS. CP/M dates back to 1974.

But Gary Kildall didn't come up with the idea of drive letters in CP/M all on his own, he was likely influenced by TOPS-10[1] and CP/CMS[2], both from the late 60s.

[0] https://en.wikipedia.org/wiki/86-DOS

[1] https://en.wikipedia.org/wiki/TOPS-10

[2] https://en.wikipedia.org/wiki/CP/CMS

dragonwriter 2 days ago

I don't particularly like the Windows naming structure, but it made just as much sense with later removable-media-in-fixed-drives systems (like optical drives) as it did with floppy drives. It maybe makes less sense now that storage is either fixed media or detachable drives, rather than some of it being removable media in fixed drives, but the period after common removable media is a lot shorter than the period after common floppy drives.

(And mostly, I'm talking about using drive letters rather than something like what Unix does. C being the first fixed-media device may seem more arbitrary now, but it was pretty arbitrary even in the floppy era.)

leptons 3 days ago

Windows can still run software from the '80s; backwards compatibility has always been a selling point for Windows, so I'd call that a win.

  • AndrewDavis 3 days ago

    Didn't Microsoft drop 16 bit application support in Windows 10? I remember being saddened by my exe of Jezzball I've carried from machine to machine no longer working.

    • mkup 3 days ago

      Microsoft dropped 16-bit application support via the built-in emulator (NTVDM) from 64-bit builds of Windows; whether that happened for you on Windows 10 or an earlier version depends on when you moved to 64-bit (in my case, it was Windows Vista). However, you can still run 16-bit apps on 64-bit builds of Windows via third-party emulators such as DOSBox and NTVDMx64.

      • lmm 3 days ago

        > you can still run 16-bit apps on 64-bit builds of Windows via third party emulators, such as DOSBox and NTVDMx64.

        Or Wine, which is less reliable but funnier.

    • notepad0x90 3 days ago

      And Linux stopped supporting 32-bit x86 around the same time, I think? (Just i386?)

      • Dylan16807 3 days ago

        Are you talking about CPU support? I installed a 32-bit program on basic Linux Mint just the other day. If I really need to load up a Pentium 4, I can deal with it being an older kernel.

  • anonymous_sorry 3 days ago

    It's very impressive indeed.

    Linux's goal is only source-code compatibility - which makes complete sense given the libre/open source origins. If the culture is one where you expect to have access to the source code for the software you depend on, why should the OS developers make the compromises needed to ensure you can still run a binary compiled decades ago?

  • chasing0entropy 3 days ago

    My original VB6 apps (mostly) still run on win11

    • mananaysiempre 3 days ago

      Hmm. IME VB6 is actually a particular pain point, because MDAC (a hodgepodge of Microsoft database-access thingies) does not install even on Windows 10, and a line-of-business VB6 app is very likely to need that. And of course you can’t run apps from the 1980s on Windows 11 natively, because it can no longer run 16-bit apps, whether DOS or Windows ones. (All 32-bit Windows apps are definitionally not from the 1980s, seeing as the Tom Miller’s sailboat trip that gave us Win32 only happened in 1990. And it’s not the absence of V86 mode that’s the problem—Windows NT for Alpha could run DOS apps, using a fatter NTVDM with an included emulator. It’s purely Microsoft’s lack of desire to continue supporting that use case.)

      • p_ing 3 days ago

        > It’s purely Microsoft’s lack of desire to continue supporting that use case.

        NTVDM leverages virtual 8086 mode, which is unavailable in long mode.

        NTVDM would need to be rewritten. With alternatives like DOSBox, I can see why MSFT may not have wanted to dive into that level of backwards compat.

      • drxzcl 3 days ago

        Wait, what's the story of the sailboat trip? My searches are coming up empty, but it sounds like a great story.

        • mananaysiempre 3 days ago

          Yeah, I was surprised by the lack of search results when I was double-checking my post too, but apparently I wasn’t surprised enough, because I was wrong. I mixed up two pieces of Showstopper!: chapter 5 mentions the Win32 spec being initially written in two weeks by Lucovsky and Wood

          > Lucovsky was more fastidious than Wood, but otherwise they had much in common: tremendous concentration, the ability to produce a lot of code fast, a distaste for excessive documentation and self-confidence bordering on megalomania. Within two weeks, they wrote an eighty-page paper describing proposed NT versions of hundreds of Windows APIs.

          and chapter 6 mentions the NTFS spec being initially written in two weeks by Miller and one other person on Miller’s sailboat.

          > Maritz decided that Miller could write a spec for NTFS, but he reserved the right to kill the file system before the actual coding of it began.

          > Miller gathered some pens and pads, two weeks’ worth of provisions and prepared for a lengthy trip on his twenty-eight-foot sailboat. Miller felt that spec writing benefited from solitude, and the ocean offered plenty of it. [...] Rather than sail alone, Miller arranged with Perazzoli, who officially took care of the file team, to fly in a programmer Miller knew well. He lived in Switzerland.

          > In August, Miller and his sidekick set sail for two weeks. The routine was easy: Work in the morning, talking and scratching out notes on a pad, then sail somewhere, then talk and scratch out more notes, then anchor by evening and relax.

          (I’m still relatively confident that the Win32 spec was written in 1990; at the very least, Showstopper! mentions it being shown to a group of app writers on December 17 of that year.)

ForOldHack 3 days ago

In the '80s, running DOS 3.1 on an IBM network, I was networking dual-floppy PCs, and with testing got drives '!', '@', '#', '^' working. So I was able to use 26 floppies, 24 of them non-local... It was all removed with the next release, 3.2, so I would make some bets about NT networking and its NetBIOS roots.

I was inspired by Dr. Seuss's "On Beyond Zebra!"

naikrovek 3 days ago

It’s not baffling at all. They strongly value maintaining backwards compatibility guarantees.

For example, Windows 11 has no backwards-compatibility guarantees for DOS, but the operating systems that do carry such guarantees still honor them.

Enterprises need Microsoft to maintain these for as long as possible.

It is AMAZING how much inertia software has that hardware doesn't, given how difficult each is to create.

  • monocasa 2 days ago

    They've stopped caring as much about backwards compat.

    Windows 10 no longer plays the first Crysis without binary patches, for instance.

    • krige 2 days ago

      Things that go through the proper channels are usually compatible. Crysis was never the most stable of games, and IIRC it used 3DNow!, which is deprecated - but not by Windows.

      As a counter-anecdata, last week I ran Galapagos: Mendel's Escape with zero compat patches or settings, that's a 1997 3D game just working.

      • account42 2 days ago

        > Things that go through the proper channels are usually compatible.

        But that's a pretty low bar - previously Windows went to great lengths to preserve backwards compatibility even for programs that are out of spec.

        If you just care about keeping things working if they were done "correctly" then the average Linux desktop can do that too - both for native Linux programs (glibc and a small list of other base system libraries have strong backwards compatibility) as well as for Windows programs via Wine.

        • krige 2 days ago

          On paper, maybe. In practice there's currently at least one case that directly affects me where Wine-patched Windows software still works on Windows thanks to said patch... but doesn't work under Wine anymore.

    • 7bit 2 days ago

      There's a big difference between enterprise-level software and games.

      Windows earns money mainly in the enterprise sector, so that's where the backwards-compatibility effort is. Not gaming. That's just a side effect.

      Anecdotally, you can run 16-bit games (Swing; 1997) on Windows, but only if you patch 2-3 DirectX-related files.

      • monocasa 2 days ago

        The prototypical examples given in the past were for applications like Sim City, hardly bastions of enterprise software.

        And with Win11, Microsoft stopped shipping 32-bit versions of the OS, and since they don't support 16-bit mode on 64-bit OSes, you actually can't run any 16-bit games at all.

  • wizzwizz4 3 days ago

    The 3.5mm audio jack is 75 years old, but electrically-compatible with a nearly 150-year-old standard.

    • anthk 2 days ago

      Victorian teletypes can be hooked to a serial port with a trivial adapter, at least well enough to use CP/M and most single-case OSes.

      Also, some programming languages have a setting to export code compatible with just Baudot characters: http://t3x.org/nmhbasic/index.html

      So, you could feed it from paper tape and maybe Morse too.

    • naikrovek 3 days ago

      Yeah, speakers haven’t changed enough to make the 3.5mm connector obsolete.

      • cesarb 3 days ago

        Many new devices use a 2.5mm audio jack instead of the 3.5mm audio jack.

        • naikrovek 2 days ago

          Yes, but that doesn’t obsolete the 3.5mm jack or the 1/4” jack. It’s just a different form factor of the same thing.

kqr 3 days ago

Wait 'til you hear about the PDP-11 emulator of a CPU it is running on.

BobbyTables2 3 days ago

Yeah, try explaining “drive C:” to a kid these days, and why it isn’t A: or B: …

Of course, software developers are still stuck with 80-column conventions even though we have 16:9 4K displays now... Didn’t that come from punch cards???

  • strogonoff 3 days ago

    Come for punchcards, stay for legibility.

    80 characters per line is an odd convention in the sense that it originated from a technical limitation, but is in fact a rule of thumb perfectly familiar to any typesetting professional from long before personal computing became widespread.

    Remember newspapers? Laying the text out in columns[0] is not a random quirk or result of yet another technology limitation. It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.

    The reason is that when each line is shorter, the entire thing becomes easier to read - even accounting for the legibility hit caused by hyphenation.

    Up to a point, of course. That point may differ depending on the medium and the nature of the material: newspapers, given they deal with solid plain text and have other layout concerns, limit a line to around 50 characters; a book may go up to 80 characters. Given a program is not a relaxed fireside reading, I would place it closer to the former, but there are also factors and conventions that could bring acceptable line length up. For example, indentation and syntax highlighting, or typical identifier length (I’m looking at you, CNLabelContactRelationYoungerCousinMothersSiblingsDaughterOrFathersSistersDaughter), or editor capability to wrap lines nicely[1].

    Finally, since the actual technical limitation is gone, it is actually not such a big deal to violate the line length rule on occasion.

    [0] Relatedly, codebases roughly following the 80 character line length limitation unlock more interesting columnar layouts in editors and multiplexers.

    [1] Isn’t the auto-wrap capability in today’s editors good enough that restricting line length is pointless at the authoring stage? Not really, and (arguably) especially not in case of any language that relies on indentation. Not that it could not be good enough, but considering code becomes increasingly write-only it seems unlikely we will see editors with perfect, context-sensitive, auto-wrap any time soon.

    • naikrovek 3 days ago

      I’m very sure this is a myth. Like any good myth, it makes sense on the surface but holds zero water once you look close.

      Code isn’t prose. Code doesn’t always go to the line length limit then wrap, and prose doesn’t need a new line after every sentence. (Don’t nitpick this; you know what I’m saying)

      The rules about how code and prose are formatted are different, so how the human brain finds the readability of each is necessarily different.

      No code readability studies specifically looking for optimal line length have been done, to my knowledge. It may turn out to be the same as prose, but I doubt it. I think it will be different depending on the language and the size of the keywords in the language and the size of the given codebase. Longer keywords and method/function names will naturally lead to longer comfortable line lengths.

      Line length is more about concepts per line, or words per line, than it is characters per line.

      The 80-column limit was originally a technical one only. It has remained because of backwards compatibility and tradition.

    • PaulDavisThe1st 3 days ago

      When I read text I prefer it to use the lessons

      of typography and not be overly wide, lest my saccadic

      motion leads my immersion and comprehension astray.

          However when I read code I do not want to scan downwards to complete the semantics of a given expression because that will also break my comprehension and so when a line of code is long I'd prefer for it to remain long unless there are actually multiple clauses
      
          and other conditionally chained
      
          semantic elements
      
          that are more easily read alone
      • iknowstuff 3 days ago

        oof this looks awful on mobile, with extra line breaks

    • Xss3 3 days ago

      80 chars per line was invented when languages used shortened commands, though. Nowadays 120 is more appropriate, especially in PowerShell. Not so much in bash, where commands are short - 80 can stay alive there!

    • justsomehnguy 3 days ago

      > It is the same reason a good blog layout sets a conservative maximum width for when it is read on a landscape oriented screen.

      Except 99.9% of the time it becomes 50 characters in a 32pt font, which occupies ~25% of the horizontal space on a 43".

      "Good" my ass.

    • int_19h 2 days ago

      The right answer to this is that IDEs should wrap lines automatically according to the actual dimensions of the editor, but they need to understand the syntax of the language they are wrapping to do that right.

    • UltraSane 2 days ago

      The 80 char max line width convention makes no sense with modern monitor resolutions and ultrawides being very common.

  • Sharlin 3 days ago

    It did, but 80 columns also pretty closely matches the 50ish em/70ish character paragraph width that’s usually recommended for readability. I myself wouldn’t go much higher than 100 columns with code.

  • ahoef 3 days ago

    While 80 characters is obviously quite short, my experience is that longer line lengths result in much less readable code. You have to try to be concise on shorter lines, with better phrasing.

  • account42 2 days ago

    > Of course software developers are still stuck with 80 column conventions

    Speak for yourself, all my projects use at least 100 if not 120 column lines (soft limit only).

    Trying to keep lines at a readable length is still a valid goal, though, even without the original technical limitations - although the bigger win there is to keep expressions short, not to just wrap them into shorter lines.

  • tracker1 2 days ago

    If you don't have some level of arbitrary limit on line length, it becomes that much easier to sneak in malicious code prefixed by a bunch of whitespace.

    Linting and autoformatters help here... allowing any line length in code is just asking to get pwned at some point.
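    A minimal sketch of the kind of lint check described above (editor's illustration; the 100-column limit and 40-space indentation threshold are arbitrary choices, not from any real linter):

```python
MAX_LEN = 100     # flag lines longer than this
MAX_INDENT = 40   # flag code hidden behind this much leading whitespace

def suspicious_lines(source: str):
    """Yield (line_number, reason) pairs for lines worth a closer look."""
    for num, line in enumerate(source.splitlines(), start=1):
        if len(line) > MAX_LEN:
            yield num, f"line exceeds {MAX_LEN} characters"
        indent = len(line) - len(line.lstrip(" "))
        if indent > MAX_INDENT and line.strip():
            yield num, f"code hidden behind {indent} spaces of indentation"

# A line pushed far right by whitespace, and an overlong line, both get flagged.
sample = "x = 1\n" + " " * 60 + "import os  # sneaky\n" + "y" * 150 + "\n"
for num, reason in suspicious_lines(sample):
    print(num, reason)
```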

  • int_19h 2 days ago

    Try explaining /usr to a kid these days.

    "That obviously means Users, so that's where the home directories are, right?"

    "Well, no. And it actually means Unix System Resources"

    (but historically it was in fact "user", just not in that sense)

    I'm sure we'll eventually backronym C: as well.

  • perching_aix 3 days ago

    It really wouldn't be much of a conversation. Historical conventions are a thing in general. Just think of the direction of electron flow.

    > even though we have 16x9 4K displays now

    Pretty much no normal person uses those at 100% scaling, though, so unless you're thinking of the fellas who use a TV for a monitor, that doesn't actually help much:

    - 100% scaling: 6 panels of 80 columns fit, no px go to waste

    - 125% scaling: 4 panels of 80 columns fit, 512 px go to waste (64 cols)

    - 150% scaling: 4 panels of 80 columns fit, no px go to waste

    - 175% scaling: 3 panels of 80 columns fit, 274 px go to waste (34 cols)

    - 200% scaling: 3 panels of 80 columns fit, no px go to waste

    This sounds good until you need any additional side panels. Think line numbers, scrollbars, breakpoint indicators, or worse: minimaps, and a directory browser. A minimap is usually 20 cols/panel, a directory browser is usually 40 cols. Scrollbar and bp-indicator together 2 cols/panel. Line numbers, probably safe to say, no more than 6 cols/panel.

    With 2 panels, this works out to an entire additional panel in overhead, so out of 3 panels only 2 remain usable. That's the fate of the 175% and 200% options. So what is the "appropriate" scaling to use?

    Well PPI-wise, if you're rocking a 32" model, then 150%. If a 27" model, then 175%. And of course, given a 22"-23"-24" unit, then 200%. People of course get sold on these for the "additional screen real estate" though, so they'll instead sacrifice seeing the entire screen at once and will put on their glasses. Maybe you prefer to drop down by 25% for each of these.

    All of this is to say, it's not all that unreasonable. I personally feel a bit more comfortable with a 100 col margin, but I do definitely appreciate when various files nicely keep to the 80 col mark, they're a lot nicer to work with side-by-side.
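    The panel arithmetic above can be reconstructed like so (editor's sketch; the 3840 px display width and the 8-logical-px glyph width are my assumptions, inferred from the commenter's numbers rather than stated by them):

```python
DISPLAY_PX = 3840   # assumed 4K display width
GLYPH_PX = 8        # assumed monospace glyph width at 100% scaling
PANEL_COLS = 80

results = {}
for scale in (1.00, 1.25, 1.50, 1.75, 2.00):
    logical_px = int(DISPLAY_PX / scale)       # effective width after scaling
    cols = logical_px // GLYPH_PX              # total character columns
    panels = cols // PANEL_COLS                # full 80-column panels
    waste_cols = cols - panels * PANEL_COLS    # leftover columns
    results[scale] = (panels, waste_cols)
    print(f"{scale:.0%}: {panels} panels, {waste_cols} cols wasted")
```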

  • SomeUserName432 2 days ago

    You can map hard drives to A: and B: just fine.

    This will generally work with everything using the Win32 C API.

    You will, however, run into weird issues when using .NET, with sudden invalid paths etc.

  • UltraSane 2 days ago

    The fact that modern interactive command shells are based on virtual teletype terminals is just absurd when you think about it

  • mavhc 3 days ago

    Try explaining files to a kid these days

zoeysmithe 2 days ago

I mean, it's a successful commercial product because it doesn't break things, at least not that often. You can run some really old software on Windows. It's kind of taken for granted, but this is just not the norm in most industries.

As for baffling: I mean, I type things like 'grep' every day, which is a goofy word. I'm not even going to go into all the legacy stuff Linux presents and how Linux, like Windows, tries hard not to break userland software.

PunchyHamster 3 days ago

I had a game partition mounted as a subpath on a drive, and it just didn't work well with some apps.

Some apps (in this case Steam) don't ask "what is the free space at the current path" (despite, say, GetDiskFreeSpaceExW accepting a full path just fine); they cut it down to the drive letter, which causes them to display the free space of the root drive, not of the actual directory they are using, which in my case was mounted as a different partition.
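Per-path free-space queries are exactly what portable APIs expose, too. A small cross-platform illustration (mine, not Steam's code): Python's shutil.disk_usage takes an arbitrary path and reports the filesystem actually backing it, rather than a drive root.

```python
import shutil

# Query free space for the filesystem backing a specific path, not a
# drive letter. On Windows this respects directory mount points; on
# Unix it reflects whatever filesystem is mounted at that path.
usage = shutil.disk_usage("/")   # any path works, e.g. a game directory
print(f"total: {usage.total}  used: {usage.used}  free: {usage.free}")
```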

fortran77 3 days ago

It's baffling that after 59 years, Unix is still stuck in a weird directory naming structure inherited from the late '60s that no longer makes sense when nobody has floppy drives.