pron a day ago

> we already have enough unsafe languages as it is

By that logic, we definitely have enough safe languages as it is, as there are many more. But this safe/unsafe dichotomy is silly, and is coloured by languages that are unsafe in some particular ways.

1. Memory safety is important because memory-safety violations are a common cause of dangerous security vulnerabilities. But once you remove out-of-bounds access, as Zig does, memory safety doesn't even make it into the top 5 (https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html). In other words, the same logic that says we should focus on safety would lead us to conclude that we should focus on something else.

2. Memory safety has a cost. To get it, you have to give up something else (there could even be a cost to correctness). That means that you have to consider what you're getting and what you're losing in the context of the domain you're targeting, which is not the same for all languages. C++, with its "zero-cost abstractions", believed it could be everything for everyone. That turned out not to be the case at all, and Zig is a very different language, with different goals, than C++ originally had.

Given Zig's safety guarantees (which are stronger than C++'s), and given its goals (which are different from C++'s), the question should be what we should be willing to give up to gain safety from use-after-free. Would more safety be better if it cost nothing? Of course, but that's not an option. Even Java and Rust could prevent many more dangerous bugs - including those that are higher risk than use-after-free - if they had more facilities like those of ATS or Idris. But they don't, because their designers think that the gains wouldn't be worth the cost.

If you don't say what Zig programmers should give up to gain more safety, saying "all new languages should be memory-safe" is about as meaningful as saying we should write fewer bugs. That's a nice sentiment, but how and at what cost?

pjmlp a day ago

We actually already have enough safe languages as well.

I am a firm believer in the Xerox PARC vision of computing, and I think the only reasons we aren't there yet are politics, management's unwillingness to fund doing the right thing and push it into the market, the constant focus on shareholders and the next quarter, and naturally programming-language religion.

We were already headed in the right direction with languages like Modula-3 and Active Oberon, following up on Cedar's influences; unfortunately, that isn't how the industry goes.

  • pron a day ago

    But software isn't developed for its own sake. It's built to serve some purpose, and it's through its purpose(s) that the selection pressures work. It's like Betamax fans saying that people were wrong to prefer a longer recording time over better picture quality. It's not enough to say that you like some approach better, or even to claim that some untaken path would yield a more desirable outcome. You need to show that it actually works in the real world, with all of its complex variables. For example, in the nineties I worked on safety-critical software in Ada, but we ended up dumping it in favour of C++. It's not because we didn't recognise Ada's advantages, but because, in addition to those advantages over C++, it also had some very significant disadvantages, and in the end C++ allowed us to do what we were supposed to do better. Ada's build times alone meant that we could write and run fewer tests, which hurt the software's correctness more than the language's advantages helped. We also ended up spending more time understanding the intricacies of the language, leaving us less time to think about the algorithm.

    • pjmlp a day ago

      Ada was impacted by the tooling price and high demands on developer workstations.

      Rational started as a company selling Ada machines, which didn't have such issues with compilation times, but again it comes down to the reasons I listed for why the mainstream keeps ignoring such tools until, finally, governments step in.

  • ksec a day ago

    > vision of Xerox PARC for computing

    What is that in relation to Zig and memory safety? Am I missing some context?

    • pjmlp a day ago

      Smalltalk, Interlisp-D, and Mesa/Cedar as the languages for full graphical workstations.

      Instead we got UNIX and C.

      • pron 5 hours ago

        We also got Java and Python (and VB for a while), which means there is no intrinsic, irrational bias against those approaches. A romantic view of those languages tends to ignore their serious shortcomings at the time they were introduced. It's like claiming the market was irrational when it preferred VHS to Betamax despite the latter's better picture quality, while neglecting to mention that Betamax had a shorter recording time, which mattered more to more people. When comparing two things, it's not enough to mention the areas where X is better than Y; you also need to consider those where X is worse.

tialaramex a day ago

> there could even be a cost to correctness

Notice that this cost, which proponents of Zig scoff at just like C++ programmers before them, is in fact the price of admission. "OK, we're not correct but..." is actually the end of the conversation. Everybody can already do "Not correct"; we had "Not correct" before there was any program at all, so all effort expended on a program was wasted unless you're correct. Correctness isn't optional.

  • pron a day ago

    It isn't optional, and yet it's also not at any cost, or we'd all be programming in ATS/Idris. From those languages' vantage point, the guarantees Rust makes are almost indistinguishable from C. Yet no one says, "the conversation is over unless we all program in languages that can actually guarantee correctness" (rather than one that guarantees the lack of the eighth most dangerous software weakness). Why? Because it's too expensive.

    A language like Rust exists precisely because correctness isn't the only concern, as most software is already written in languages that make at least as many guarantees. Rust exists because some people decide they don't want to pay the price other languages take in exchange for their guarantees, but they can afford to pay Rust's price. But the very same reasoning applies to Rust itself. If Rust exists because not all tradeoffs are attractive to everyone, then clearly its own tradeoffs are not attractive to everyone.

    The goal isn't to write the most correct program; it's to write the most correct program under the project's budget and time constraints. If you can't do it in those constraints, it doesn't matter what guarantees you make, because the program won't exist. If you can meet the constraints, then you need to ask whether the program's correctness, performance, user-friendliness etc. are good enough to serve the software's purpose.

    And that's how you learn what software correctness researchers have known for a long time: sometimes increasing correctness guarantees can have unintuitive cost/benefit interactions, to the point where they may even end up harming correctness.

    There are similar unintuitive results in other disciplines. For example, in software security there's what I call the FP/FN paradox. It's better to have more FN (false negatives, i.e. let some attacks go through) than more FP (false positives, i.e. block interactions that aren't attacks) because FPs are more likely to lead to misconfiguration or even to abandonment of the security mechanism altogether, resulting in weaker security. So, in software security it's a well known thing that to get better security you sometimes need to make fewer guarantees or try less hard to stop all attacks.

    • Ygg2 18 hours ago

      > It isn't optional, and yet it's also not at any cost, or we'd all be programming in ATS/Idris.

      In a better, saner world, we'd be writing Ada++, not C++. However, we don't live in a perfect world.

      > The goal isn't to write the most correct program; it's to write the most correct program under the project's budget and time constraints.

      The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.

      This goal can be reached by learning to write Rust; practice makes perfect.

      If GC is acceptable or you need lower compilation times, then yes, go and write your code in C#, Java, or JavaScript.

      • pron 8 hours ago

        > In a better, saner world, we'd be writing Ada++, not C++.

        As someone who worked on safety-critical air-traffic-control software in the nineties, I can tell you that our reasons for shifting to C++ were completely sane. Ada had some correctness advantages compared to C++, but also disadvantages. It had drastically slower build times, which meant we couldn't test the software as frequently, and the language was so complicated that we had to spend more time digging into its minutiae and less time thinking about the algorithm (C++ was simpler back then than it is now). When Java became good enough, we switched to Java.

        Build times and language complexity are important for correctness, and because of them, we were able to get better correctness with C++ than with Ada. I'm not saying this is universal and always the case, but the point is that correctness is impacted by many factors, and different projects may achieve higher correctness in different ways. Accepting longer build times and a more complex language in exchange for fewer use-after-free bugs may be a good tradeoff for the correctness of some projects, and a bad one for others.

        > If GC is acceptable or you

        BTW, a tracing GC - whose costs are now virtually entirely limited to a higher RAM footprint - is acceptable much more frequently than you may think. Sometimes, without programmers being aware of it, languages like C, C++, Rust, or Zig sacrifice CPU to reduce footprint, even when this tradeoff doesn't make sense. I would strongly recommend watching this talk (from the 2025 International Symposium on Memory Management) and the following Q&A about the CPU/footprint tradeoff in memory management: https://www.youtube.com/watch?v=mLNFVNXbw7I

      • CRConrad 2 hours ago

        > The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.

        ...to the extent possible within their project budget. Otherwise the product would — as GP already pointed out — not exist at all, because the project wouldn't be undertaken in the first place.

        > This goal can be reached by learning to write Rust; practice makes perfect.

        Pretty sure it could (at least) equally well be reached by learning to write Ada.

        This one-note Rust cult is really getting rather tiresome.

        • Ygg2 an hour ago

          > > The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.

          > ...to the extent possible within their project budget.

          Sure, but when other engineers discover that shit caused us many defects (e.g., asbestos as a fire insulator), they don't turn around and say, "Well, asbestos sure did cause a lot of cancer, but Cellulose Fibre doesn't shield us from neutron radiation. So it won't be preventing all cancers. Ergo, we are going back to asbestos."

          And then you have team Asbestos and team Lead Paint quarrelling over who has more uses.

          That's my biggest problem. The cyclic, Fad Driven Development that permeates software engineering.

          > Pretty sure it could (at least) equally well be reached by learning to write Ada.

          Not really. Ada isn't that memory safe. It mostly relies on runtime checking [1]. You need to use formal proofs with Ada SPARK to actually get memory safety on par with Rust.

          > Pretty sure it could (at least) equally well be reached by learning to write Ada.

          See above. You need Ada with SPARK. At that point you get two files for each subprogram, much like .c/.h: one with the contract carrying the proof obligations, and one with the implementation. For example:

              -- increment.ads - the spec, with the proof contracts
              procedure Increment
                  (X : in out Integer)
              with
                Global  => null,
                Depends => (X => X),
                Pre     => X < Integer'Last,
                Post    => X = X'Old + 1;

              -- increment.adb - the body, i.e. the program
              procedure Increment
                (X : in out Integer)
              is
              begin
                X := X + 1;
              end Increment;

          But you're way past what you call programming, and are now entering proof theory.
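
          For comparison, here is a rough Rust counterpart (my own sketch, not a one-to-one translation, and Rust has no built-in contract syntax): the Pre/Post conditions become runtime checks rather than statically discharged proofs, which is roughly the gap between plain Rust and SPARK.

              // increment.rs - a hypothetical sketch; the contracts become
              // runtime checks rather than proof obligations.
              fn increment(x: &mut i32) {
                  // checked_add returns None on overflow, mirroring
                  // `Pre => X < Integer'Last`, but only at run time.
                  *x = x.checked_add(1).expect("increment would overflow");
              }

              fn main() {
                  let mut x = 41;
                  increment(&mut x);
                  assert_eq!(x, 42); // mirrors `Post => X = X'Old + 1`
              }

          Rust gives you memory safety without the proof step, but functional contracts like these stay as runtime checks unless you bring in a prover.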

          [1] https://ajxs.me/blog/How_Does_Adas_Memory_Safety_Compare_Aga...

Ygg2 18 hours ago

> Given Zig's safety guarantees (which are stronger than C++'s), and given its goals (which are different from C++'s), the question should be what we should be willing to give up to gain safety from use-after-free. Would more safety be better if it cost nothing?

The problem with this statement is that without a memory safety invariant your code doesn't compose. Some code might assume no UAF while other parts allow it, and you'd have a mismatch. Just like the borrow checker is viral, so is the unsafety.
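
To make the composition point concrete, here is a minimal Rust sketch (my own illustration, not taken from Zig or the thread): the compiler enforces the no-dangling-reference invariant across the function boundary, so callers don't have to re-audit the callee's assumptions.

    // The callee hands back a reference tied to the caller's string.
    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    fn main() {
        let s = String::from("hello world");
        let word = first_word(&s);
        println!("{word}"); // fine: `s` is still alive here
        // If `word` were made to outlive `s` (e.g. returned from a scope
        // that drops `s`), the compiler rejects the program with
        // error[E0597]: `s` does not live long enough.
        // Without such an invariant, every caller has to re-check the
        // callee's assumptions by hand - that's how the unsafety spreads.
    }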

> If you don't say what Zig programmers should give up to gain more safety, saying "all new languages should be memory-safe" is about as meaningful as saying we should write fewer bugs.

The goal of all engineering disciplines, including software, should be a minimization of errors and defects.

Here is how engineering works in any non-computer field. You build something. You see where it breaks; you try to build it again within time and budget constraints. Eventually you discover certain laws and rules. You learn them and commit them to a shared repository of knowledge. You work hard to codify those laws and rules into your tools and practice (sometimes via actual government regulation). Then you build again, with all the previous rules, tools, and accumulated knowledge.

How it works in tech: You build something. See where it breaks, declare that whoever built it was a cream-for-brains moron and that you can do it better and cheaper. Completely forget what you learned building the previous iteration. See where it breaks. Blame the tools for the failure; remove any form of safety. Project cancelled due to excessive deaths. Bemoan the lack of mental power in newer hires or the lack of mental swiftness in older hires. Go to step 1.

You'll notice a stark contrast between Engineering and Computer Tech. Computer tech is pop culture. It's a place where people wage wars over whether language X or language Y is better. How many times has the programming trend swung between static and dynamic typing? How many times has programming learned a valuable lesson, only for everyone to forget it until, decades later, another language resurrected it?

Ideally, each successive language would bring us closer and closer to minimizing defects, with more (types of) safety and better guarantees. Is Rust a huge leap compared to Idris? No, but it's better than Ada at memory safety, that's for sure.

But it has managed to capture a lot of attention, and it is a much stricter language than many others. It's a step towards the ideal. And how do programmers react to it? With disgust and a desire for less safety.

Sigh. I guess we deserve all the ridicule we can get.

  • pron 9 hours ago

    > The problem with this statement is that without a memory safety invariant your code doesn't compose

    Yes, but that holds for any correctness property, not just the 0.0001% of them that memory-safe languages guarantee. That's why we have bugs. The reason memory safety is a focus is that out-of-bounds access is the leading cause of dangerous vulnerabilities.

    > The goal of all engineering disciplines, including software, should be a minimization of errors and defects.

    Yes, but practical minimisation, not hypothetical minimisation, i.e. how do I get the fewest bugs while staying within all my constraints, including budget. Like I said, a language like Rust exists because minimisation of errors is not the only constraint, because if it were, there are already far more popular languages that do just as much.

    > You'll notice a stark contrast between Engineering and Computer Tech.

    I'm not sure I buy this, because physical, engineered objects break just as much as software does, certainly when weighted by the impact of the failure. As for learning our lessons, I think we do when they are actually real. Software is a large and competitive economic activity, and where there's a real secret to more valuable software, it spreads like wildfire. For example, high-level programming languages spread like wildfire; unit tests and code review did, too. As for static vs dynamic typing, the studies on the matter were inconclusive except in certain cases such as JS vs TS; and guess what? TS has spread very quickly.

    The selective pressures are high enough, and we see how well they work frequently enough that we can actually say that if some idea doesn't spread quickly, then it's likely that its impact isn't as high as its fans may claim.

    > And how do programmers react to it? With disgust and a desire for less safety.

    I don't think so. In such a large and competitive economic activity, the assumption that the most likely explanation for something is that the majority of practitioners are irrational seems strange to me. Rust has had some measure of adoption, and the likeliest explanation for why it doesn't have more is the usual one for any product: it costs too much and delivers too little.

    Let's say that the value, within memory safety, is split 70-30 between spatial and temporal safety; you know what, let's say 60-40. If I can get 60% of Rust's value for 10% of Rust's cost, that's a very rational thing to do. I may even be able to translate my savings into an investment in correctness that is more valuable than preventing use-after-free.

    • Ygg2 an hour ago

      > Yes, but practical minimisation, not hypothetical minimisation, i.e. how do I get the fewest bugs while staying within all my constraints, including budget. Like I said, a language like Rust exists because minimisation of errors is not the only constraint, because if it were, there are already far more popular languages that do just as much.

      Rust achieves practical minimization, if not outright eradication, of a whole set of errors. And not just memory-safety errors.

      > Like I said, a language like Rust exists because minimisation of errors is not the only constraint, because if it were, there are already far more popular languages that do just as much.

      The reason Rust exists is that the field hasn't matured enough to accept better engineering practices. If everyone could write and think in a pre/post/invariant way, we'd see a lot fewer issues.

      > I'm not sure I buy this, because physical, engineered objects break just as much as software does, certainly when weighted by the impact of the failure.

      Dude, the front page was about how the Comet AI browser can be hijacked by a web page and ordered to empty your bank account. That's like your fork deciding to gut you like a fish.

      > the assumption that the most likely explanation for something is that the majority of practitioners are irrational seems strange to me.

      Why? Just because you are intelligent doesn't mean you are rational. Plenty of smart people go bonkers. And looking at the state of the field as a whole, I'd have to ask for proof it's rational.