Comment by pjmlp 2 days ago

I mean those and other ones, we already have enough unsafe languages as it is.

C++ is doing great, despite all its warts and unsafety, thanks to compiler frameworks like GCC and LLVM, the games industry, GPGPU, and the Khronos APIs.

Even if C++ loses everywhere else, it has enough industry mindshare to keep being relevant.

The same applies to C, in the context of UNIX clones, POSIX, Khronos, and embedded.

Matching Modula-2 or Object Pascal in safety, with a C-like syntax, isn't enough.

pron a day ago

> we already have enough unsafe languages as it is

By that logic, we definitely have enough safe languages as it is, as there are many more. But this safe/unsafe dichotomy is silly, and is coloured by languages that are unsafe in some particular ways.

1. Memory safety is important because memory-safety violations are a common cause of dangerous security vulnerabilities. But once you remove out-of-bounds access, as Zig does, memory safety doesn't even make it into the top 5: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html That is, the same logic that says we should focus on memory safety would lead us to conclude that we should focus on something else.
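To make the spatial/temporal distinction concrete, here is a minimal Rust sketch (Rust is used purely for illustration; a checked build of Zig performs the same kind of spatial check) of what a runtime bounds check buys: an out-of-bounds access becomes a controlled panic instead of a silent read of adjacent memory.

```rust
use std::panic;

fn main() {
    // Silence the default panic message; we only care about the outcome.
    panic::set_hook(Box::new(|_| {}));

    let buf = vec![1u8, 2, 3];

    // Spatial safety: every index is bounds-checked at runtime, so an
    // out-of-bounds read becomes a controlled panic, never a read of
    // adjacent memory (the bug class behind most exploited CVEs).
    let result = panic::catch_unwind(|| buf[10]);
    assert!(result.is_err());

    // In-bounds access is unaffected.
    assert_eq!(buf[1], 2);
    println!("out-of-bounds access caught: {}", result.is_err());
}
```

Temporal violations such as use-after-free are a separate category, and preventing them is where the cost/benefit question in this thread lies.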

2. Memory safety has a cost. To get it, you have to give up something else (there could even be a cost to correctness). That means that you have to consider what you're getting and what you're losing in the context of the domain you're targeting, which is not the same for all languages. C++, with its "zero-cost abstractions", believed it could be everything for everyone. That turned out not to be the case at all, and Zig is a very different language, with different goals, than C++ originally had.

Given Zig's safety guarantees (which are stronger than C++'s), and given its goals (which are different from C++'s), the question should be what we should be willing to give up to gain safety from use-after-free. Would more safety be better if it cost nothing? Of course, but that's not an option. Even Java and Rust could prevent many more dangerous bugs - including some that are higher risk than use-after-free - if they had more facilities like those of ATS or Idris. But they don't, because their designers think the gains wouldn't be worth the cost.

If you don't say what Zig programmers should give up to gain more safety, saying "all new languages should be memory-safe" is about as meaningful as saying we should write fewer bugs. That's a nice sentiment, but how and at what cost?

  • pjmlp a day ago

    We actually already have enough safe languages as well.

    I am a firm believer in the vision of Xerox PARC for computing, and I think the only reasons we aren't there yet are politics, management's refusal to fund doing the right thing and push it into the market while chasing shareholders and the next quarter, and naturally programming-language religion.

    We were already headed in the right direction with languages like Modula-3 and Active Oberon, following up on Cedar's influence; unfortunately, that isn't how the industry went.

    • pron a day ago

      But software isn't developed for its own sake. It's built to serve some purpose, and it's through its purpose(s) that the selection pressures work. It's like Betamax fans saying that people were wrong to prefer longer recording time over better picture quality. It's not enough to say that you like some approach better, or even to claim that some untaken path would yield a more desirable outcome. You need to show that it actually works in the real world, with all of its complex variables. For example, in the nineties I worked on safety-critical software in Ada, but we ended up dumping it in favour of C++. It's not because we didn't recognise Ada's advantages, but because, in addition to those advantages over C++, it also had some very significant disadvantages, and in the end C++ allowed us to do what we were supposed to do better. Ada's build times alone meant that we could write and run fewer tests, which hurt the software's correctness overall more than it helped. We also ended up spending more time understanding the intricacies of the language, leaving us less time to think about the algorithm.

      • pjmlp a day ago

        Ada was hampered by its tooling price and the high demands it placed on developer workstations.

        Rational started as a company selling Ada machines, which didn't have such issues with compilation times. But again, it comes down to the reasons I listed for why the mainstream keeps ignoring such tools, until governments finally step in.

    • ksec a day ago

      > vision of Xerox PARC for computing

      What is that in relation to Zig and memory safety? Am I missing some context?

      • pjmlp a day ago

        Smalltalk, Interlisp-D, and Mesa/Cedar as the languages for full graphical workstations.

        Instead we got UNIX and C.

        • pron 5 hours ago

          We also got Java and Python (and VB for a while), which means there is no intrinsic, irrational bias against those approaches. A romantic view of those languages tends to ignore their serious shortcomings at the time they were presented. It's like claiming the market was irrational when it preferred VHS to Betamax despite the latter's better quality, while neglecting to mention it had a worse recording time, which mattered more to more people. When comparing two things, it's not enough to mention the areas where X is better than Y; you also need to consider those where X is worse.

  • tialaramex a day ago

    > there could even be a cost to correctness

    Notice that this cost, which proponents of Zig scoff at just like C++ programmers before them, is in fact the price of admission. "OK, we're not correct but..." is actually the end of the conversation. Everybody can already do "Not correct"; we had "Not correct" without a program, so all effort expended on a program was wasted unless you're correct. Correctness isn't optional.

    • pron a day ago

      It isn't optional, and yet it's also not at any cost, or we'd all be programming in ATS/Idris. From those languages' vantage point, the guarantees Rust makes are almost indistinguishable from C. Yet no one says, "the conversation is over unless we all program in languages that can actually guarantee correctness" (rather than one that guarantees the lack of the eighth most dangerous software weakness). Why? Because it's too expensive.

      A language like Rust exists precisely because correctness isn't the only concern, as most software is already written in languages that make at least as many guarantees. Rust exists because some people decided they don't want to pay the price other languages exact in exchange for their guarantees, but can afford to pay Rust's price. But the very same reasoning applies to Rust itself: if Rust exists because not all tradeoffs are attractive to everyone, then clearly its own tradeoffs are not attractive to everyone.

      The goal isn't to write the most correct program; it's to write the most correct program under the project's budget and time constraints. If you can't do it in those constraints, it doesn't matter what guarantees you make, because the program won't exist. If you can meet the constraints, then you need to ask whether the program's correctness, performance, user-friendliness etc. are good enough to serve the software's purpose.

      And that's how you learn what software correctness researchers have known for a long time: increasing correctness guarantees can have unintuitive cost/benefit interactions, to the point where they may even end up harming correctness.

      There are similar unintuitive results in other disciplines. For example, in software security there's what I call the FP/FN paradox. It's better to have more FNs (false negatives, i.e. let some attacks go through) than more FPs (false positives, i.e. block interactions that aren't attacks), because FPs are more likely to lead to misconfiguration or even abandonment of the security mechanism altogether, resulting in weaker security. So it's a well-known thing in software security that to get better security you sometimes need to make fewer guarantees, or try less hard to stop all attacks.

      • Ygg2 18 hours ago

        > It isn't optional, and yet it's also not at any cost, or we'd all be programming in ATS/Idris.

        In a better, saner world, we'd be writing Ada++, not C++. However, we don't live in a perfect world.

        > The goal isn't to write the most correct program; it's to write the most correct program under the project's budget and time constraints.

        The goal of ANY software engineer worth their salt should be minimizing errors and defects in their end product.

        This goal can be reached by learning to write Rust; practice makes perfect.

        If GC is acceptable or you need lower compilation times, then yes, go and write your code in C#, Java, or JavaScript.

  • Ygg2 18 hours ago

    > Given Zig's safety guarantees (which are stronger than C++'s), and given its goals (which are different from C++'s), the question should be what we should be willing to give up to gain safety from use-after-free. Would more safety be better if it cost nothing?

    The problem with this statement is that without a memory safety invariant your code doesn't compose. Some code might assume no use-after-free while other parts permit it, and you'd have a mismatch. Just as the borrow checker is viral, so is the unsafety.
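    To illustrate the composition point, here is a small Rust sketch (the `first_word` helper is hypothetical, for illustration only): the no-use-after-free invariant is encoded in function signatures, so it composes across call boundaries instead of relying on each caller's discipline.

```rust
// The borrow checker makes the no-use-after-free invariant part of every
// function signature, so it composes across library boundaries.
fn first_word(s: &str) -> &str {
    // The returned slice borrows from `s`; the compiler forbids callers
    // from dropping the owning String while this result is still alive.
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let owned = String::from("hello world");
    let w = first_word(&owned);
    // drop(owned); // <- uncommenting this is a compile error:
    //                 cannot move out of `owned` while `w` borrows it
    assert_eq!(w, "hello");
    println!("{}", w);
}
```

    In an unsafe language the same invariant exists, but only as documentation, so two libraries with different assumptions about who frees what can each be "correct" in isolation and still miscompose.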

    > If you don't say what Zig programmers should give up to gain more safety, saying "all new languages should be memory-safe" is about as meaningful as saying we should write fewer bugs.

    The goal of all engineering disciplines, including software, should be a minimization of errors and defects.

    Here is how engineering in any non-computer-science field takes place. You build something. You see where it breaks, then try to build it again given time and budget constraints. Eventually you discover certain laws and rules. You learn them and commit them to a shared repository of knowledge. You work hard to codify those laws and rules into your tools and practice (via actual government laws). Then you build again, with all the previous rules, tools, and accumulated knowledge.

    How it works in tech: you build something. See where it breaks; say that whoever built it was a cream-for-brains moron and that you can do it better and cheaper. Completely forget what you learned building the previous iteration. See where it breaks. Blame the tools for failure; remove any forms of safety. Project cancelled due to excessive deaths. Bemoan the lack of mental power in newer hires or the lack of mental swiftness in older hires. Go to step 1.

    You'll notice a stark contrast between Engineering and Computer Tech. Computer tech is pop culture. It's a place where people wage wars about whether lang X or lang Y is better. How many times did the programming trend swing from static to dynamic typing and back? How many times did programming learn a valuable lesson, only for everyone to forget it, until decades later another language resurrected it?

    Ideally, each successive language would bring us closer and closer to minimizing defects, with more (types of) safety and better guarantees. Is Rust a huge leap compared to Idris? No, but it's better than Ada at memory safety, that's for sure.

    But it's managed to capture a lot of attention, and it is a much stricter language than many others. It's a step towards ideal. And how do programmers react to it? With disgust and a desire for less safety.

    Sigh. I guess we deserve all the ridicule we can get.

    • pron 8 hours ago

      > The problem with this statement is that without a memory safety invariant your code doesn't compose

      Yes, but that holds for any correctness property, not just the 0.0001% of them that memory-safe languages guarantee. That's why we have bugs. The reason memory safety is a focus is that out-of-bounds access is the leading cause of dangerous vulnerabilities.

      > The goal of all engineering disciplines, including software, should be a minimization of errors and defects.

      Yes, but practical minimisation, not hypothetical minimisation, i.e. how can I get the fewest bugs while keeping all my constraints, including budget. Like I said, a language like Rust exists because minimisation of errors is not the only constraint, because if it were, there are already far more popular languages that do just as much.

      > You'll notice a stark contrast between Engineering and Computer Tech.

      I'm not sure I buy this, because physical, engineered objects break just as much as software does, certainly when weighted by the impact of the failure. As to learning our lessons, I think we do when they are actually real. Software is a large and competitive economic activity, and where there's a real secret to more valuable software, it spreads like wildfire. For example, high-level programming languages spread like wildfire; unit tests and code review did, too. And when it comes to static and dynamic typing, the studies on the matter were inconclusive except in certain cases such as JS vs TS; and guess what? TS has spread very quickly.

      The selective pressures are high enough, and we see how well they work frequently enough that we can actually say that if some idea doesn't spread quickly, then it's likely that its impact isn't as high as its fans may claim.

      > And how do programmers react to it? With disgust and a desire for less safety.

      I don't think so. In such a large and competitive economic activity, the assumption that the most likely explanation to something is that the majority of practitioners are irrational seems strange to me. Rust has had some measure of adoption and the likeliest explanation for why it doesn't have more is the usual one for any product: it costs too much and delivers too little.

      Let's say that, within memory safety, the value of spatial vs temporal safety is split 70-30; you know what? Let's say 60-40. If I can get 60% of Rust's value for 10% of Rust's cost, that's a very rational thing to do. I may even be able to translate my savings into an investment in correctness that is more valuable than use-after-free prevention.

      • Ygg2 44 minutes ago

        > Yes, but practical minimisation, not hypothetical minimisation, i.e. how can I get the fewest bugs while keeping all my constraints, including budget. Like I said, a language like Rust exists because minimisation of errors is not the only constraint, because if it were, there are already far more popular languages that do just as much.

        Rust achieves practical minimization, if not outright eradication, of a whole set of errors, and not just memory-safety errors.

        > Like I said, a language like Rust exists because minimisation of errors is not the only constraint, because if it were, there are already far more popular languages that do just as much.

        The reason Rust exists is that the field hasn't matured enough to accept better engineering practices. If everyone could write and think in terms of pre-conditions, post-conditions, and invariants, we'd see a lot fewer issues.

        > I'm not sure I buy this, because physical, engineered objects break just as much as software does, certainly when weighted by the impact of the failure.

        Dude, the front page was about how the Comet AI browser can be hijacked by a web page and ordered to empty your bank account. That's like your fork deciding to gut you like a fish.

        > the assumption that the most likely explanation to something is that the majority of practitioners are irrational seems strange to me.

        Why? Just because you are intelligent doesn't mean you are rational. Plenty of smart people go bonkers. And looking at the state of the field as a whole, I'd have to ask for proof that it's rational.

benreesman 2 days ago

Haskell makes guarantees. Modern C++ makes predictions to within a quantifiable epsilon.

Rust makes false promises in practical situations. It invented a notion of safety that is neither well posed, nor particularly useful, nor compatible with ergonomic and efficient computing.

Its speciality is marketing, and we already know the bounding box on its impact or relevance. "Vibe coding" will be a more colorful and better remembered mile marker of this lousy decade in computers than Rust, which will be an obscurity in an appendix in 100 years.

  • simonask a day ago

    There is almost nothing accurate about this comment.

    "Makes predictions to within a quantifiable epsilon"? What in the world do you mean? The industry experience with C++ is that it is extremely difficult (i.e., expensive) to get right, and C++20 or newer does not change anything about that. Whatever "epsilon" you are talking about here surely has to be very large for a number bearing that sobriquet.

    As for the mindless anti-Rust slander... I'm not sure it's worth addressing, because it reflects a complete lack of the faintest idea about what it actually does, or what problem it solves. Let me just say there's a reason the Rust community is rife with highly competent C++ refugees.

    • Ygg2 a day ago

      To be fair to GP, an error bar of 3±300 is still a quantifiable epsilon. Utterly useless, but quantifiable.

  • sshine a day ago

    > "Vibe coding" will be a more colorful and better remembered mile marker of this lousy decade in computers than Rust, which will be an obscurity in an appendix in 100 years.

    I doubt it.

    I'm teaching a course on C this fall. As textbook I've chosen "Modern C" by Jens Gustedt (updated for C23).

    I'm asked by students "Why don't you choose K&R like everyone else?"

    And while the book is from 1978 (with the ANSI C edition in 1988), and is something I've read joyously more than once, I'm reminded that decades of C programmers have been doing things "the old way" because that's how they were taught. As a result, the world is made of old C programs.

    With the momentum of religiously rewriting things in Rust that we've seen in the last few years (how many other languages have rewritten OpenSSL and the GNU coreutils?), the amount of things we depend on that has incidentally been rewritten in Rust grows significantly.

    Hopefully people won't be writing Rust in 100 years; after all, 100 years ago mathematicians were programming mechanical calculators and analog computers, and today kids are making games. But I bet you a whole lot of infrastructure will still run Rust.

    In fact, anything that is convenient to Vibe code in the coming years will drown out other languages by volume. Rust ain't so bad for vibe coding.

    • pjmlp a day ago

      Kudos for going with modern C practices.

      There is a place to learn about the history of computing, and that is where the K&R C book belongs.

      Not only is it the old way, it is from the age of dumb C compilers, which didn't take advantage of all the latitude recent standards give compiler writers to push optimizations to the next level, not always with the expected results.

      Maybe getting students to understand the ISO C draft is also an interesting exercise.

  • kelnos a day ago

    I hope in 100 years we're not using any of the languages available today. I do like Rust, and use it whenever it's appropriate, but it has its warts and sharp edges. Hopefully we'll come up with something better in the next century.

  • Ygg2 a day ago

    > Rust makes false promises in practical situations. It invented a notion of safety that is neither well posed, nor particularly useful, nor compatible with ergonomic and efficient computing.

    Please stop. Rust's promise is very simple: you get safety without a tracing GC. It also gives you tools to implement your own safe abstractions on top of unsafe code, but there you are mostly on your own (miri, asan, and ubsan can still be used).
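    As a sketch of what a safe abstraction over unsafe code looks like, here is a hypothetical `split_first_rest` helper in the spirit of the standard library's `split_at_mut`: the `unsafe` block sits behind a boundary whose preconditions are checked, so safe callers cannot misuse it.

```rust
// A safe wrapper over an unsafe implementation: the caller can never
// trigger UB through it, because the unsafety is contained behind a
// checked boundary.
fn split_first_rest(v: &mut [i32]) -> Option<(&mut i32, &mut [i32])> {
    if v.is_empty() {
        return None; // precondition checked before any unsafe code runs
    }
    let ptr = v.as_mut_ptr();
    let len = v.len();
    // SAFETY: the two returned borrows cover disjoint regions of `v`
    // (element 0 vs elements 1..len), and the bounds were checked above,
    // so no aliasing or out-of-bounds access is possible.
    unsafe {
        Some((
            &mut *ptr,
            std::slice::from_raw_parts_mut(ptr.add(1), len - 1),
        ))
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    if let Some((head, rest)) = split_first_rest(&mut data) {
        *head += 10;
        rest[0] += 100;
    }
    assert_eq!(data, [11, 102, 3, 4]);
    println!("{:?}", data);
}
```

    Tools like miri can then be pointed at the `unsafe` block itself, while every caller stays in the fully checked subset of the language.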

    Neither Rust nor Ada nor Lean nor Haskell can guarantee there are no errors in their implementations.

    Similarly, none of the listed languages can even try to show that a bad actor can't write bad code or design bad hardware in a way that subverts their promises. If you need that, you need to invent the Omniscient Oracle, not a program.

    I hate this oft-repeated Nirvana fallacy. Yes, Rust is offering you a car with seatbelts and airbags. It is not offering a car that guarantees immortality in the event of a universe collapse.

    • zelphirkalt a day ago

      People state these things about Rust's own implementation (or one of the other gazillion safe langs) potentially not being safe all the time, but the difference from unsafe languages is that once any bug is fixed, everyone profits from it being fixed in Rust's implementation. Everyone who uses the language and updates to a newer version, that is, which often requires no or only minimal code changes in a project. Now compare that with unsafe languages: every single project needs to "fix" the same kind of safety issues over and over again. The language implementation can do almost nothing, except change the language to disallow unsafe constructs, which is not done, because people like backwards compatibility too much.

      • Ygg2 a day ago

        > People state these things about Rust's own implementation (or one of the other gazillion safe langs) potentially not being safe all the time

        Because it's technically true. The best kind of true!

        Sorry, I meant to say the opposite of truth. Neither Rust nor Ada/SPARK, which use LLVM as a backend, can prove that they are correct if LLVM has bugs.

        In the same way, I can't guarantee tomorrow I won't be killed by a rogue planet hitting Earth at 0.3c. So I should probably start gambling and doing coke, because we might be killed tomorrow.

        > Every single project needs to "fix" the same kind of safety issues over and over again

        I doubt that's the biggest problem. Each of the unsafe libraries in C/C++/Zig can be perfectly safe given its own invariants X and Y. What happens if you have two (or more) libraries with subtly incompatible invariants? You get non-composable libraries. You end up with the reverse problem of the NPM world.