Comment by demosthanos a day ago

In the grand scheme of things, if we're considering everything from web to bridge building, yeah, the distinction is small. But within the world of software engineering specifically it's not all that small and it's worth being precise when we're talking about it.

WhatsApp and telecoms have a lot in common, so no one questions that WhatsApp benefited a ton from the BEAM.

Airbnb, though? The main similarity is that they both send large quantities of signal over wires.

Again, none of this is to stop you from liking the BEAM, but when we're talking about professional software engineering it pays to be explicit about what the design constraints were for the products that you're using so that you can make sure that your own design constraints are not in conflict with theirs.

throwawaymaths a day ago

no. in the modern web world you often have persistent client-server connections, which make it a distributed system out of the gate. the most inefficient way to deal with this is to go stateless, but without smart architecture to deal with unreliable connections it's really your best choice (and it's fine).

since the BEAM gives you smart disconnection handling, web stuff built in elixir lets you build on that client-server distribution without too much headache and with good defaults.

but look, if you want a concrete example of why this sucks: how much do you hate it when you push changes to your PR on github and the CI checks in your browser tab still haven't updated with the new CI run that was triggered? you've got to refresh first.

if they had built github in elixir instead of ruby, this sync issue would almost certainly have been solved, in maybe two or three lines of code.
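
(to make the shape of that concrete, a minimal sketch assuming a Phoenix LiveView app: the module, the topic name, and the MyApp.Checks / MyApp.PubSub names are invented for illustration, not anything github actually runs. the CI side broadcasts when a check changes, and the LiveView already rendering the PR page picks that up and pushes the diff to the open tab without a refresh.)

    # hypothetical sketch: MyApp.PubSub, MyApp.Checks, and the topic name are assumptions.
    # the CI worker would broadcast when a check changes, e.g.:
    #   Phoenix.PubSub.broadcast(MyApp.PubSub, "pr:#{pr_id}:checks", {:check_updated, check})

    defmodule MyAppWeb.PRChecksLive do
      use Phoenix.LiveView

      # mount/3 runs on the initial render and again on every reconnect,
      # so subscribing and loading fresh state here covers both cases.
      def mount(%{"pr_id" => pr_id}, _session, socket) do
        if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "pr:#{pr_id}:checks")
        {:ok, assign(socket, pr_id: pr_id, checks: MyApp.Checks.for_pr(pr_id))}
      end

      # a broadcast from the CI side lands here; LiveView diffs the template
      # and pushes only the change to the already-open browser tab.
      def handle_info({:check_updated, _check}, socket) do
        {:noreply, assign(socket, checks: MyApp.Checks.for_pr(socket.assigns.pr_id))}
      end

      def render(assigns) do
        ~H"""
        <ul>
          <%= for check <- @checks do %>
            <li><%= check.name %>: <%= check.status %></li>
          <% end %>
        </ul>
        """
      end
    end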

  • demosthanos a day ago

    And if you need that kind of persistent immediately reactive connection and are willing to pay the price, go for it! If that's truly a requirement for you then you're in the subset of web that overlaps substantially with telecoms.

    I'm not cautioning against making the calculated decision that realtime is a core requirement and choosing the BEAM accordingly. I'm cautioning against positioning the BEAM as being designed for web use cases in general, which it's not.

    Many projects, including GitHub, do not need that kind of immediate reactivity and would not have benefited enough from the BEAM to be worth the trade-offs involved. A single example of a UX flow that could be made slightly better by rearchitecting for realtime isn't sufficient reason to justify an entirely different architecture. Engineering is about trade-offs, and too often in our field we fall for "when all you have is a hammer". Realtime architectures are one tool in a toolbox, and they aren't even the most frequently needed tool.

    • throwawaymaths a day ago

      "willing to pay the price"

      what price? learning a new language that's designed to be easy to pick up from the one you already know, and that has fewer footguns? ok fine.

      but you make it seem like going to elixir is some kind of heavy lift or requires a devops team or something. the lift is low: for example i run a bespoke elixir app in my home on my local network for co2 monitoring.

      and for that purpose (maybe 300 lines of code?) yes, i do want reactivity. wrangling longpoll for that does not sound fun to me.
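
      (for a sense of scale, the reactive part of something like that is roughly this much LiveView code. a minimal sketch, assuming a sensor process that broadcasts readings; Co2.PubSub, the "co2" topic, and the module name are invented.)

          defmodule Co2Web.ReadingLive do
            use Phoenix.LiveView

            def mount(_params, _session, socket) do
              # subscribe only once the websocket is up; mount re-runs on reconnect
              if connected?(socket), do: Phoenix.PubSub.subscribe(Co2.PubSub, "co2")
              {:ok, assign(socket, ppm: nil)}
            end

            # the sensor-reading process would call:
            #   Phoenix.PubSub.broadcast(Co2.PubSub, "co2", {:reading, ppm})
            def handle_info({:reading, ppm}, socket) do
              {:noreply, assign(socket, ppm: ppm)}
            end

            def render(assigns) do
              ~H"""
              <p>CO2: <%= @ppm || "no reading yet" %> ppm</p>
              """
            end
          end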

      • demosthanos a day ago

        To name just a few costs that aren't worth it for many businesses:

        * A much smaller ecosystem of libraries to draw from.

        * Much weaker editor tooling than with more established languages.

        * An entirely different paradigm for deployments, monitoring, and everything else that falls under "operations" that may be incompatible with the existing infrastructure in the organization.

        * When something does go wrong, using a weird stack means you have less institutional knowledge to lean on and fewer resources from people who've been doing the same thing as you.

        * A whole new set of footguns to dodge and UX problems to solve around what happens when someone's connection is poor. This has come up repeatedly in discussions of Phoenix LiveView: what you gain in reactivity comes at the cost of having to work harder to engineer for spotty connections than you would with a request/response model (a rough sketch of what that looks like follows at the end of this comment).

        * More difficulty hiring people, and an increased tendency when hiring for selecting people who are really just obsessed with a particular tool and unwilling to see when the situation calls for something else.

        There are many more; these are just the ones I can think of without a concrete application with concrete requirements to analyze. In the end, for most apps reactivity is so much a "nice to have" that it's hardly worth sacrificing the stability and predictability of the established option for moderately better support for that one aspect of UX, especially given that you can always add reactivity later if you need it, at a slightly higher cost than it would have come at with Erlang.

        If reactivity is a core requirement, that's a different story. If it's polish, don't choose your architecture around it.
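
        (Sketching the LiveView reconnect point from the list above: a dropped connection means the client reconnects and mount/3 runs again in a fresh process, so any state held only in memory is gone unless you persist and reload it. The module and the MyApp.Drafts context here are hypothetical.)

            defmodule MyAppWeb.DraftLive do
              use Phoenix.LiveView

              # on every (re)mount, including after a dropped connection,
              # reload anything the user must not lose from a durable store.
              def mount(_params, %{"draft_id" => draft_id}, socket) do
                {:ok, assign(socket, draft: MyApp.Drafts.get!(draft_id))}
              end

              # persist eagerly so a reconnect can restore what was typed.
              def handle_event("autosave", %{"body" => body}, socket) do
                draft = MyApp.Drafts.update!(socket.assigns.draft, %{body: body})
                {:noreply, assign(socket, draft: draft)}
              end

              def render(assigns) do
                ~H"""
                <form phx-change="autosave">
                  <textarea name="body"><%= @draft.body %></textarea>
                </form>
                """
              end
            end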

  • notjoemama 17 hours ago

    > in the modern web world you often have persistent client server connections

    Is this actually true though? I’d be interested if you know any data backing that perspective. I only know what I’ve worked on and my anecdotal experience doesn’t match with this statement. But I know my sphere doesn’t represent the whole. In terms of state, by now there are many ways of dealing with persistence and reconnection. Not only are most of those problems solved with existing technologies and protocols but they’re everywhere in web dev. Maybe we’re talking past each other? Did I misunderstand your point?