Comment by throwawaymaths a day ago
no. in the modern web world you often have persistent client-server connections, which make it a distributed system out of the gate. the most inefficient way to deal with this is to go stateless, but without smart architecture to handle unreliable connections, it's really your best choice (and it's fine).
since the BEAM gives you smart disconnection handling, web stuff built in elixir lets you build on client-server distribution without too much headache and with good defaults.
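for flavor, here's roughly what those defaults look like in a Phoenix channel. this is a minimal untested sketch, and MyAppWeb, UpdatesChannel, and the "updates:" topic are names I made up; the point is that the phoenix.js client reconnects with backoff and rejoins topics on its own, so the server side stays this small:

```elixir
defmodule MyAppWeb.UpdatesChannel do
  use Phoenix.Channel

  # clients join per-resource topics; the Phoenix JS client
  # transparently rejoins this topic after a dropped connection.
  def join("updates:" <> _resource_id, _params, socket) do
    {:ok, socket}
  end

  # push a server-side event to every subscriber of a topic.
  def notify(resource_id, payload) do
    MyAppWeb.Endpoint.broadcast("updates:" <> resource_id, "changed", payload)
  end
end
```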
but look, if you want a concrete example of why this sucks: how much do you hate it when you push changes to your PR on github and the CI checks in your browser tab still aren't updated with the new CI run that was triggered? you've got to refresh first.
if they had built github in elixir instead of ruby, they would almost certainly have this sync issue solved, in maybe two or three lines of code.
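something like this LiveView sketch, hypothetical names and all, not github's actual code, just the shape of the fix:

```elixir
defmodule MyAppWeb.PRChecksLive do
  use Phoenix.LiveView

  def mount(%{"pr" => pr_id}, _session, socket) do
    # subscribe only on the live (connected) mount, not the static render
    if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "ci:" <> pr_id)
    {:ok, assign(socket, checks: [])}
  end

  # a broadcast from the CI pipeline pushes fresh check results to
  # every open browser tab -- no manual refresh needed
  def handle_info({:ci_update, checks}, socket) do
    {:noreply, assign(socket, checks: checks)}
  end

  def render(assigns) do
    ~H"""
    <ul>
      <li :for={check <- @checks}><%= check.name %>: <%= check.status %></li>
    </ul>
    """
  end
end
```

the CI side is then a single Phoenix.PubSub.broadcast(MyApp.PubSub, "ci:" <> pr_id, {:ci_update, checks}) whenever a check finishes.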
And if you need that kind of persistent, immediately reactive connection and are willing to pay the price, go for it! If that's truly a requirement for you, then you're in the subset of the web that overlaps substantially with telecoms.
I'm not cautioning against making the calculated decision that realtime is a core requirement and choosing the BEAM accordingly. I'm cautioning against positioning the BEAM as being designed for web use cases in general, which it's not.
Many projects, including GitHub, do not need that kind of immediate reactivity and would not have benefited enough from the BEAM to be worth the trade-offs involved. A single example of a UX flow that could be made slightly better by rearchitecting for realtime isn't sufficient reason to justify an entirely different architecture. Engineering is about trade-offs, and too often in our field we fall into the "when all you have is a hammer" trap. Realtime architectures are one tool in the toolbox, and they aren't even the most frequently needed one.