Comment by mettamage 5 days ago

IMO development is too complex and misdirected in general, since we cargo-cult FAANG.

Need AWS, Azure, or GCP deployment? Ever thought about putting it on bare metal yourself? If not, why not? Because it's not best practice? Nonsense. The answer with these things is: it depends. If your app doesn't have that many users, you can get away with it, especially if it's a B2B or internal app.

It's also too US-centric. The idea of scalability applies less to most other countries.

taminka 5 days ago

many ppl also underestimate how capable modern hardware is: for ~10usd you could handle like a million concurrent connections with a redis cluster on a handful of VPSs...

  • merb 4 days ago

    many ppl also underestimate how complex it is to satisfy uptime requirements, or to scale out local infrastructure once storage beyond 10/50/100 TB is involved (yeah, a single disk can handle that, but what about bit rot, RAID, etc.).

    it gets worse when you need more servers, because your OCR process of course needs CPU, so on a beefy machine you can handle maybe 50 high-page-count documents. but how do you talk to the other machines, etc.

    also, humans cost way more money than cloud stuff. If the cloud stuff can be managed in like 1 day per month, you don't need a real person; if you have real hardware, that day is not enough and you soon need a dedicated person keeping everything up to date, etc.

    • rcxdude 4 days ago

      >also, humans cost way more money than cloud stuff. If the cloud stuff can be managed in like 1 day per month, you don't need a real person; if you have real hardware, that day is not enough and you soon need a dedicated person keeping everything up to date, etc.

      In my experience it's been the opposite: companies with on-site infrastructure have been able to manage it in the spare time of a relatively small team (especially since hardware is pretty powerful and reliable nowadays), while those with cloud infrastructure have a large team focused on just maintaining the system, because cloud pushes you into far more complex setups.

      • merb 4 days ago

        Most of the time, the "far more complex setup" is still easier than reimplementing Kubernetes with Ansible.

  • reactordev 5 days ago

    One Beelink in a closet runs our entire Ops cluster.

franga2000 5 days ago

Requirements are complex too. Even if you don't need to scale at all, you likely do need zero-downtime deployment, easy rollbacks, server fault tolerance, service isolation... If you put your apps into containers and throw them onto Kubernetes, you get a lot of that "for free" and in a well-known and well-tested way. Hand-rolling even one of those things, let alone all of them together, would take far too much effort.
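
For concreteness, a minimal sketch of the kind of manifest being described; the app name, image, port, and health endpoint are illustrative placeholders:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: example-app                 # hypothetical app name
  spec:
    replicas: 2                       # server fault tolerance: one pod can die or be drained
    selector:
      matchLabels:
        app: example-app
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0             # zero-downtime: old pods keep serving until new ones are Ready
    template:
      metadata:
        labels:
          app: example-app
      spec:
        containers:
          - name: app
            image: registry.example.com/example-app:1.2.3   # placeholder image
            ports:
              - containerPort: 8080
            readinessProbe:           # traffic only shifts once this check passes
              httpGet:
                path: /healthz
                port: 8080

Easy rollbacks then fall out of the fact that the previous ReplicaSet is kept around: "kubectl rollout undo deployment/example-app" reverts to it.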

  • mettamage 5 days ago

    > you likely do need zero-downtime deployment

    I know SaaS businesses that don't, as they operate in a single country, within a single timezone, and availability only needs to cover business days and business hours.

    > easy rollbacks

    Yea, I haven't seen exceptions at all on this. So yea.

    > server fault tolerance

    That really depends. Many B2B or internal apps are fine with a few hours, or even a day, of downtime.

    > service isolation

    Many companies just have one app and if it's a monolith, then perhaps not.

    > Hand-rolling even one of those things

    Wow, I see what you're trying to say and I agree. But it really comes across as "if you don't use something like Kubernetes you need to handroll these things yourself." And that's definitely not true. But yea, I don't think that's what you meant to say.

    Again, it depends.

    • franga2000 5 days ago

      I'm definitely curious about alternatives for getting these features without k8s. Frankly, I don't like it, but I use it because it's the easiest way I've found to get all of these features. Every deployment I've seen that didn't use containers and something like k8s either didn't have a lot of these features, implemented them with a bespoke pile of shell scripts, or a mix of both.

      For context, I work in exactly that kind of "everyone in one time zone" situation and none of our customers would be losing thousands by the minute if something went down for a few hours or even a day. But I still like all the benefits of a "modern devops" approach because they don't really cost much at all and it means if I screw something up, I don't have to spend too much time unscrewing it. It took a bit more time to set up compared to a basic debian server, but then again, I was only learning it at the time and I've seen friends spin up fully production-grade Kubernetes clusters in minutes. The compute costs are also negligible in the grand scheme of things.

      • stonemetal12 4 days ago

        >I use it because it's the easiest way I've found to get all of these features. Every deployment I've seen that didn't use containers and something like k8s either didn't have a lot of these features, implemented them with a bespoke pile of shell scripts, or a mix of both.

        Features aren't pokemon; you don't have to catch them all.

        Back when Stack Overflow was cool and they talked about their infrastructure, they were running the whole site at five nines on 10-20 boxes. For a setup like that, k8s would have A) required more hardware, B) required a complete rewrite of their system to k8sify it, and C) delivered no additional value.

        k8s does good things if you have multiple datacenters' worth of hardware to manage; for everyone else it adds overhead for features you don't really need.

        • franga2000 4 days ago

          A) Not much more. The per-node overhead is relatively small, and it's quite possible they could have made some efficiency gains from a homogeneous cluster, saving a few nodes to offset it.

          B) Why on earth would you need to do that? K8s is, at its core, just a thing that runs containers. Take your existing app, stick it in a container, and write a little yaml explaining which other containers it connects to (see the sketch below). It can do many other things, but just... don't use them?

          C) The value is in not having to develop orchestration in-house. They already had it, so yea, I wouldn't say "throw it out and go to k8s", but if you're starting from scratch and choosing between "write and maintain a bunch of bespoke deployment scripts" and "just spin up Talos, write a few yaml files and call it a day", I think the latter is quite compelling.
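
          For the "which other containers it connects to" part in B), that yaml is usually nothing more than a Service plus ordinary configuration; the names and port here are again just illustrative:

            apiVersion: v1
            kind: Service
            metadata:
              name: db                # hypothetical backing service
            spec:
              selector:
                app: db               # targets whatever pods carry this label
              ports:
                - port: 5432
            ---
            # In the app's Deployment, the dependency is then plain config:
            # env:
            #   - name: DATABASE_HOST
            #     value: db           # cluster DNS resolves this to the Service above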

    • kqr 4 days ago

      > I know SaaS businesses that don't, as they operate in a single country, within a single timezone, and availability only needs to cover business days and business hours.

      This is a bad road to go down. Management will pick up on the implication that it's okay to reduce reliability requirements because "we'll just do the dangerous things on the weekends!"

      After some time, developers are scheduled to work every other weekend, and when something breaks during the daytime, getting it back up is not going to be a smooth process, because that process has only ever been exercised with 48 hours to spare.

      Then at some point it's "Can we deploy the new version this weekend?" "No, our $important_customer has their yearly reporting next week, and then we have that important sales demo, so we'll hold off another month on the deployment." You get further and further away from continuous integration.

  • s_Hogg 5 days ago

    Holy shit, you don't get anything for _free_ as a result of adopting Kubernetes, dude. The cost is in fact quite high in many cases: you adopt Kubernetes and all of its associated idiosyncrasies, which can be a lot more than what you left behind.

    • franga2000 5 days ago

      For free as in "don't have to do anything to make those features, they're included".

      What costs are you talking about? Packaging your app in a container is already quite common, so if you already do that, all you need to do is replace your existing yaml with slightly different yaml.

      If you don't do that already, it's not really that difficult. Just copy-paste your install script or rewrite your Ansible playbooks into a Dockerfile. Enjoy the free security boost as well.

      What are the other costs? Maintaining something like Talos is actually less work than maintaining a normal Linux distro. You already hopefully have a git repo and CI for testing and QA, so adding a "build and push a container" step is a simple one-time change. What am I missing here?
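
      That one-time CI change might look something like the following; GitHub Actions syntax is used purely for illustration here, and the registry, image name, and secret names are placeholders:

        name: build-and-push
        on:
          push:
            branches: [main]
        jobs:
          image:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v4
              - name: Build and push the container image
                env:
                  REGISTRY_USER: ${{ secrets.REGISTRY_USER }}
                  REGISTRY_PASSWORD: ${{ secrets.REGISTRY_PASSWORD }}
                run: |
                  docker build -t registry.example.com/example-app:${{ github.sha }} .
                  echo "$REGISTRY_PASSWORD" | docker login registry.example.com -u "$REGISTRY_USER" --password-stdin
                  docker push registry.example.com/example-app:${{ github.sha }}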

  • dapperdrake 5 days ago

    Unix filesystem inodes and file descriptors stick around until they are closed, even if the inode has been unlinked from a directory. The latter is usually called "deleting the file".

    All the stuff Erlang does.

    Static linking and chroot.

    The problems and the concepts and solutions have been around for a long time.

    Piles and piles of untold complexity, missing injectivity on data in the name of (leaky) abstractions, and cargo-culting have been with us on the human side of things for even longer.

    And as always: technical and social problems may not always benefit from the same solutions.

    • franga2000 5 days ago

      Ok, so let's say you statically link your entire project. There are many reasons you shouldn't or couldn't, but let's say you do. How do you deploy it to the server? Rsync, sure. How do you run it? Let's say a service manager like systemd. Can you start a new instance while the old one is running? Not really; you'll need to add some bash-script glue. Then you need a load balancer to poll the readiness of the new one and shift the load. What if the new instance doesn't work right? You need to watch for that, presumably with another bash script, stop it, and keep the old one as "primary". Also, you'll need to write some SELinux rules so that if someone exploits one service, they can't pivot to the others.

      Congrats, you've just rewritten half of Kubernetes in bash. This isn't reducing complexity; it's NIH syndrome. You've recreated it, but in a way that nobody else can understand or maintain.
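
      To make the comparison concrete, the pieces described above map onto a handful of manifest fields rather than scripts. The fragment below is an illustrative slice of a Deployment (values are placeholders), annotated with the hand-rolled piece each field stands in for:

        spec:
          progressDeadlineSeconds: 300     # "keep the old one as primary": if new pods never become
                                           # Ready in time, the rollout is flagged as failed while the
                                           # old pods keep serving, and "kubectl rollout undo" reverts it
          template:
            spec:
              containers:
                - name: app
                  image: registry.example.com/example-app:1.2.4   # placeholder
                  readinessProbe:          # the load balancer polling "the readiness of the new one"
                    httpGet:
                      path: /healthz
                      port: 8080
                  securityContext:         # the isolation otherwise hand-written as SELinux rules
                    runAsNonRoot: true
                    allowPrivilegeEscalation: false
                    capabilities:
                      drop: ["ALL"]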

      • dapperdrake 4 days ago

        Now I see how it could have been confusing to read.

        Cannot edit anymore so amending here:

        Static linking and chroot (not as The One True Solution (TM)), but as basically Docker without Linux network namespaces.

        Linux/Docker actually wound up improving things here. And they got to spend all the money on convincing the people that like advertisements.

        And static linking mainly only becomes relevant (and then irrelevant again) in C because of boundaries between compilation units. SQLite throws all of this out. They call it an amalgamation (which also sounds better than a "unity build").

        The tools are there. They are just overused. Look at enterprise Hello World in Java for a good laugh.

        ————

        If your data lives in a database on the other end of a unix or TCP socket, then I still don't see "NIH". The new binary self-tests, and the old binary waits for a shutdown command record and drains its connections.

        Kernels and databases clock in at over 5M lines of code. NIH seems like missing the point there.

        And most services neither need nor have nine nines of uptime. That is usually too expensive. And always bespoke. Must be tailored to the available hardware.

        Code is less portable than people believe.

        Ten #ifdef directives and you are often dead on arrival.