Comment by DenisM 18 hours ago

Scale-up solves a lot of problems for stable workloads. But elasticity is poor, so you either live with overprovisioned capacity (multiples, not percentages) or fail under spiky load, which is often the most valuable moment (viral traffic, Black Friday, etc).

No one has solved this problem. Scale-out is typically more elastic, at least for reads.

kragen 18 hours ago

That's a good point, but when one laptop can do 102545 transactions per second, overprovisioned capacity is kind of a more reasonable thing to use than back when you needed an Amdahl mainframe to hit 100 transactions per second.

  • DenisM 16 hours ago

    As compute becomes cheaper your argument becomes more and more true.

    But it only works if workloads remain fixed. If workloads grow at similar rates you’re back to the same problem.

    • kragen 16 hours ago

      Well, it doesn't work for the newly added workloads. But for the most part we instead have the same workloads performed less efficiently.

CuriouslyC 16 hours ago

I love hetzner for internal resources because they're not spiky. For external stuff I like to do co-processing: you can load balance to Cloudflare/AWS/GCP services like Containers/Cloud Run/App Runner/etc.

masterj 15 hours ago

I suspect that for a large number of orgs, accepting over-provisioning would be significantly cheaper than the headcount required for a more sophisticated approach, while allowing faster movement due to lower overall complexity.
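The trade-off masterj describes can be sketched as back-of-envelope arithmetic. This is a minimal illustration, and every number in it (baseline spend, headroom multiple, salary) is a hypothetical assumption, not a figure from the thread:

```python
# Sketch: when does over-provisioning beat hiring for autoscaling expertise?
# All dollar figures below are hypothetical assumptions for illustration.

def overprovision_cost(baseline_monthly_usd, headroom_multiple, months=12):
    """Annual cost of statically running headroom_multiple x baseline capacity."""
    return baseline_monthly_usd * headroom_multiple * months

def elastic_cost(baseline_monthly_usd, engineers, salary_usd, months=12):
    """Annual cost of near-baseline capacity plus the staff to manage elasticity."""
    return baseline_monthly_usd * months + engineers * salary_usd

# Hypothetical: $5k/month baseline, 3x static headroom vs. one dedicated engineer.
static = overprovision_cost(5_000, headroom_multiple=3)           # 180_000
elastic = elastic_cost(5_000, engineers=1, salary_usd=200_000)    # 260_000

# Over-provisioning wins whenever the extra capacity costs less than the
# extra headcount: baseline * (multiple - 1) * 12 < engineers * salary.
print(static, elastic)
```

Under these assumed numbers the static 3x headroom is cheaper; the conclusion flips as baseline spend grows, which is consistent with the thread's point that this favors smaller orgs.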