Hypergrowth isn’t always easy
(tailscale.com)
122 points by usrme 3 days ago
> Hypergrowth is a synonym for unsustainable growth.
No it's not. It's often a recognition that just one or two, maybe three, companies will end up dominating a particular market simply due to economies of scale and network effects... and so the choice is either hypergrowth, to try to attain or keep the #1 or #2 position, or going out of business and losing all the time, money, and effort you already put in.
Nothing whatsoever makes it unsustainable. You might be offering cheaper prices during hypergrowth -- those are unsustainable -- but then you raise prices back to sustainable levels afterwards. And consumers got to benefit from the subsidized prices, yay! The business is entirely sustainable, however.
Uber is the poster child of hypergrowth. They became profitable in 2023. And their stock price has ~doubled since. Totally sustainable.
> Hypergrowth is a synonym for unsustainable growth. The headline here is business breaks tech, again.
That just isn't true. Plenty of services do just fine after experiencing hypergrowth, and a few outages are not an example of tech breaking. That's a fairly common occurrence.
I'm not saying companies can't do fine in many respects after experiencing hypergrowth, but like you said, that's after hypergrowth - the hypergrowth isn't sustainable.
And I disagree: outages are a fairly literal example of tech breaking. A few outages aren't catastrophic, though, and I agree they're fairly common. I know it's a cliché, but "move fast and break things" might get you growth; it also gets you broken things along the way.
Hypergrowth is growth and churn at the expense of sustainability and stability. It can definitely be fun though!
The last time I looked (i.e., a couple of days ago), the docs sounded like Headscale now supports DERP [0].
[0]: https://headscale.net/stable/setup/requirements/#ports-in-us...
It’s not super well fleshed out by Tailscale but they have a guide.
https://tailscale.com/kb/1118/custom-derp-servers
My last company ran our own DERP servers to have more consistent endpoints we controlled.
I use the built-in DERP server. I ran a standalone DERP server, hackily deployed, for a month; it worked fine but didn't provide much benefit over the built-in one. It's basically just a Go package, so if you're familiar with running Go code it's straightforward to run, though it's very, very light/unproductionised.
I have a todo task to integrate DERP into my headscale deployment properly ("finish ansible role"), but when I picked it up last month, I noticed Tailscale had released relay nodes, and they seem like they'd be better suited than dedicated DERP nodes. Headscale hasn't implemented support for them yet, though.
tl;dr: it's not too hard to host DERP, it just needs a publicly facing endpoint (incl. Let's Encrypt), but the built-in one is fine. Relay nodes look like they'll be a better option for most, though, and I'd guess headscale will implement them sometime this year.
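For anyone curious what "basically just a go package" means in practice, here's a minimal sketch of embedding a DERP relay in your own Go binary. It assumes the current tailscale.com/derp and derphttp package layout, and it skips everything the real cmd/derper handles for you (TLS via Let's Encrypt, STUN, client verification), so treat it as an illustration rather than a production setup:

```go
package main

import (
	"log"
	"net/http"

	"tailscale.com/derp"
	"tailscale.com/derp/derphttp"
	"tailscale.com/types/key"
)

func main() {
	// Every DERP node has its own node private key; persist it somewhere
	// if you want the relay to keep a stable identity across restarts.
	s := derp.NewServer(key.NewNode(), log.Printf)

	mux := http.NewServeMux()
	// The DERP-over-HTTP(S) endpoint that Tailscale/headscale clients talk to.
	mux.Handle("/derp", derphttp.Handler(s))

	// In a real deployment this has to be reachable on a public hostname
	// over HTTPS (cmd/derper does the Let's Encrypt dance for you).
	log.Fatal(http.ListenAndServe(":8080", mux))
}
```

In practice most people just run cmd/derper behind a real certificate rather than embedding it, but the point is that the moving parts are small.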
Tech is simply the reproductive organs for capitalism
So, things are working as designed for the few people that benefit
Why is the cover image for the post a cartoon 69 position?
A little reward for anyone who was affected by the outage?
Tailscale is a VPN, and the article highlights a recent increase in its user base. This is likely due to VPNs being required to access pornographic materials for residents of many US states.
It could also be that Tailscale users have many kids, who then also use Tailscale. Although if the header is meant to represent that, it's showing the wrong position.
> This is likely due to VPNs being required to access pornographic materials for residents of many US states.
Same in the UK, recently.
420 is very controversial, so what choice do you really have these days?
Not my deviantart ass thinking hypergrowth meant something else
Pack considerably more spinning disks, bought off the shelf at RadioShack, into a box in a colo than you ever reasonably should, and it turns out they generate a lot of heat. I don't recall the year -- I'd guess around 2004/5-ish -- but it was a big problem, and the site was down for quite some time. Same year, someone found out who one of our mods was and showed up to their house with a gun. Ask me about hypergrowth; I'm not sure whether the DeviantART stories or the DigitalOcean stories are more wild. heh. :)
Hypergrowth is very hard: first to get there at all, and then, once there, to keep offering quality services.
Hypergrowth can be natural. Random example, but what if you designed a microblogging service and all of a sudden the biggest platform gets bought by a fascist and users come flocking? You could start turning users away, or you could work as fast as you can to accommodate them and make small mistakes along the way. Both of these are reasonable decisions and neither one is really wrong.
That's demand-driven and organic, at least, and it's not the first thing that comes to mind with hypergrowth; that's just scale.
Instead, I think of hypergrowth as a supply-side attempt to capture a larger market in a highly inorganic way and to also capture the absurdly high valuation that comes with it. Usually through VC.
I think what you are referring to is the economic model of growth at all costs (for this I use the term blitzscaling).
I think of hyperscaling as more like growth faster than what the team can manage, for any reason.
Virality would be a factor in this too; it's totally demand-side, even if there are levers that can be pulled to induce it artificially. But that's getting towards dead internet theory and engagement bait, I think, and it's more on the media/consumption side of things.
> just be sustainable, that's okay too
Not if most of your company was built on investor money.
They want their pay day!
Kind of annoying to read. No, the P in CAP theorem isn’t when the client can’t connect to your unavailable service. That would be the A. Maybe it was down because of a P on your side, but don’t start blaming your downtime on network partitions between the client and your service.
Edit: your service going down and not being able to take requests from clients does not a network partition make
This is a common misunderstanding about the poorly named ‘Availability’ in CAP. Availability under CAP means that if your request reaches a non-failing node, that node still responds despite being unable to communicate with other nodes. This is distinct from SLA-style availability, which describes the uptime of the overall system. I’m pretty sure the partition tolerance they’re referring to is the fact that the tailnet remains intact and continues to operate even when nodes can’t reach the coordination service.
Isn't Availability the ability to connect to something? Say I'm calling from region A to region A servers, and the region A servers' networks go down. Well, my client is clever and can fail over to region B servers. Except all my state and context was on the region A servers, and maybe that state wasn't replicated over to region B -- that replication might only happen on a nightly basis.
When I reconnect, my dating profile is missing all the pictures I uploaded of me in my new convertible with me lowering my sunglasses and winking at the camera.
The LovinHuggin.com server architecture is Available, but not Partition Tolerant. And after I upload different pictures of me in tuxedos and talking like a boss on the cellphone to region B, I've potentially created a weird "split brain" situation. Region A and region B servers have different views of me. Both views are super hot, but the client might get confused if my session returns to region A when their network heals, and the nightly region replication might be messy with reconciling the split brain. Eventual consistency is a helpful (or fraught) feature to have in the database when things like split brain happen.
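To make the split-brain point concrete, here's a toy sketch assuming a naive last-write-wins merge keyed on update time (the Profile type, mergeLWW, and the photo names are all made up for illustration; real systems reach for vector clocks, CRDTs, or application-level merge rules instead):

```go
package main

import (
	"fmt"
	"time"
)

// Profile is a stand-in for whatever per-user state each region holds.
type Profile struct {
	Photos    []string
	UpdatedAt time.Time
}

// mergeLWW picks whichever replica was written most recently, silently
// discarding the other side's photos -- exactly the kind of surprise the
// dating-profile example describes.
func mergeLWW(a, b Profile) Profile {
	if a.UpdatedAt.After(b.UpdatedAt) {
		return a
	}
	return b
}

func main() {
	regionA := Profile{Photos: []string{"convertible-wink.jpg"}, UpdatedAt: time.Now().Add(-2 * time.Hour)}
	regionB := Profile{Photos: []string{"tuxedo-cellphone.jpg"}, UpdatedAt: time.Now()}

	merged := mergeLWW(regionA, regionB)
	fmt.Println(merged.Photos) // [tuxedo-cellphone.jpg] -- region A's upload is gone
}
```

Last-write-wins is the simplest way to converge again after the partition heals, and it's also exactly how region A's convertible picture quietly disappears.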
A network partition between the client and server is a network partition between two nodes in a distributed system, which is the P.
Interesting post. I appreciate their candor and self-criticism, but, as a customer, I'm consistently surprised by how robust Tailscale ends up being, and how rarely I've experienced an issue that actually broke my tailnet. The sort of downtime that might keep me from accessing the admin tool or something else like that is rare enough, but my nodes have almost (?) never failed to talk to each other. Pretty great.
Caveat: I have a very small tailnet (<100 nodes). Anyone running with thousands of nodes may have a very different experience where inconvenience might be existential.