Comment by londons_explore 20 hours ago

> The design goals of the Internet you're referring to are about networks not going offline, a global routing table with individual entries for every user is not sustainable.

With a bit of a redesign it would be. Most mesh networks tackle this problem. In the worst case, a routing table entry for every human in the world is only 8 billion entries, which would fit in RAM on a typical server today. And every optimization you do dramatically reduces that number (e.g. give users with similar network configurations and peers neighbouring addresses, allowing you to coalesce potentially millions of users into a single route).
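A rough sketch of that coalescing idea using Python's `ipaddress` module (the 2001:db8:: addresses and the "8 users behind one upstream" setup are purely illustrative): if users who share the same next hop are handed numerically adjacent addresses, their individual host routes collapse into a single covering prefix.

```python
# Rough sketch of the coalescing idea: if users who share the same next hop
# are assigned numerically adjacent addresses, their individual host routes
# collapse into a handful of covering prefixes.
import ipaddress

# Hypothetical example: 8 users behind the same upstream, given adjacent IPv6 addresses.
users = [ipaddress.ip_network(f"2001:db8::{i:x}/128") for i in range(8)]

# collapse_addresses merges adjacent/overlapping networks into the fewest prefixes.
coalesced = list(ipaddress.collapse_addresses(users))

print(coalesced)  # [IPv6Network('2001:db8::/125')] -- 8 host routes become 1
```

In principle the same collapse works at any scale: a million adjacent users behind one ISP could sit under a single prefix.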

icehawk 15 hours ago

It would fit in RAM, but then you actually have to search through that RAM. I have a router doing a very modest 3 Gbps of traffic, or about 2,000 pps that all need lookups, plus about 40 updates per second going into that table.
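To illustrate why the per-packet lookups are the hard part, here is a deliberately naive longest-prefix match in Python (the prefixes and next-hop names are made up): it scans every route for every packet, which is exactly what real routers avoid with tries or TCAM, and which becomes hopeless at billions of entries.

```python
# Deliberately naive longest-prefix match: scan every route for every packet.
# Real routers use tries or TCAM because a linear scan over RAM cannot keep
# up at line rate, let alone over billions of entries.
import ipaddress

# Hypothetical routing table: prefix -> next hop.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "nexthop-a",
    ipaddress.ip_network("10.1.0.0/16"): "nexthop-b",
    ipaddress.ip_network("0.0.0.0/0"): "default",
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Most specific matching prefix wins (longest mask).
    best = max((net for net in routes if addr in net), key=lambda n: n.prefixlen)
    return routes[best]

print(lookup("10.1.2.3"))  # nexthop-b: the /16 beats the /8 and the default route
```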

I should also mention that's 40 updates per second for a default-free zone of about 950,000 routes. 8 billion routes would mean a minimum update rate of ~337,000 routes per second, assuming the same stability.
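A back-of-envelope version of that scaling, assuming churn per route stays constant as the table grows:

```python
# Scaling the observed churn linearly: 40 updates/s for ~950,000 routes,
# extrapolated to 8 billion routes (assumes per-route stability is unchanged).
current_routes = 950_000
current_updates_per_s = 40
target_routes = 8_000_000_000

scaled = current_updates_per_s * target_routes / current_routes
print(f"{scaled:,.0f} updates/s")  # ~337,000 updates per second
```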

organsnyder 19 hours ago

What about updates? Propagating routing table changes for even 8 billion devices (assuming each human has on average one device, which is quite the assumption to make) would be incredibly resource-intensive.

corint 19 hours ago

Your challenge is getting every ISP to accept this. The routing table might fit in the RAM of a typical server, but perhaps not so easily in the RAM of many routers still deployed in the field.

It's a nice idea, but sadly it'll lose out to commercial realities in many cases.

colechristensen 19 hours ago

I'm just saying that rearchitecting the Internet and its routers to support billions of routes would be a challenge, and lookups against a routing table that big might simply be too slow.