Comment by badmonster 18 hours ago

The real insight here is recognizing when network latency is your bottleneck. For many workloads, even a mediocre local database beats a great remote one. The question isn't "which database is best" but "does my architecture need to cross network boundaries at all?"

andersmurphy 18 hours ago

(author here) Yes, 100% this. This was never meant to be a SQLite vs Postgres article per se; it's more about the fundamental limitations of network databases in some contexts. Admittedly, at times I felt I struggled to convey this in the article.

slashdave 18 hours ago

Sure. Now keep everything in memory and use redis or memcache. Easy to get performance if you change the rules.

  • koakuma-chan 17 hours ago

    You can use SQLite for persistence and a hash map as cache. Or just go for Mongo since it's web scale.
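    A minimal sketch of that pattern, assuming a simple key-value workload (the `kv` table and helper names here are hypothetical, not from the article):

    ```python
    import sqlite3

    # Read-through cache: an in-process dict in front of SQLite for persistence.
    db = sqlite3.connect("app.db")
    db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    cache = {}

    def put(k, v):
        cache[k] = v
        db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
        db.commit()

    def get(k):
        if k in cache:            # hot path: no SQL at all
            return cache[k]
        row = db.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
        if row:
            cache[k] = row[0]     # populate the cache on a miss
            return row[0]
        return None

    put("greeting", "hello")
    cache.clear()                 # simulate a cold cache after restart
    print(get("greeting"))        # read falls through to SQLite
    ```

    Both the cache lookup and the SQLite read happen in-process, so there is no network hop in either path.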

  • SJC_Hacker 13 hours ago

    SQLite can also do in memory
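    For reference, an in-memory SQLite database is just a connection string away; a quick Python sketch:

    ```python
    import sqlite3

    # ":memory:" gives a private, RAM-only database; nothing touches disk.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (x INTEGER)")
    con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])
    total = con.execute("SELECT SUM(x) FROM t").fetchone()[0]
    print(total)  # → 6
    ```

    The database vanishes when the connection closes, which is the trade slashdave points at below: performance without persistence.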

    • slashdave 12 hours ago

      Yeah, very good point. It all comes down to requirements. If you require persistence, then we can start talking about redundancy and backup, and then suddenly this performance metric becomes far less relevant.

runako 8 hours ago

So much this. My inner perf engineer shudders every time I see one of these "modern" architectures that involve databases sited hundreds of miles from the application servers.

  • andersmurphy 7 hours ago

    This article is very much a reaction to that. The problem is the problem, as Mike Acton would say.