Comment by fxtentacle 17 hours ago

I have a 160ms ping to news.ycombinator.com. Loading your comment took 1.427s of wall clock time. <s>Clearly, HN is so bad, it's complete bonkers ;)</s>

```shell
$ time curl -o tmp.del "https://news.ycombinator.com/item?id=42730748"
real    0m1.427s
```

"if you're taking 200ms for a DB query on an app with millions of users, you're screwed"

My calculation was 200ms for the DB queries plus the time it takes your server-side framework's ORM to parse the results and transform them into JSON. But even in general, I disagree. For high-throughput systems it typically makes sense to make the servers stateless (which adds additional DB queries) in exchange for the ability to just start 20 servers in parallel. And especially for PostgreSQL index scans, where all the I/O is cached in RAM anyway, single-core CPU performance quickly becomes a bottleneck. But a 100+ core EPYC machine can still reach 1000+ TPS for index scans that take 100ms each. And, BTW, the basic Shopify plan only allows 1 visitor per 17 seconds to your shop. That means a single EPYC server could still host 17,000 customers on the basic plan even if each visit causes 100ms of DB queries.
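The arithmetic behind that estimate can be checked in a few lines (the core count, per-query time, and 17-second visitor interval are the figures assumed in the comment, not measurements):

```python
# Back-of-envelope capacity check using the figures from the comment above.
cores = 100            # assumed 100+ core EPYC machine
query_time_s = 0.100   # assumed 100 ms of DB queries per visit

# If each core handles one 100 ms query at a time, the machine sustains
# cores / query_time_s visits per second.
visits_per_second = cores / query_time_s          # 1000 visits/s

# The basic Shopify plan (as cited above) allows 1 visitor per 17 seconds,
# so each hosted shop consumes 1/17 of a visit per second.
visitor_interval_s = 17
shops_hosted = visits_per_second * visitor_interval_s

print(f"{visits_per_second:.0f} visits/s -> {shops_hosted:.0f} shops")
# 1000 visits/s -> 17000 shops
```

This assumes queries parallelize perfectly across cores with no contention, which is the optimistic end of the estimate.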

sgarland 16 hours ago

Having indices doesn’t guarantee anything is cached; it just means that fetching tuples is often faster. And unless you have a covering index, you’re still going to have to hit the heap (which itself might also be partially or fully cached). Even with a covering index, you might still have to hit the heap to determine tuple visibility if the pages are being frequently updated.
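The covering-index distinction can be illustrated with SQLite's query planner (chosen only because it ships with Python; the table and column names here are made up for the demo). The planner reports whether the index alone satisfies the query or the table must also be read:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer INTEGER, total REAL)")

# A plain index on `customer` finds the matching rows, but the query also
# needs `total`, so the engine must still visit the table itself
# (the equivalent of "hitting the heap" in Postgres).
conn.execute("CREATE INDEX idx_customer ON orders (customer)")
plan1 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer = 1"
).fetchone()[3]
print(plan1)   # e.g. "SEARCH orders USING INDEX idx_customer (customer=?)"

# A covering index stores every column the query touches, so the table
# is never read at all.
conn.execute("CREATE INDEX idx_customer_total ON orders (customer, total)")
plan2 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT total FROM orders WHERE customer = 1"
).fetchone()[3]
print(plan2)   # e.g. "SEARCH orders USING COVERING INDEX idx_customer_total (customer=?)"
```

In Postgres the same effect is achieved either by indexing all referenced columns or with `CREATE INDEX ... INCLUDE (...)`; the visibility caveat above still applies, since Postgres must consult the heap for any page the visibility map can't vouch for.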

Also, Postgres has supported parallel scans for quite a long time, so single-core performance isn’t necessarily the dominating factor.

e12e 11 hours ago

That seems really slow for a GET request to HN without a session cookie (fetching only cacheable data).

And since you're not logged in, it's probably a poor comparison with a Shopify app.