Comment by joaohaas a day ago

Since the post is about the benefits of React, I'm sure that if requests were involved they would mention it.

Also, even if it was involved, 200ms for round-trip and DB queries is complete bonkers. Most round-trips don't take more than 100ms, and if you're taking 200ms for a DB query on an app with millions of users, you're screwed. Most queries should take max 20-30ms, with some outliers in places where optimization is hard taking up to 80ms.
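As a rough illustration of the kind of budget being described, here's a sketch that times a query against an in-memory SQLite table. The numbers are toys, not representative of a production Postgres setup, but the measuring pattern is the same:

```python
# Illustrative sketch: measuring one query's latency against a budget.
# The 20-30ms rule of thumb above depends entirely on schema, hardware,
# and load; this in-memory example will be far faster than that.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)",
                 [(i * 1.5,) for i in range(10_000)])

start = time.perf_counter()
row = conn.execute("SELECT COUNT(*), AVG(total) FROM orders").fetchone()
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"query took {elapsed_ms:.2f} ms, result={row}")
```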

bobnamob a day ago

> 200ms for round-trip and DB queries is complete bonkers

Never lived in Australia I see

  • yxhuvud a day ago

    If the Shopify app's P75 response time is that slow because the users are in Australia, then they should get a data center there.

    • netdevphoenix 21 hours ago

      In the real world, you can't just optimise for the sake of it. You need a business case, because it all boils down to revenue vs expenses.

      • philipwhiuk 19 hours ago

        If the P75 is bad because of Australia that means 25% of their customer base is Australian.

    • bobnamob a day ago

      Should they?

      You could do the maths on conversion rate increase if that latency disappeared vs the cost of spinning up a dc & running it (including the mess that is localised dbs)

      I’m not sure the economics works out for most businesses (I say this as an Australian)

      • yxhuvud 18 hours ago

        Probably not, because the if-statement in my post is likely false. The Australian user base is likely not large enough.

xmprt a day ago

> Most queries should take max 20-30ms

Most queries are 20-30ms. But a worst case of 200ms for large payloads or edge cases or just general degradations isn't crazy. Without knowing if 500ms is a p50 or p99 it's kind of a meaningless metric but assuming it's a p99, I think it's not as bad as the original commenter stated.
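A quick sketch of how much the reported percentile changes the picture. The sample distribution here is made up (mostly fast queries with a heavy slow tail), but it shows why a p50, p75, and p99 of the same workload can tell very different stories:

```python
# Sketch: the same latency distribution looks very different depending
# on which percentile you report. Sample numbers are invented.
import random

random.seed(42)
# Mostly fast queries (~30ms) with a heavy tail of slow ones.
samples = ([random.gauss(30, 5) for _ in range(950)] +
           [random.uniform(150, 600) for _ in range(50)])

def percentile(data, p):
    s = sorted(data)
    return s[int(p / 100 * (len(s) - 1))]

for p in (50, 75, 99):
    print(f"p{p}: {percentile(samples, p):.0f} ms")
```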

  • gooosle a day ago

    They mention later in the article that the 500ms is p75.

    Realistically, 50ms p75 should be achievable for the level of complexity in the Shopify app.

    • bushbaba a day ago

      P75. I can only imagine the p90 and p99 are upwards of 1 second.

      • akie 20 hours ago

        Agreed. The P95 and P99 in particular are likely to be over 1 second, possibly over 2. They chose P75 to be able to post a seemingly impressive number.

        I personally wouldn't be very happy with a P75 of 500 ms. It's slow.

  • spockz 15 hours ago

    Ah. I see we are spoiled with <4ms queries on our database. See, it all depends on perspective and use case. :)

fxtentacle 17 hours ago

I have a 160ms ping to news.ycombinator.com. Loading your comment took 1.427s of wall clock time. <s>Clearly, HN is so bad, it's complete bonkers ;)</s>

    time curl -o tmp.del "https://news.ycombinator.com/item?id=42730748"
    real    0m1.427s

"if you're taking 200ms for a DB query on an app with millions of users, you're screwed"

My calculation was 200ms for the DB queries plus the time it takes your server-side framework's ORM to parse the results and transform them into JSON. But even in general, I disagree. For high-throughput systems it typically makes sense to make the servers stateless (which adds additional DB queries) in exchange for the ability to just start 20 servers in parallel. And especially for PostgreSQL index scans where all the IO is cached in RAM anyway, single-core CPU performance quickly becomes a bottleneck. But a 100+ core EPYC machine can still reach 1000+ TPS for index scans that take 100ms each.

And, BTW, the basic Shopify plan only allows 1 visitor per 17 seconds to your shop. That means a single EPYC server could still host 17,000 customers on the basic plan even if each visit causes 100ms of DB queries.
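The back-of-envelope arithmetic behind that claim, spelled out. The plan limit and per-visit DB time are the figures asserted in the comment above, not verified Shopify numbers:

```python
# Back-of-envelope capacity estimate. Inputs are the comment's claims,
# not verified Shopify figures.
query_time_s = 0.100           # claimed DB time per visit
visits_per_shop_per_s = 1 / 17 # claimed basic-plan rate limit
cores = 100                    # one big EPYC box

visits_per_core_per_s = 1 / query_time_s            # 10 visits/s per core
total_visits_per_s = cores * visits_per_core_per_s  # 1000 visits/s
shops_supported = total_visits_per_s / visits_per_shop_per_s

print(round(shops_supported))  # 17000
```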

  • sgarland 16 hours ago

    Having indices doesn’t guarantee anything is cached, it just means that fetching tuples is often faster. And unless you have a covering index, you’re still going to have to hit the heap (which itself might also be partially or fully cached). Even then, you still might have to hit the heap to determine tuple visibility, if the pages are being frequently updated.

    Also, Postgres has supported parallel scans for quite a long time, so single-core performance isn’t necessarily the dominating factor.

  • e12e 11 hours ago

    That seems really slow for a GET request to HN without a session cookie (fetching only cacheable data).

    And being not logged in - probably a poor comparison with Shopify app.

andy_ppp a day ago

I do not understand this thinking at all. A parsed response into whatever rendering engine, even if extremely fast, is going to be a large percentage of this 500ms page load. Dismissing it with magical thinking about pure database queries under load, with no understanding of the complexity of Shopify, is quite frankly ridiculous. Next up you'll be telling everyone to roll their own file sharing with rsync or something…

  • flohofwoe a day ago

    I know - old man yells at cloud and stuff - but some 8-bit home computers from the 80s completed their entire boot sequence in about half a second. What does a 'UI rendering engine' need to do that takes half a second on a device that's tens of thousands of times faster? Everything on modern computers should be 'instant' (some of that time may include internet latency of course, but I assume that the Shopify devs don't live on the moon).

    • chrisandchris a day ago

      Moorsches Law v2 (/s) states that while computers get faster, we add more layers, so computers actually get slower.

      • ezekiel68 21 hours ago

        Back when "WinTel" was a true duopoly, we used to call this "Gates Law".

    • netdevphoenix 21 hours ago

      Not sure why people keep bringing up the old "my machine x years ago was faster". Machines nowadays do way more than machines from the 80s. Whether the tasks they do are useful or not is a separate discussion.

      • sgarland 16 hours ago

        Casey Muratori has a clip [0] discussing the performance differences between Visual Studio in 2004 vs. today.

        Anecdotally, I’ve been playing AoE2: DE a lot recently, and have noticed it briefly stuttering / freezing during battles. My PC isn’t state of the art by any means (Ryzen 7 3700X, 32GB PC4-24000, RX580 8GB), but this is an isometric RTS we’re talking about. In 2004, I was playing AoE2 (the original) on an Athlon XP 2000+ with maybe 1GB of RAM at most. I do not ever remember it stuttering, freezing, or in any way struggling. Prior to that, I was playing it on a Pentium III 550 MHz, and a Celeron 333 MHz. Same thing.

        A great anti-example of this pattern is Factorio. It’s also an isometric top-down game, with RTS elements, but the devs are serious about performance. It’s tracking god knows how many tens or hundreds of thousands of objects (they’re simulating fluid flow in pipes FFS), with a goal of 60 FPS/UPS.

        Yes, computers today are doing more than computers from the 80s or 90s, but the hardware is so many orders of magnitude faster that it shouldn’t matter. Software is by and large slower, and it’s a deliberate choice, because it doesn’t have to be that way.

        [0]: https://www.youtube.com/watch?v=MR4i3Ho9zZY

    • kristiandupont 20 hours ago

      Sure, and the screen in text mode was 80 x 25 chars = 2000 bytes of memory. A new phone has perhaps three million pixels, each taking 4 bytes. There's a significant difference.
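The arithmetic behind that comparison, using the figures from the comment above (one byte per character cell, ~3M pixels at 4 bytes each):

```python
# Rough framebuffer arithmetic: 80x25 text mode vs a modern phone.
# Figures are the ones quoted in the comment, not measured values.
text_mode_bytes = 80 * 25              # one byte per character cell
phone_bytes = 3_000_000 * 4            # ~3M pixels, 4 bytes each

print(text_mode_bytes)                 # 2000
print(phone_bytes // text_mode_bytes)  # 6000 (x more memory to fill)
```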

      • flohofwoe 17 hours ago

        And yet the GPU in your phone can run a small program for each pixel taking hundreds or even thousands of clock cycles to complete and still hit a 60Hz frame rate or more. It's not the hardware that's the problem, but the modern software Jenga tower that drives it.