storystarling 5 days ago

This sounds like a classic consistency vs latency trade-off. Enforcing strict quotas across distributed services usually requires coordination that kills performance. They likely rely on asynchronous counters that drift, meaning the frontend check passes but the backend reconciliation fails later. It is surprisingly hard to solve this without making the uploader feel sluggish.
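A minimal sketch of that drift, assuming a frontend that checks quota against an asynchronously replicated counter (all names and numbers here are invented for illustration):

```python
class QuotaReplica:
    """Hypothetical quota store: the backend counter is authoritative,
    the frontend reads a lagging async replica."""
    def __init__(self, limit):
        self.limit = limit
        self.authoritative = 0  # backend's true usage
        self.replica = 0        # frontend's stale async copy

    def record_upload(self, size):
        # The backend increments immediately; the replica only
        # catches up when sync() runs (simulating replication lag).
        self.authoritative += size

    def sync(self):
        self.replica = self.authoritative

    def frontend_check(self, size):
        return self.replica + size <= self.limit

    def backend_check(self, size):
        return self.authoritative + size <= self.limit

q = QuotaReplica(limit=100)
q.record_upload(90)          # counted on the backend, replica not yet synced
print(q.frontend_check(50))  # True  -- stale replica still says 0 used
print(q.backend_check(50))   # False -- backend reconciliation rejects it
```

The frontend happily lets you attempt the upload, and the backend bounces it, exactly the coordination-vs-latency gap described above.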

LoganDark 5 days ago

That would explain why the front-end would allow you to attempt something that goes over your limits, but not why the back-end would reject something that doesn't go over your limits.

  • goblin89 4 days ago

    My bet at the time was that they have a bunch of hidden extra limits based on account age, IP/user agent information, etc. If that is true, their problem is that they advertise the larger limits instead of the smaller limits (to get more users signed up), and that they do not communicate when their extra limits apply and instead straight up upsell you, which are both dark patterns.

    • storystarling 4 days ago

      That sounds plausible. I've had to implement similar reputation-based limits on my own backend just to keep inference costs from exploding, so I sympathize with the fraud prevention angle. Masking that as a generic quota issue to push an upsell is pretty hostile though.

      • goblin89 4 days ago

        The feeling of being gaslit, when I calculated and recalculated the length of my tracks and compared it with limits on their pricing page, was quite unpleasant.

        Another possibility is that they reduced their limits from 3 to 2 hours of audio around the same time. I don’t know whether that happened before or after my experience; I did not read their blogs or press releases, only made sure I was well under whatever limits were currently listed on their pricing & plans page (I was probably under 2 hours as well, but at this point can’t be bothered to check). Perhaps that transition was chaotic, and for some time their left hand did not know what the right hand was doing.
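A toy version of the hidden reputation-based limits speculated about above might look like this; every threshold and parameter name here is made up:

```python
def effective_limit(advertised_limit_s, account_age_days, flagged_ip):
    """Hypothetical reputation gate: new or suspicious accounts get a
    fraction of the advertised quota, and nothing on the pricing page
    tells you. All thresholds are invented for illustration."""
    limit = advertised_limit_s
    if account_age_days < 7:
        limit = min(limit, advertised_limit_s // 2)  # new accounts: halved
    if flagged_ip:
        limit = min(limit, advertised_limit_s // 4)  # suspicious IP: quartered
    return limit

# Advertised 3 h (10800 s), but a 3-day-old account on a flagged IP
# effectively gets 45 min (2700 s):
print(effective_limit(10800, account_age_days=3, flagged_ip=True))  # 2700
```

If something like this is in play, your careful track-length arithmetic against the pricing page can never come out right, which matches the gaslighting feeling described above.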

  • storystarling 4 days ago

    Fair point. I suspect it comes down to ghost reservations or stale caches. If a previous upload failed mid-flight but didn't roll back the quota reservation immediately, the backend thinks you're over the limit until a TTL expires. Or you delete something to free up space, but the decrement hasn't propagated to the replica checking your quota yet. Retries make this worse: if an upload times out after the counter has already incremented, the system counts that space as used until an async cleanup job runs. Ghost usage like that is really common in eventually consistent systems.
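A rough sketch of that ghost-reservation failure mode, assuming reservations are only released by a TTL-based cleanup (names and TTL values are illustrative, not anyone's real implementation):

```python
import time

class ReservationQuota:
    """Hypothetical quota with pre-upload reservations. If an upload dies
    mid-flight and never rolls back, its reservation lingers as 'ghost
    usage' until the TTL cleanup drops it."""
    def __init__(self, limit_s, ttl_s=60.0):
        self.limit_s = limit_s
        self.ttl_s = ttl_s
        self.committed = 0
        self.reservations = {}  # upload_id -> (seconds, created_at)

    def _live_reserved(self, now):
        # The async cleanup: drop reservations older than the TTL.
        self.reservations = {k: v for k, v in self.reservations.items()
                             if now - v[1] < self.ttl_s}
        return sum(s for s, _ in self.reservations.values())

    def try_reserve(self, upload_id, seconds, now=None):
        now = time.monotonic() if now is None else now
        if self.committed + self._live_reserved(now) + seconds > self.limit_s:
            return False  # "over quota" -- possibly only because of ghosts
        self.reservations[upload_id] = (seconds, now)
        return True

q = ReservationQuota(limit_s=7200)            # advertised 2 h
q.try_reserve("u1", 5400, now=0.0)            # upload dies, never rolled back
print(q.try_reserve("u2", 3600, now=10.0))    # False: ghost still counted
print(q.try_reserve("u2", 3600, now=100.0))   # True: TTL cleanup freed it
```

From the user's side, both rejections look identical to a real quota violation, which would explain being bounced while demonstrably under the published limits.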