Comment by mickeyp 2 days ago

Edit: disregard. I read it as he'd done it and had contention problems.

You can't. SQLite allows only a single writer at a time - it's one of the many reasons sqlite is terrible for serious work.

You'll need a multiprocessing Queue and a single writer process that picks off sentences one by one and commits them.

hruk 2 days ago

This is just untrue - the naive implementation (make the API call, write a single row to the db) will work fine, as transactions are quite fast on modern hardware.

What do you consider "serious" work? We've served a SaaS product from SQLite (roughly 300-500 queries per second at peak) for several years without much pain. Plus, it's not like PG and MySQL are pain-free, either - they all have their quirks.
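hruk's claim that single-row write transactions are fast is easy to sanity-check; here is a rough benchmark sketch (the table name, row count, and WAL setting are arbitrary choices for the sketch, and absolute numbers will vary with hardware):

```python
import os
import sqlite3
import tempfile
import time

# Rough measurement: how many single-row write transactions per second?
path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")  # WAL makes commits considerably cheaper
conn.execute("CREATE TABLE rows (v TEXT)")

n = 1000
start = time.perf_counter()
for i in range(n):
    with conn:  # each iteration is its own committed transaction
        conn.execute("INSERT INTO rows (v) VALUES (?)", (str(i),))
elapsed = time.perf_counter() - start

print(f"{n / elapsed:.0f} write transactions/sec")
```

Even on modest disks this tends to land comfortably above the write rates discussed in this thread, which is hruk's point.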

  • mickeyp 2 days ago

    I mean, it's not fine if he's hitting lock contention from SQLITE_BUSY errors, as he implies, is it? Much of his trouble will stem from write transactions blocking each other; maybe they are long-lived, maybe they are not. And those 300-500 queries --- are they writes or reads? Because reads are not a problem.
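For the SQLITE_BUSY contention being debated here, a common mitigation (a sketch of general practice, not anyone's actual configuration in this thread) is WAL mode plus a busy timeout, so a blocked writer waits briefly for the lock instead of failing immediately:

```python
import sqlite3


def open_db(path: str) -> sqlite3.Connection:
    # Sketch: open a connection configured to tolerate brief write
    # contention. The specific values are illustrative assumptions.
    conn = sqlite3.connect(path, timeout=5.0)  # retry on SQLITE_BUSY up to 5s
    conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the writer
    conn.execute("PRAGMA busy_timeout=5000")   # same knob at the SQL level, in ms
    return conn


db = open_db("contended.db")
```

With WAL enabled, readers and the single writer proceed concurrently; BUSY errors then only surface when two writers collide, and the timeout papers over short collisions.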

    • hruk 2 days ago

      Roughly 80/20 read to write. On the instance's gp3 EBS volume (which is pretty slow), we've pushed ~700 write transactions per second without much problem.

      • mickeyp 2 days ago

        For small OLTP workloads the locking is not going to be a problem. But anything that holds the write lock for even a measurable fraction of a second will gum things up real fast. Transactions that need it for many seconds? You'll quickly be dead in the water.
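mickeyp's point about lock-hold time can be illustrated with a hypothetical contrast (the `slow_api_call` helper and the table `t` are invented for this sketch):

```python
import sqlite3
import time


def slow_api_call() -> str:
    # Stand-in for a slow network call (hypothetical).
    time.sleep(0.05)
    return "sentence"


def insert_holding_lock(conn: sqlite3.Connection) -> None:
    # Anti-pattern: the write transaction stays open across the slow
    # call, so the write lock is held the whole time and every other
    # writer is blocked until the commit.
    with conn:
        conn.execute("INSERT INTO t (v) VALUES ('placeholder')")
        v = slow_api_call()  # lock held during this entire call
        conn.execute("UPDATE t SET v = ? WHERE v = 'placeholder'", (v,))


def insert_after_work(conn: sqlite3.Connection) -> None:
    # Better: do the slow work first, then hold the write lock only
    # for the duration of a single fast insert.
    v = slow_api_call()
    with conn:
        conn.execute("INSERT INTO t (v) VALUES (?)", (v,))
```

Both functions store the same row; the difference is purely how long the write lock is held, which is exactly the fraction-of-a-second-versus-many-seconds distinction mickeyp is drawing.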