sitnik 6 months ago

Hi! How does it perform under heavy load, with thousands of workflows trying to run concurrently, given that it relies on Postgres for so much (including transactions)? In the end it seems that if I have an application with lots of distributed workers trying to run workflows, I'll still be limited by the CPU/memory of the DB.

sarahdellysse 6 months ago

Hi there, I think I might have found a typo in your example class in the GitHub README. In the class's `workflow` method, shouldn't we be `await`-ing those steps?

nahuel0x 6 months ago

Can you change the workflow code for a running workflow that has already advanced through some steps? What support does DBOS have for workflow evolution?

ilove196884 6 months ago

I know this might sound scripted or clichéd, but what is the use case for DBOS?

  • qianli_cs 6 months ago

    The main use case is to build reliable programs. For example, orchestrating long-running workflows, running cron jobs, and coordinating AI agents with a human in the loop.

    DBOS makes external asynchronous API calls reliable and crashproof, without needing to rely on an external orchestration service.
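
    For a concrete picture, here's a minimal sketch of the kind of program this targets, assuming the Python decorator API (`@DBOS.workflow()` / `@DBOS.step()`) from the docs; the function names and step bodies are made up for illustration, and setup/config is simplified:

    ```python
    from dbos import DBOS

    DBOS()  # configuration (e.g. the Postgres connection) is elided here

    @DBOS.step()
    def charge_payment(order_id: str) -> None:
        ...  # external API call; its completion is checkpointed in Postgres

    @DBOS.step()
    def send_confirmation(order_id: str) -> None:
        ...  # another side-effecting step

    @DBOS.workflow()
    def checkout(order_id: str) -> None:
        charge_payment(order_id)
        send_confirmation(order_id)  # if the process crashes here, recovery resumes from this step

    DBOS.launch()
    ```

    If the process dies mid-workflow, a restarted instance can replay `checkout` from its recorded inputs and skip the steps that already completed.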

peterkelly 6 months ago

How do you persist execution state? Does it hook into the Python interpreter to capture referenced variables/data structures etc, so they are available when the state needs to be restored?

  • KraftyOne 6 months ago

    That work is done by the decorators! They wrap around your functions and store the execution state of your workflows in Postgres, specifically:

    - Which workflows are executing

    - What their inputs were

    - Which steps have completed

    - What their outputs were

    Here's a reference for the Postgres tables DBOS uses to manage that state: https://docs.dbos.dev/explanations/system-tables
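
    To make that concrete, here's a rough conceptual sketch (not DBOS's actual implementation; the `step_outputs` table and direct psycopg usage are made up) of how a decorator can checkpoint step outputs in Postgres so a recovered workflow skips steps that already ran:

    ```python
    import functools
    import json

    import psycopg  # illustrative direct use of Postgres


    def durable_step(conn: psycopg.Connection, workflow_id: str):
        """Record each step's output; on replay, return the recorded output instead of re-running."""
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(step_id: int, *args, **kwargs):
                row = conn.execute(
                    "SELECT output FROM step_outputs WHERE workflow_id = %s AND step_id = %s",
                    (workflow_id, step_id),
                ).fetchone()
                if row is not None:  # step already completed: replay its recorded output
                    return json.loads(row[0])
                result = fn(*args, **kwargs)  # first execution: actually run the step
                conn.execute(
                    "INSERT INTO step_outputs (workflow_id, step_id, output) VALUES (%s, %s, %s)",
                    (workflow_id, step_id, json.dumps(result)),
                )
                conn.commit()
                return result
            return wrapper
        return decorator
    ```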

    • CMCDragonkai 6 months ago

      All of this seems like it would fit any transactional key-value store.

Dinux 6 months ago

Hi, really cool project! This is something I can actually use.

mnembrini 6 months ago

About workflow recovery: if I'm running multiple instances of my app that use DBOS and they all crash, how do you divide the work of retrying pending workflows?

gbuk2013 6 months ago

FYI the “Build Crashproof Apps” button in your docs doesn’t do anything.

  • qianli_cs 6 months ago

    You'll need to click either the Python or TypeScript icon. We support both languages and will add more icons there.

    • gbuk2013 6 months ago

      Thanks, the icons work!

      I was originally looking at the docs to see if there was any information on multi-instance (horizontally scaled) apps. Is this supported? If so, how does that work?

      • qianli_cs 6 months ago

        Yeah, DBOS Cloud automatically (horizontally) scales your apps. For self-hosting, you can spin up multiple instances and connect them to the same Postgres database. For fan-out patterns, you may leverage DBOS Queues. This works because DBOS uses Postgres for coordination, rate limiting, and concurrency control. For example, you can enqueue tasks that are processed by multiple instances; DBOS makes sure that each task is dequeued by one instance.

        Docs for Queues and Parallelism: https://docs.dbos.dev/typescript/tutorials/queue-tutorial
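
        As a rough sketch of that fan-out pattern, assuming the Python `Queue` API (`Queue`, `enqueue`, `get_result`) described in the docs; the queue name, concurrency limit, and task function are made up for illustration:

        ```python
        from dbos import DBOS, Queue

        # The concurrency limit is enforced through Postgres, so it applies
        # across every instance connected to the same database.
        queue = Queue("reports", concurrency=10)

        @DBOS.step()
        def generate_report(region: str) -> str:
            ...  # some slow, side-effecting work

        @DBOS.workflow()
        def nightly_reports(regions: list[str]) -> list[str]:
            handles = [queue.enqueue(generate_report, r) for r in regions]  # any instance may pick these up
            return [h.get_result() for h in handles]  # each task is dequeued by exactly one instance
        ```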