hazn 3 hours ago

I remember reading that restate.dev is a 'push' based workflow and therefore works well with serverless workflows: https://news.ycombinator.com/item?id=40660568

What's your take on these two topics, i.e. pull vs. push and how well DBOS works with serverless workflows?

plmpsu 6 hours ago

How does DBOS scale in a cluster? With Temporal or Dapr Workflows, applications register the workflow types or activities they support, and the orchestration framework balances work across them. How does this work with the library approach?

Also, how is DBOS handling workflow versioning?

Looking forward to your Java implementation. Thanks

  • qianli_cs 6 hours ago

    Good questions!

    DBOS naturally scales to distributed environments, with many processes/servers per application and many applications running together. The key idea is to use the database's concurrency control to coordinate multiple processes. [1]

    When a DBOS workflow starts, it's tagged with the code version of the application process that launched it. This way, you can safely change workflow code without breaking in-flight workflows; they'll continue running on the older version. As a result, rolling updates become easy and safe. [2]

    [1] https://docs.dbos.dev/architecture#using-dbos-in-a-distribut...

    [2] https://docs.dbos.dev/architecture#application-and-workflow-...
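    A rough sketch of the versioning idea (plain Python with hypothetical names, not the actual DBOS schema or API): each workflow record stores the code version that launched it, and a worker only recovers pending workflows tagged with its own version.

```python
from dataclasses import dataclass

# Hypothetical model of version-tagged workflows. Names are illustrative,
# not the actual DBOS schema or API.
@dataclass
class WorkflowRow:
    workflow_id: str
    app_version: str  # version of the application code that launched it
    status: str       # "PENDING" or "SUCCESS"

def recoverable(rows, worker_version):
    """A worker recovers only pending workflows tagged with its own code
    version, so in-flight workflows keep running on the code they started on."""
    return [r for r in rows
            if r.status == "PENDING" and r.app_version == worker_version]

rows = [
    WorkflowRow("wf-1", "v1", "PENDING"),
    WorkflowRow("wf-2", "v2", "PENDING"),
    WorkflowRow("wf-3", "v1", "SUCCESS"),
]

# After a rolling update, a v2 worker picks up only wf-2; wf-1 stays
# pinned to any remaining v1 worker.
print([r.workflow_id for r in recoverable(rows, "v2")])  # ['wf-2']
```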

    • plmpsu 5 hours ago

      Thanks for the reply.

      So applications continuously poll the database for work? Have you done any benchmarking to evaluate the throughput of DBOS when running many workflows, activities, etc.?

      • qianli_cs 4 hours ago

        In DBOS, workflows can be invoked directly as normal function calls or enqueued. Direct calls don't require any polling. For queued workflows, each process runs a lightweight polling thread that checks for new work using `SELECT ... FOR UPDATE SKIP LOCKED` with exponential backoff to prevent contention, so many concurrent workers can poll efficiently. We recently wrote a blog post on durable workflows, queues, and optimizations: https://www.dbos.dev/blog/why-postgres-durable-execution
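        In pseudocode, the polling loop looks roughly like this (an illustrative sketch of the pattern, not our exact implementation; the table and column names are made up). `SKIP LOCKED` lets many workers dequeue concurrently without blocking on each other's claimed rows:

```python
import time

# Illustrative dequeue query (hypothetical table/column names). Run inside
# a transaction so the row lock is held while the task is claimed.
DEQUEUE_SQL = """
SELECT workflow_id FROM workflow_queue
WHERE status = 'ENQUEUED'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED
"""

def poll_loop(try_dequeue, max_polls=10, base=0.05, cap=1.0, sleep=time.sleep):
    """Poll for work, doubling the sleep after each empty poll (up to `cap`)
    and resetting the backoff whenever a task is found."""
    delay = base
    for _ in range(max_polls):
        task = try_dequeue()  # would execute DEQUEUE_SQL in a transaction
        if task is not None:
            yield task
            delay = base      # reset backoff after a successful dequeue
        else:
            sleep(delay)
            delay = min(delay * 2, cap)
```

Here `try_dequeue` stands in for the database call, so the backoff logic can be exercised on its own.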

        Throughput mainly comes down to database writes: executing a workflow = 2 writes (input + output), each step = 1 write. A single Postgres instance can typically handle thousands of writes per second, and a larger one can handle tens of thousands (or even more, depending on your workload size). If you need more capacity, you can shard your app across multiple Postgres servers.
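        For a quick back-of-the-envelope estimate from that write accounting:

```python
def max_workflows_per_sec(db_writes_per_sec, steps_per_workflow):
    """Rough throughput bound: one workflow costs 2 writes (input + output)
    plus 1 write per step."""
    writes_per_workflow = 2 + steps_per_workflow
    return db_writes_per_sec / writes_per_workflow

# e.g. a Postgres instance sustaining 5,000 writes/sec, 3 steps per workflow:
print(max_workflows_per_sec(5000, 3))  # 1000.0
```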

saintarian 9 hours ago

Great project! Love the library+db approach. Some questions:

1. How much work is it to add bindings for new languages?
2. I know you provide conductor as a service. What are my options for workflow recovery if I don't have outbound network access?
3. Considering this came out of https://dbos-project.github.io/, do you guys have plans beyond durable workflows?

drakenot 9 hours ago

I read the DBOS vs. Temporal post, but can you speak more about whether there is a difference in durability guarantees?

  • KraftyOne 8 hours ago

    The durability guarantees are similar: each workflow step is checkpointed, so if a workflow fails, it can recover from the last completed step.
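    A toy model of how step checkpointing enables that recovery (illustrative only, not the DBOS API): completed step results are persisted, and a restarted workflow replays finished steps from the checkpoint store instead of re-executing them.

```python
# Toy checkpoint store: (workflow_id, step_index) -> saved result.
# In a real system this would be a database table, not a dict.
checkpoints = {}

def run_step(workflow_id, step_index, fn):
    """Run a step exactly once: return the saved result if the step already
    completed, otherwise execute it and durably record the result."""
    key = (workflow_id, step_index)
    if key in checkpoints:      # already completed before the crash
        return checkpoints[key]
    result = fn()
    checkpoints[key] = result   # checkpoint on completion
    return result

calls = []
def flaky_workflow(fail_at=None):
    for i, name in enumerate(["reserve", "charge", "notify"]):
        if i == fail_at:
            raise RuntimeError("crash before this step runs")
        run_step("wf-1", i, lambda n=name: calls.append(n) or n)

try:
    flaky_workflow(fail_at=2)   # first attempt crashes before "notify"
except RuntimeError:
    pass
flaky_workflow()                # recovery: steps 0-1 replay from checkpoints
print(calls)  # ['reserve', 'charge', 'notify'] -- each step ran exactly once
```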

    The big difference, as that blog post (https://www.dbos.dev/blog/durable-execution-coding-compariso...) describes, is the operational model. DBOS is a library you install into your app, whereas Temporal et al. require you to rearchitect your app to run on their workers and an external orchestrator.

    • dfee 8 hours ago

      This makes sense, but I wonder if there’s a place for DBOS, then, for each language?

      For example, a Rust library. Am I missing how a Go library is useful for non-Go applications?