Comment by KraftyOne 8 hours ago

Yes, in any durability framework there's still the possibility that a process crashes mid-step, in which case you have no choice but to restart the step.

Where DBOS really shines (vs. Temporal and other workflow systems) is a radically simpler operational model: it's just a library you can install in your app, instead of a big, heavyweight cluster you have to rearchitect your app around. This blog post goes into more detail: https://www.dbos.dev/blog/durable-execution-coding-compariso...
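
To make "just a library" concrete, here's a minimal sketch along the lines of DBOS's Python decorators. The decorator names match what their docs describe, but treat the initialization and setup calls as assumptions:

    # Sketch only: @DBOS.workflow / @DBOS.step follow DBOS's documented
    # Python API; the bare DBOS() / DBOS.launch() setup is an assumption.
    from dbos import DBOS

    DBOS()  # assumed: reads config (e.g. a Postgres URL) from your app

    @DBOS.step()
    def fetch_price(symbol: str) -> float:
        # The step's return value is checkpointed in the database. If the
        # process crashes mid-step, recovery re-runs the step from the top.
        return 42.0  # stand-in for a real external call

    @DBOS.workflow()
    def price_report(symbol: str) -> float:
        # If the process dies here, a restarted process resumes the
        # workflow, replaying checkpointed step results instead of
        # re-executing completed steps.
        price = fetch_price(symbol)
        return price * 1.1

    DBOS.launch()  # assumed: starts recovery of any interrupted workflows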

bjornsing 6 hours ago

> Yes, in any durability framework there's still the possibility that a process crashes mid-step, in which case you have no choice but to restart the step.

Golem [1] is an interesting counterexample to this. They run your code in a WASM runtime and essentially checkpoint execution state at every interaction with the outside world.

But it seems they are having trouble selling into the workflow orchestration market. Perhaps due to the preconception above? Or are there other drawbacks with this model that I’m not aware of?

1. https://www.golem.cloud/post/durable-execution-is-not-just-f...

  • vineyardmike 2 hours ago

    That still fundamentally suffers from the same idempotency problem as any other system. When interacting with the outside world, you, the developer, need to make those interactions idempotent and enforce that yourself.

    For example, if you call an API (the outside world) to charge the user's credit card, and the WASM host fails and the process is restarted, you'll need to be careful not to charge again. The crash can happen after the request is issued but before the response is received and processed.

    This is no different than any other workflow library or service.
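
    One common mitigation is an idempotency key derived from the workflow's identity, so a retry after a crash sends the same key and the provider can deduplicate the charge. A rough Python sketch; the endpoint and payment API are hypothetical, and the Idempotency-Key header follows the convention used by providers like Stripe:

        import uuid
        import requests

        def charge_card(workflow_id: str, step_name: str, amount_cents: int) -> dict:
            # Deterministic key: the same workflow/step always produces the
            # same key, so a crash-and-retry repeats the request with an
            # identical Idempotency-Key and the server charges only once.
            key = str(uuid.uuid5(uuid.NAMESPACE_URL, f"{workflow_id}/{step_name}"))
            resp = requests.post(
                "https://payments.example.com/charge",  # hypothetical endpoint
                json={"amount_cents": amount_cents},
                headers={"Idempotency-Key": key},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()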

    The WASM idea is interesting, and maybe lets you be more granular in how you checkpoint (e.g. for complex business logic that is self-contained but expensive to repeat). The biggest win is probably general preemption and resource management, but those are mostly wins for the provider, not the user. Also, this requires compiling your application to WASM, which restricts which languages, libraries, etc. you can use.

  • qianli_cs 6 hours ago

    I think one potential concern with "checkpoint execution state at every interaction with the outside world" is the size of the checkpoints. Allowing users to control the granularity by explicitly specifying the scope of each step seems like a more flexible model. For example, you can group multiple external interactions into a single step and only checkpoint the final result, avoiding the overhead of saving intermediate data. If you want finer granularity, you can instead declare each external interaction as its own step.

    Plus, if the crash happens in the outside world (where you have no control), then checkpointing at finer granularity won't help.
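
    To illustrate the grouping idea: with step-level checkpointing, you choose whether several external calls share one checkpoint or get one each. A sketch reusing the decorator style from above (API names and URLs are assumptions):

        import requests
        from dbos import DBOS

        @DBOS.step()
        def enrich_order(order_id: str) -> dict:
            # Coarse-grained: two external calls inside one step. Only the
            # final merged dict is checkpointed; a crash anywhere in here
            # re-runs both calls on recovery.
            base = "https://api.example.com"  # hypothetical service
            customer = requests.get(f"{base}/customers/{order_id}", timeout=10).json()
            prices = requests.get(f"{base}/prices/{order_id}", timeout=10).json()
            return {"customer": customer, "prices": prices}

        @DBOS.step()
        def fetch_customer(order_id: str) -> dict:
            # Fine-grained alternative: one call per step, so each response
            # is checkpointed separately and less work replays after a crash.
            url = f"https://api.example.com/customers/{order_id}"
            return requests.get(url, timeout=10).json()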

    • bjornsing 2 hours ago

      Sure, you get more control with explicit state management. But it's also more work, and more difficult work. You can do a lot of writes to NVMe for one developer salary.

jiggunjer 8 hours ago

Oh I see. Seems Nextflow is a strong contender in the serverless orchestrator market (serverless sounds better than embedded).

From what I can tell, though, NF just runs a single workflow at a time, with no queue or database; it relies on filesystem caching for "durability". That's been changing recently with some optional add-ons.
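
For context, that filesystem caching is what Nextflow's -resume flag builds on: rerunning a pipeline with `nextflow run main.nf -resume` reuses cached task outputs from the work directory instead of re-executing completed tasks (main.nf here is just a placeholder for your pipeline script). Durability is per-task and lives on disk, not in a database.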