Comment by jrochkind1 2 days ago
> The checkpoint system worked like this: every time we needed to perform a write, we would generate a UUID, store this UUID into a “checkpoint” for the current flow....
> Logically, that was… fine. In practice, writes into the same database which previously required 5 IO round trips, now required almost double the number of trips for the extra checkpointing operations...
So this sounds like trying to invent a form of optimistic locking, but it's definitely possible to do optimistic locking where no extra reads or writes are necessary unless there is a conflict. You write a lock value (whether timestamp or UUID) on every write, and then make every write conditional: "do this write only as long as the lock value has not changed."
But I guess the store they were using didn't support even a conditional write like that? Although I'm not sure how they managed to make this "checkpoint" system work without that feature either... this is a bit confusing: it seems like a more typical optimistic locking system should have been possible using whatever primitives the checkpointing system used. Maybe not?
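To make the pattern I mean concrete, here's a minimal sketch of conflict-free-path optimistic locking over a hypothetical KV store. The store, the `(version, value)` layout, and all names here are illustrative, not the article's actual system; the point is that the happy path costs no extra round trips, and the version check only bites on conflict.

```python
# Optimistic locking sketch: every write installs a fresh version token,
# and each write is conditional on the token being unchanged since the
# read. `store` is an in-memory stand-in for a KV store with conditional
# writes; real stores expose this as etags / If-Match / compare-and-set.
import uuid


class ConflictError(Exception):
    """Raised when the version changed between our read and our write."""


store = {}  # key -> (version, value)


def read(key):
    # Returns (version, value); (None, None) means the key doesn't exist yet.
    return store.get(key, (None, None))


def conditional_write(key, expected_version, new_value):
    current_version, _ = store.get(key, (None, None))
    if current_version != expected_version:
        raise ConflictError(f"version changed for {key!r}")
    # The write itself bumps the version, so no separate locking round trip.
    store[key] = (uuid.uuid4().hex, new_value)


# Usage: read, mutate locally, write-if-unchanged; retry the whole
# read-modify-write loop on ConflictError.
version, value = read("account")
conditional_write("account", version, {"balance": 100})
```

A concurrent writer that read the same `version` would now get a `ConflictError` and have to re-read and retry, which is the only case that costs extra trips.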
The KV store had etag support for conditional writes, but etags are only useful to make sure the data didn't change underneath you between your read and your write.
Storing the checkpoints along with the mutation was for idempotency: if the checkpoint was already in the document, that meant the mutation had succeeded, and a retry should be a no-op.
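The idempotency idea described above can be sketched like this. All names (`doc`, `apply_mutation`, the `checkpoints` set) are assumptions for illustration, not the article's actual schema; the key property is that the mutation and its checkpoint UUID land in the same document write, so a retry that finds the UUID knows the mutation already took effect.

```python
# Checkpoint-based idempotency sketch: the checkpoint UUID travels in the
# same document as the mutation, so "checkpoint present" is proof the
# mutation succeeded and the retry can safely do nothing.

def apply_mutation(doc, checkpoint_id, mutation):
    if checkpoint_id in doc["checkpoints"]:
        return doc  # retry after a successful write: no-op
    # Apply the mutation and record the checkpoint atomically (one write).
    new_doc = {**doc, **mutation}
    new_doc["checkpoints"] = doc["checkpoints"] | {checkpoint_id}
    return new_doc


doc = {"balance": 0, "checkpoints": set()}
doc = apply_mutation(doc, "ckpt-1", {"balance": 100})
doc = apply_mutation(doc, "ckpt-1", {"balance": 200})  # retry: ignored
```

In a real store you'd pair this with an etag-conditional write of `doc`, so a concurrent writer can't interleave between the checkpoint test and the mutation.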