Comment by arcbyte
I'm working on a new Event Sourcing database that elevates the WAL into a first-class application concept, like a message queue. So instead of standing up a PostgreSQL instance, a Kafka instance, and a bunch of custom event sourcing plumbing, you stand up this database and publish all your application events as messages. For the database part you just define the mappings from event to table row, and you get read models and snapshots for free.
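To make that concrete, a projection mapping might look roughly like this (illustrative TypeScript with made-up event and row names, not the actual API):

```typescript
// Hypothetical illustration of an "event to table row" mapping:
// each event type folds into an upsert/delete against a read-model table.

type AccountOpened = { type: "AccountOpened"; accountId: string; owner: string };
type AccountClosed = { type: "AccountClosed"; accountId: string };
type AccountEvent = AccountOpened | AccountClosed;

// Row shape for a hypothetical "accounts" read-model table.
interface AccountRow {
  accountId: string;
  owner: string;
  status: "open" | "closed";
}

// A projection: fold each event into the current row (null means no row yet).
function project(row: AccountRow | null, event: AccountEvent): AccountRow | null {
  switch (event.type) {
    case "AccountOpened":
      return { accountId: event.accountId, owner: event.owner, status: "open" };
    case "AccountClosed":
      return row ? { ...row, status: "closed" } : null;
  }
}

// Replaying the event log through `project` yields the read model "for free";
// a snapshot is just the accumulated row state at some log offset.
```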
The real key here is how migrations over time are handled seamlessly. Never again do you have to meet with half a dozen teams to figure out what a field does and whether you still need it: you can identify all the logic affecting the field and the full history of every change to it, then create a mapping. Deploy it, and the system migrates data on the fly as needed.
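As a rough illustration of the on-the-fly migration idea (hypothetical names and a generic upcasting technique, not necessarily the product's actual mechanism):

```typescript
// Sketch of on-the-fly migration via event upcasting, assuming stored
// events carry a schema version and old versions are rewritten to the
// current shape at read time.

type UserRegisteredV1 = { version: 1; name: string };               // old: single name field
type UserRegisteredV2 = { version: 2; firstName: string; lastName: string };

// One mapping per schema change; once deployed, old events are migrated
// lazily as they are read, with no big-bang data migration.
function upcast(event: UserRegisteredV1 | UserRegisteredV2): UserRegisteredV2 {
  if (event.version === 2) return event;
  const [firstName = "", ...rest] = event.name.split(" ");
  return { version: 2, firstName, lastName: rest.join(" ") };
}
```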
Still in stealth mode with a private GitHub repo, but the launch is coming.
I have done something similar for a few customers. A useful pattern I have found is to have both raw queues (incoming data) and a clean queue (outgoing data). Outgoing data goes into a single queue only, so all changes are totally ordered and we avoid eventual consistency. The clean queue has a well-defined data model (specified in a custom DSL) and tables/a REST API that correspond 1-to-1 to that model. Then all we need are mappings from the raw queues to the clean queue, roughly as in the sketch below.
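Illustrative TypeScript only; the raw shapes, clean model, and queue plumbing here are invented stand-ins for what the DSL and infrastructure would actually generate:

```typescript
// Sketch of the raw-queue -> clean-queue pattern described above.

// Raw events arrive on per-source queues in whatever shape each source uses.
type RawCrmEvent = { src: "crm"; payload: { cust_id: string; full_name: string } };
type RawWebEvent = { src: "web"; payload: { userId: string; displayName: string } };
type RawEvent = RawCrmEvent | RawWebEvent;

// The clean queue has a single well-defined model; the custom DSL would
// generate a type like this, plus the matching tables and REST endpoints.
type CleanEvent = { entity: "customer"; id: string; name: string };

// Mappings from each raw shape into the clean model.
function toClean(raw: RawEvent): CleanEvent {
  switch (raw.src) {
    case "crm":
      return { entity: "customer", id: raw.payload.cust_id, name: raw.payload.full_name };
    case "web":
      return { entity: "customer", id: raw.payload.userId, name: raw.payload.displayName };
  }
}

// A single writer appends to the one clean queue, so every downstream
// consumer sees changes in one total order, avoiding eventual consistency
// between read models fed from the same queue.
const cleanQueue: CleanEvent[] = [];
function publish(raw: RawEvent): void {
  cleanQueue.push(toClean(raw));
}
```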