Comment by eallam
A couple of things, purely from the tech angle:
- We're not really an agent framework, but more of an agent runtime that is agnostic to which framework you run on our infra. Lots of people run LangChain, Mastra, the AI SDK, hand-rolled code, etc. on top of us, since we are just a compute platform. We have the building blocks needed to run any kind of agent or AI workflow: the ability to run system packages (anything from Chrome to ffmpeg), long-running execution (no timeouts), and realtime updates to your frontend (including streaming tokens). We also provide queues and concurrency limits for things like multitenant concurrency, observability built on OpenTelemetry, and schedules for ETL/ELT data work (including multitenant schedules).
- We are TS-first and believe the future of agents and AI applications will be won by TS devs.
- We have a deep integration with snapshotting, so code can be written in a natural way but still exhibit continuation-style behavior. For example, you can trigger another agent, task, or tool (say, an agent that specializes in browser use) and wait for the result as a tool call result. Instead of introducing a serialization boundary so you can stop compute while waiting and then rehydrate and resume through skipped "steps" or activities, we snapshot the process, kill it, and resume it later, continuing from the exact same process state as before. This is all handled under the hood and managed by us. We're currently using CRIU for this, but will be moving to whole-VM snapshots with our MicroVM release.