Comment by kxbnb
The MCP Server integration is a great addition - being able to have Claude manage VMs directly opens up interesting sandboxing patterns for agent workflows.
One thing I've been thinking about with agents running in isolated environments: how do you handle visibility into what API calls the agent is making from within the VM? Right now we rely on proxying outbound requests to see what's actually happening. Does Lume expose any of that through the MCP interface?
Nice work on the unattended setup - that's usually the painful part.
Thanks! On API call visibility - Lume's MCP interface doesn't expose outbound network traffic directly. It's focused on VM lifecycle (create, run, stop) and command execution, not network inspection.
For agent observability, we handle this at the Cua framework level rather than the VM level:
- Agent actions and tool calls are logged via our tracing integration (Laminar, OpenTelemetry).
- You can see the full decision trace: what the agent saw, what it decided, and what tools it invoked (rough span sketch below).
- For the "what HTTP requests actually went out" question, proxying is still the right approach: you could configure the VM's network to route through a transparent proxy, or set up mitmproxy inside the VM (see the sketch below). We haven't built that into Lume itself since network inspection feels orthogonal to VM management.
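To make those two points a bit more concrete, here are rough sketches - generic patterns, not Lume/Cua internals, and the helper names (`run_tool`, `capture_requests.py`) are made up for illustration.

The tracing side is just standard OpenTelemetry spans around each tool call, so the decision trace ends up queryable in whatever backend you export to:

```python
# Generic OpenTelemetry sketch (not Cua's actual integration): wrap each
# agent tool call in a span so the decision trace can be inspected later.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Console exporter just for illustration; swap in your real backend.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("agent")

def run_tool(name: str, args: dict) -> str:
    # Hypothetical helper: record what the agent invoked and with what args.
    with tracer.start_as_current_span("tool_call") as span:
        span.set_attribute("tool.name", name)
        span.set_attribute("tool.args", str(args))
        result = f"ran {name}"  # placeholder for the real tool execution
        span.set_attribute("tool.result_preview", result[:200])
        return result

run_tool("screenshot", {"display": 1})
```

And for the proxy route, a minimal mitmproxy addon that logs each outbound request is usually enough to see which APIs the agent is actually hitting:

```python
# capture_requests.py - minimal mitmproxy addon that logs every outbound
# HTTP(S) request, e.g. the calls an agent makes from inside the VM.
# Run with: mitmdump -s capture_requests.py --listen-port 8080
from mitmproxy import http

class CaptureRequests:
    def request(self, flow: http.HTTPFlow) -> None:
        # Method + full URL is usually enough to see which APIs were hit.
        print(f"{flow.request.method} {flow.request.pretty_url}")

addons = [CaptureRequests()]
```

Then inside the VM you'd point `HTTP_PROXY`/`HTTPS_PROXY` at wherever that proxy is listening and trust mitmproxy's CA certificate so HTTPS traffic can be inspected.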
That said, it's an interesting idea - exposing a proxy config option in Lume that automatically routes VM traffic through a capture layer. Would that be useful for your workflow?