Agents need more than a sandbox
Agents execute code, make HTTP requests, create files, and manage credentials. In production, every session needs isolation, persistence, and governance. Without that:
- Credentials leak into LLM context and logs
- Sessions lose state between turns and restarts
- There’s no audit trail of what the agent actually did
- Agents can access anything on the network
Superserve gives every agent a governed workspace
Isolated by default
Every session runs in its own Firecracker microVM with a dedicated kernel. The agent gets full root access to execute code, install packages, and make HTTP requests - nothing leaks into another session.
Nothing disappears
The /workspace filesystem survives across turns, restarts, and days. Pick up a session hours or weeks later - every file and conversation is exactly where the agent left it.
Credentials stay hidden
A credential proxy injects API keys at the network level. The agent makes authenticated requests without ever seeing the credentials - they never appear in context, logs, or tool outputs.
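The idea behind network-level injection can be sketched in a few lines. This is an illustrative model, not Superserve's actual implementation: the agent builds a plain, unauthenticated request, and a proxy layer attaches the stored key before the request leaves the sandbox. The store and field names here are hypothetical.

```python
# Hypothetical secret store - keys live here, never in the agent's context.
SECRET_STORE = {"api.example.com": "sk-live-abc123"}

def inject_credentials(request: dict) -> dict:
    """Attach the stored API key for the target host, if one exists."""
    key = SECRET_STORE.get(request["host"])
    if key:
        headers = dict(request.get("headers", {}))
        headers["Authorization"] = f"Bearer {key}"
        request = {**request, "headers": headers}
    return request

# The agent's view: a request with no secrets in it.
agent_request = {"host": "api.example.com", "path": "/v1/data", "headers": {}}
# The wire view: the proxy has added the Authorization header.
wire_request = inject_credentials(agent_request)
```

The key property is that `agent_request` is all the model ever sees or logs; only `wire_request` touches the network.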
Full audit trail
Every tool call, file write, and HTTP request the agent makes is logged and queryable. Get a full execution timeline for each session, not just the chat transcript.
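To make "queryable" concrete, here is a sketch of the kind of per-session timeline such a log supports. The event shape and field names are illustrative, not Superserve's schema.

```python
# Hypothetical audit events from two concurrent sessions.
events = [
    {"session": "s1", "t": 1, "kind": "tool_call", "detail": "run_python"},
    {"session": "s2", "t": 2, "kind": "http", "detail": "GET api.example.com"},
    {"session": "s1", "t": 3, "kind": "file_write", "detail": "/workspace/out.csv"},
    {"session": "s1", "t": 4, "kind": "http", "detail": "POST api.example.com"},
]

def timeline(session_id: str) -> list[str]:
    """Return a chronological record of what one session actually did."""
    rows = sorted((e for e in events if e["session"] == session_id),
                  key=lambda e: e["t"])
    return [f'{e["t"]}: {e["kind"]} {e["detail"]}' for e in rows]
```

Filtering by session and sorting by time yields the execution timeline - every action the agent took, not just what it said.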
Deploy in one command
No Dockerfile, no server code, no config files.
Any framework
Superserve works with any agent framework - or no framework at all.
- Claude Agent SDK
- OpenAI Agents SDK
- LangChain / LangGraph
- Mastra
- Pydantic AI
- Plain stdin/stdout - any script that reads input and writes output
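The "no framework" case is the simplest shape an agent can take - a script that reads input and writes output. A minimal sketch (the echo logic is a placeholder; a real agent would call a model here):

```python
import sys

def respond(message: str) -> str:
    """Placeholder agent logic - swap in a model call."""
    return f"agent saw: {message.strip()}"

def main() -> None:
    # One input per line on stdin, one response per line on stdout.
    for line in sys.stdin:
        print(respond(line), flush=True)

if __name__ == "__main__":
    main()
```

Anything with this read/write loop shape can run as a session.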
What’s coming next
Tell us what matters most.
Spend limits and circuit breakers
Set per-session and per-agent cost caps. If an agent enters a loop making expensive API calls, the sandbox kills it before you get a surprise bill. Configurable: max turns, max duration, max API spend.
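The three caps can be pictured as a small circuit breaker checked on every turn. This is a sketch of the described behavior, not the shipping feature - the class name and fields are hypothetical, and in practice the sandbox, not agent code, would enforce the limits.

```python
import time

class CircuitBreaker:
    """Trip when any cap - turns, duration, or API spend - is exceeded."""

    def __init__(self, max_turns: int, max_seconds: float, max_spend_usd: float):
        self.max_turns = max_turns
        self.max_seconds = max_seconds
        self.max_spend_usd = max_spend_usd
        self.turns = 0
        self.spend = 0.0
        self.started = time.monotonic()

    def record(self, cost_usd: float = 0.0) -> bool:
        """Count one turn and its cost; return False once any cap is blown."""
        self.turns += 1
        self.spend += cost_usd
        elapsed = time.monotonic() - self.started
        return (self.turns <= self.max_turns
                and self.spend <= self.max_spend_usd
                and elapsed <= self.max_seconds)

breaker = CircuitBreaker(max_turns=3, max_seconds=60, max_spend_usd=1.00)
```

A looping agent making expensive calls fails the `record` check and gets killed instead of running up a bill.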
Shared filesystems across agents
Mount a durable filesystem into multiple sandboxes simultaneously. Agents can share data, artifacts, and context without re-uploading files or wiring object-store plumbing.