Superserve is a platform purpose-built to run agents in the cloud. It gives your agent a persistent, isolated workspace with full access to a computing environment - without you having to manage infrastructure. With a single CLI command, your agent is deployed as a service with best-in-class performance, security, and scalability. Think of it as a container, but with hardware isolation, millisecond cold starts, and stateful scale-to-zero.

Why Superserve

Agents get real work done when they have access to a computer - for running code, using a browser, and managing filesystems. But giving agents that access in the cloud raises hard operational questions:
  • Session and hardware isolation: each agent should run in its own secure environment, reducing the vulnerability surface area
  • Stateful scale-to-zero: scaling down without losing state, and resuming instantly
  • Network controls and credential management: egress controls, credential injection, and keeping secrets out of agent context
Superserve handles all of this so teams can focus on building agents.

Key Features

Isolated by Default

Every agent session runs in its own hardware-isolated environment, so agent code can't reach your infrastructure or other sessions.

Stateful Scale-to-Zero

Idle environments suspend automatically. When resumed, the full environment is restored in milliseconds. You pay nothing while agents are idle, and lose nothing when they wake up.

Network Controls

Egress policies control which domains your agent can reach. A credential proxy injects API keys at the network level so they never appear in LLM context, logs, or tool outputs.
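To make the credential-injection idea concrete, here is a minimal sketch of the pattern - not Superserve's actual implementation. The assumption is that the agent only ever handles a placeholder token, and the proxy substitutes the real secret as the request crosses the network boundary:

```python
# Illustrative sketch of the credential-proxy pattern (not Superserve's
# actual implementation). The agent builds requests containing only a
# placeholder; the proxy swaps in the real secret on the way out.

PLACEHOLDER = "{{ANTHROPIC_API_KEY}}"  # hypothetical placeholder token


def inject_credentials(headers: dict, secrets: dict) -> dict:
    """Replace placeholder tokens in outbound headers with real secrets."""
    injected = {}
    for name, value in headers.items():
        for key, secret in secrets.items():
            value = value.replace("{{" + key + "}}", secret)
        injected[name] = value
    return injected


# The agent's request carries only the placeholder...
agent_headers = {"x-api-key": PLACEHOLDER}

# ...and the proxy injects the real key at the network level, so the
# secret never appears in LLM context, logs, or tool outputs.
real_headers = inject_credentials(
    agent_headers, {"ANTHROPIC_API_KEY": "sk-ant-example"}
)
print(real_headers["x-api-key"])
```

Because substitution happens outside the agent process, nothing the model reads or emits ever contains the secret itself.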

Framework Agnostic

Works with the Claude Agent SDK, OpenAI Agents SDK, LangChain, Mastra, and Pydantic AI, or bring your own custom agent.

One Command

Running superserve deploy agent.py deploys your agent to the cloud, handling isolation, security, and scalability.

Integration SDK

The Superserve SDK lets you integrate deployed agents into your applications.
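As a rough sketch of what such an integration could look like, here is a minimal client that builds a request for a deployed agent. The class name, endpoint path, and payload shape are illustrative assumptions, not the documented Superserve SDK API:

```python
# Hypothetical integration sketch. AgentClient, the /agents/.../messages
# endpoint, and the JSON payload shape are assumptions for illustration,
# not the real Superserve SDK surface.
import json


class AgentClient:
    """Minimal illustrative client for talking to a deployed agent."""

    def __init__(self, agent: str, base_url: str = "https://api.superserve.ai"):
        self.agent = agent
        self.base_url = base_url

    def build_message_request(self, session_id: str, text: str) -> tuple:
        """Build the (url, body) pair for sending one message to a session."""
        url = f"{self.base_url}/agents/{self.agent}/sessions/{session_id}/messages"
        body = json.dumps({"message": text})
        return url, body


client = AgentClient("my-agent")
url, body = client.build_message_request("sess-123", "What is the capital of France?")
```

A real integration would POST that body to the deployed agent and consume the streamed response.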

Quick Example

Install the CLI:
curl -fsSL https://superserve.ai/install | sh
Deploy your agent:
superserve login
superserve deploy agent.py
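For context, the agent.py being deployed could be as simple as the following. The handle_message interface is a hypothetical shape for a custom agent - Superserve's actual agent contract may differ, and a real agent would call an LLM and use tools:

```python
# agent.py - a hypothetical minimal custom agent, just to make the
# deploy step concrete. The handle_message interface is an assumption,
# not Superserve's documented agent contract.


def handle_message(message: str) -> str:
    """Answer one user message. A real agent would call an LLM here."""
    if "capital of France" in message:
        return "The capital of France is Paris."
    return f"You said: {message}"


if __name__ == "__main__":
    print(handle_message("What is the capital of France?"))
```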
Set secrets and run:
superserve secrets set my-agent ANTHROPIC_API_KEY=sk-ant-...
superserve run my-agent
You > What is the capital of France?

Agent > The capital of France is Paris.

Completed in 1.2s

You > And what's its population?

Agent > Paris has approximately 2.1 million people in the city proper.

Completed in 0.8s

How It Works

1

Deploy your agent

Run superserve deploy agent.py to package and upload your agent code. Superserve analyzes dependencies and builds a container image.
2

Start a session

When you run superserve run my-agent, Superserve spins up an isolated sandbox with a fully persistent environment.
3

Send messages

Your messages are sent to the agent running in the isolated environment. The agent can execute code, make HTTP requests, and use tools.
4

Stream responses

Agent responses stream back in real time via Server-Sent Events. You see tokens and tool calls as they happen.
5

Resume anytime

Session state persists across turns. Exit the session and reconnect later - your workspace and conversation history are still there.

Use Cases

  • Deploy agents to production without managing infrastructure. Superserve handles isolation, scaling, and monitoring.
  • Each user gets their own isolated session with persistent state. Perfect for SaaS products with agent-powered features.
  • Run untrusted code safely in isolated sandboxes. The agent can't access your infrastructure or other sessions.
  • Agents can work on tasks across multiple turns and days. The persistent workspace means nothing gets lost.
  • Use any agent framework or write your own. Superserve works with the Claude Agent SDK, OpenAI Agents SDK, LangChain, Mastra, Pydantic AI, and custom implementations.

Next Steps