The work changed.

You're still the bottleneck.

You use Claude Code, Cursor, or Codex. You know agents can write, review, and ship code. But you're still babysitting: one terminal, one task, tied to your machine. Terrarium is the cloud platform that turns capable agents into living software.

You're in.

We'll reach out when your spot is ready.

237 builders on the waitlist
Interconnected glass terrariums floating on clouds, connected by golden circuit bridges, each hosting an autonomous AI agent

Engineering is processes. We spent our careers writing code by hand, not because that's the job, but because there was no other way. Agents changed that. Now the work is specification and validation. Define what needs to happen. Verify that it did. Everything in between shouldn't need you.

The Problem

You have a backlog of twenty tickets and one pair of hands. You could parallelize the work (the agents are capable) but nothing connects the pieces. So you sit there: one terminal, one task, one bottleneck.

  • One task at a time. Devin gives you one AI engineer. Cursor gives you one IDE. You can't babysit five agents in five tabs. Your attention is the constraint, not compute.
  • Your machine is the ceiling. Cursor and Codex spawn processes, browsers, and tools on your laptop. RAM runs out. CPUs spike. Everything runs with your permissions, on your files, with no isolation.
  • You're still the glue. Watch the terminal. Copy output. Paste into the next prompt. Integration work that should be automated.
  • Setup doesn't compound. Every project, same ceremony. The workflows you refine aren't portable. The skills you build aren't connected.
  • No system for quality. You review every line because there's no process around it. No gates, no checkpoints, no audit trail.
An overwhelmed robot tangled in wires inside a cramped, overloaded terrarium
A calm robot in a healthy terrarium floating on clouds, surrounded by other cloud-hosted terrariums

The Runtime

Every task gets its own cloud sandbox. You configure an agent once: tools, skills, model. That configuration becomes a reusable unit, no LangGraph pipelines or CrewAI boilerplate required. Any model through OpenRouter, not just Claude. One thread, one task, one specialized agent.

  • Cloud sandboxes, not your laptop. Each agent runs isolated. Your RAM, your files, your permissions stay untouched.
  • Any model, any provider. Claude, GPT, Gemini. Route to the right model per step through OpenRouter. No lock-in.
  • Persistent and recoverable. Sessions survive between messages. Workspaces snapshot automatically. Close your laptop, come back tomorrow.
  • Tools configured on boot. MCP servers, skills, and integrations are ready before the first message.

The Platform

The runtime gives you sandboxes. The platform connects them, and it's designed to configure itself. Tell an agent what you need. It builds the rest.

Pip, a friendly brass robot assistant at a control panel configuring other agents
01 — The Operator

Meet Pip

Tell Pip what you need. "Turn my Linear tickets into pull requests." Pip configures the agents, finds the right MCP tools, writes custom skills, and wires the workflow end to end. You review the result. The platform is designed to be operated by its own agents.

Friendly robots at workstations with tools, skills, and browser automation
02 — Configure Once

Agents & Threads

Define an agent with its tools, skills, and model. That configuration becomes a reusable unit. Deploy it across projects, workflows, and automations. Each thread is one task, one agent, persistent state. Configure once, run forever.
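As a rough sketch of what "configure once" means, an agent definition can be thought of as a single immutable value you deploy everywhere. The field names and model ID below are illustrative assumptions, not Terrarium's actual schema:

```python
# Hypothetical sketch: an agent configuration as a reusable, immutable value.
# Field names and the model ID are illustrative, not Terrarium's real schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentConfig:
    name: str
    model: str
    tools: tuple = ()
    skills: tuple = ()

# Defined once, then deployed across projects, workflows, and automations.
reviewer = AgentConfig(
    name="pr-reviewer",
    model="anthropic/claude-sonnet-4",
    tools=("github", "linear"),
    skills=("code-review",),
)
```

Because the configuration is frozen, every thread that uses `reviewer` starts from the same definition.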

Visual workflow editor with agent steps, approval gates, and parallel execution paths
03 — Connect the Pieces

Workflows

Chain agents into multi-step pipelines. Fan-out for parallel work. Route outputs from one step into the next. Approval gates pause for human decisions. You stop being the glue — the workflow handles the handoffs.

Automated factory with conveyor belts, clock mechanisms, and webhook triggers
04 — Run Without You

Automations

Webhooks and cron schedules. Your agents react to GitHub pushes, Stripe events, or a daily trigger. Define the trigger, define the response, walk away. The work happens whether you're watching or not.
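Conceptually, the trigger-and-response wiring looks like an event dispatch table. The decorator, trigger names, and handler signatures below are invented for illustration, not Terrarium's actual interface:

```python
# Hypothetical sketch of trigger -> response wiring. Trigger names and the
# registration decorator are illustrative, not Terrarium's actual API.
handlers = {}

def on(trigger):
    """Register a handler for a webhook event or cron schedule."""
    def register(fn):
        handlers[trigger] = fn
        return fn
    return register

@on("github.push")
def review_push(event):
    return f"agent reviews commit {event['sha']}"

@on("cron: 0 9 * * *")
def daily_digest(event):
    return "agent summarizes yesterday's threads"

def dispatch(trigger, event=None):
    """The platform invokes the matching handler whether you're watching or not."""
    return handlers[trigger](event or {})
```

Define the trigger, define the response, walk away: `dispatch` runs server-side, so nothing depends on your laptop being open.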

Command center with brass mailbox, glowing approval envelopes, and status board
05 — Decide, Don't Babysit

Inbox

Workflows pause at approval gates. Pending decisions surface in one place: your inbox. Review context, approve or reject, resume. You intervene at checkpoints, not at every step.

Cloud Access

When agents run on your machine, they run with your credentials, your files, your permissions. One prompt injection away from ~/.ssh. That's why people buy dedicated hardware just to isolate them. Terrarium solves this by design.

  • Ephemeral by default. Each agent gets a fresh sandbox. When the task ends, the sandbox dies. No residual access, no lingering processes, nothing to find.
  • Scoped secrets. Agents only receive the credentials they need for their specific task. Your API keys, tokens, and environment variables stay in the platform, not in the sandbox.
  • No exposed surface. Sandboxes aren't addressable from the internet. No open ports, no SSH tunnels, no attack surface. By the time anyone finds them, they're already gone.
  • Bring your own keys. No markup on tokens. You control your AI spend, your model selection, and your provider relationships.
A relaxed copper robot on a bench with a tablet, controlling cloud terrariums floating above via wireless signal

Built by its own agents.

This platform maintains itself with the same workflows it gives you. We didn't build a tool. We built the system we use every day. Free during beta.


Questions

What is Terrarium?

A cloud platform for running AI agent workflows. Think of what Zapier and n8n do for API automation, but for AI coding agents. Each agent gets its own isolated sandbox with configurable tools, skills, and model routing. Chain them into multi-step workflows with approval gates, automate them with webhooks and cron schedules, and manage everything from a single inbox.

How is this different from Cursor, Codex, or Devin?

Cursor and Codex run on your machine, one task at a time. Devin is cloud-based but gives you a single AI engineer. Terrarium is the layer above: it orchestrates multiple agents in parallel with workflows and approval gates. You keep using your favorite agents. Terrarium gives them structure.

What AI models does Terrarium support?

Any model available through OpenRouter, including Claude, GPT, and Gemini. You can route different models per workflow step, using powerful models for reasoning and cheaper models for mechanical tasks. Bring your own API keys with no markup on tokens.
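Per-step routing can be as simple as a lookup from step type to model. The step names and OpenRouter-style model IDs below are assumptions for illustration, not a fixed Terrarium schema:

```python
# Illustrative per-step model routing. Step names and model IDs are
# assumptions, not a fixed Terrarium schema.
ROUTES = {
    "plan":      "anthropic/claude-sonnet-4",   # reasoning-heavy step
    "implement": "openai/gpt-4.1",              # code generation
    "format":    "google/gemini-flash-1.5",     # cheap mechanical step
}

def model_for(step, default="anthropic/claude-sonnet-4"):
    """Pick the model for a workflow step, falling back to a default."""
    return ROUTES.get(step, default)
```

Because routing is just data, swapping a cheaper model into a mechanical step is a one-line change, and your own API keys are billed directly by the provider.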

How does this compare to LangGraph or CrewAI?

LangGraph, CrewAI, and Mastra are code frameworks. You write Python or TypeScript to define agent pipelines, then host them yourself. Terrarium is a managed platform with a visual workflow builder. No code required for orchestration, and every agent runs in an isolated cloud sandbox. Terrarium is where you go when you've outgrown scripts but don't want to run infrastructure.

Is it secure?

Secure by design. Agents run in ephemeral sandboxes that are not addressable from the internet. Secrets are scoped per task and never stored in the sandbox. When an agent finishes, the sandbox is destroyed. No open ports, no SSH tunnels, no persistent attack surface.