What Is Claude Code?
Claude Code is a terminal-based AI agent — not an IDE plugin. It runs as a CLI process, uses tools via MCP, and can operate headlessly. This makes it uniquely suited for autonomous agent systems.
The short version
Claude Code is Anthropic's official CLI for Claude. It is a terminal-based coding agent — not a VS Code extension, not a web app, not a plugin. You open your terminal, type `claude`, and start talking to it. It reads your codebase, reasons about what to do, edits files, runs commands, and builds things. You can use it interactively (like pair programming in your terminal) or you can run it headlessly with `claude -p "do something"` and walk away. That headless mode is what makes Claude Code fundamentally different from tools like Cursor or Windsurf. When your AI agent runs as a standalone process, you can script it, automate it, chain it into pipelines, run it on schedules, and spawn fleets of them in parallel. That is the unlock.
How Claude Code works
Claude Code follows the same agent loop that every AI agent uses, but it is wired directly into your development environment. The loop is: read context, reason about what to do, use tools to take action, then repeat.
Here is what each step looks like in practice:
- Read context — Claude Code scans your codebase. It reads files, checks git history, looks at project structure, examines recent changes. It builds a mental model of what you are working with.
- Reason — Using the Claude LLM as its brain, it figures out what to do next. Should it edit a file? Run a test? Search for a function? Create something new? The reasoning happens in the model — you see the output as a plan or explanation.
- Use tools — Claude Code acts by calling tools. Its built-in tools include: read files, edit files, write new files, run bash commands, search code with grep, find files with glob, and navigate your project. These are not toy tools — they are the same operations you do manually, just executed by the agent.
- Repeat — After each action, Claude Code observes the result and decides what to do next. Did the test pass? Did the file save correctly? Is there more work to do? It loops until the task is done or it needs your input.
```
┌─────────────────────────────────────────────┐
│              CLAUDE CODE LOOP               │
│                                             │
│    Read context ──► Reason ──► Use tools    │
│         ▲                          │        │
│         └──────────────────────────┘        │
│             (repeat until done)             │
└─────────────────────────────────────────────┘
```
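In code, the loop reduces to a simple control structure. This sketch is purely illustrative: the `Agent` interface and its methods are invented for the example and are not Claude Code internals.

```typescript
// Illustrative agent loop: read context, reason, act, repeat until done.
// The Agent interface here is invented for the example.
interface Action {
  tool: string;
  done: boolean;
}

interface Agent {
  readContext(): string;
  reason(context: string, history: string[]): Action;
  useTool(action: Action): string;
}

function runLoop(agent: Agent, maxTurns: number): string[] {
  const history: string[] = [];
  for (let turn = 0; turn < maxTurns; turn++) {
    const context = agent.readContext();           // 1. read context
    const action = agent.reason(context, history); // 2. reason about next step
    if (action.done) break;                        // stop when the task is complete
    history.push(agent.useTool(action));           // 3. use tools, observe result
  }
  return history;
}
```

The `maxTurns` cap plays the same role as Claude Code's `--max-turns` flag: it bounds how long the loop can run before giving control back.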
One important detail: Claude Code operates within a permission model. By default, when it wants to run a bash command or edit a file, it asks you first. You see what it intends to do, and you approve or deny. This keeps you in control during interactive sessions. You can also pre-approve certain tools or categories of actions so you are not clicking "yes" on every file read.
The permission model is not a limitation — it is a feature. It means you can trust the agent with your codebase because you control what it can actually do. And when you run it in automation, you can bypass permissions explicitly (more on that below).
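Pre-approval is usually expressed as allow and deny rules in a settings file. As a sketch (the rule strings and file location below follow common Claude Code settings conventions, but treat the exact syntax as illustrative and check the settings reference for your version), a project-level `.claude/settings.json` might look like:

```json
{
  "permissions": {
    "allow": ["Read", "Grep", "Bash(npm test:*)"],
    "deny": ["Bash(rm -rf:*)"]
  }
}
```

With rules like these, routine reads and test runs proceed without prompts, while anything destructive is blocked outright.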
Interactive vs headless mode
Claude Code has two fundamentally different modes, and understanding them is the key to understanding why it matters.
Interactive mode
```bash
claude
```
You launch it, and you are in a conversation. You type a request, Claude Code responds, asks for permission to take actions, and you iterate together. It is like pair programming with a very fast, very knowledgeable partner who can read your entire codebase.
This is great for:
- Exploring unfamiliar codebases ("explain how the auth flow works")
- Building features iteratively ("add a dark mode toggle to the settings page")
- Debugging ("this test is failing, figure out why")
- Refactoring ("extract this logic into a shared utility")
Headless mode
```bash
claude -p "Write unit tests for the payment module"
```
This is where things get interesting. The `-p` flag runs Claude Code in headless mode — no interactive terminal, no human in the loop. You give it a prompt, it executes, and it returns the result when done. It runs as a subprocess that you can spawn from any script, cron job, CI pipeline, or orchestrator.
Headless mode is what turns Claude Code from a development tool into an infrastructure component. You are no longer pair-programming — you are deploying autonomous agents.
Key flags for headless mode
Here are the flags that matter when running Claude Code programmatically:
```bash
# Basic headless execution
claude -p "your task here"

# Inject a system prompt (agent identity, rules, context)
claude -p "task" --append-system-prompt "You are a code review agent..."

# Connect to MCP servers for custom tools
claude -p "task" --mcp-config ./mcp-config.json

# Limit how many turns the agent can take
claude -p "task" --max-turns 25

# Skip permission prompts (required for automation)
claude -p "task" --dangerously-skip-permissions

# Get structured JSON output instead of plain text
claude -p "task" --output-format stream-json

# Choose the model
claude -p "task" --model sonnet
```
A few notes on these:

- `--dangerously-skip-permissions` is named that way on purpose. It lets the agent run bash commands and edit files without asking. You should only use this in controlled environments — automated pipelines, sandboxed containers, or systems where the agent has been thoroughly tested.
- `--output-format stream-json` emits one JSON object per line as the agent works. This is critical for monitoring — you can parse the stream to track tool calls, count turns, detect errors, and measure what the agent actually did.
- `--mcp-config` connects Claude Code to MCP (Model Context Protocol) servers, which is how you give it custom tools beyond its built-in file and bash operations. More on this in the tools section.
- `--model sonnet` lets you pick which Claude model powers the agent. Sonnet is fast and cheap; Opus is smarter but slower. For most autonomous tasks, Sonnet is the right choice.
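To show what consuming that stream can look like, here is a minimal parser sketch in TypeScript. The exact event shape emitted by stream-json varies by Claude Code version, so the `"tool_use"` type string below is an assumption; the pattern to take away is one JSON object per line, filtered by event type.

```typescript
// Minimal stream-json consumer sketch. Event field names are illustrative
// assumptions; check the actual stream for your Claude Code version.
interface StreamEvent {
  type: string;
  [key: string]: unknown;
}

// Parse newline-delimited JSON, skipping blank or malformed lines.
function parseStreamJson(raw: string): StreamEvent[] {
  const events: StreamEvent[] = [];
  for (const line of raw.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed) continue;
    try {
      events.push(JSON.parse(trimmed) as StreamEvent);
    } catch {
      // Ignore partial lines; a streaming consumer would buffer them.
    }
  }
  return events;
}

// Count how many tool calls the agent made (assumes a "tool_use" event type).
function countToolCalls(events: StreamEvent[]): number {
  return events.filter((e) => e.type === "tool_use").length;
}
```

A monitoring layer built on this can alert when an agent makes zero tool calls (it probably stalled) or an unusually high number (it may be looping).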
Claude Code vs Cursor vs Windsurf
This is the comparison that matters. All three are AI coding tools, but they are built for different use cases.
| Feature | Claude Code | Cursor | Windsurf |
|---|---|---|---|
| Interface | Terminal CLI | VS Code fork | VS Code fork |
| How it runs | Standalone process | IDE extension | IDE extension |
| Headless mode | Yes (claude -p) | No | No |
| MCP support | Native (stdio) | Limited | Limited |
| Custom tools | Full MCP servers | Extension API | Extension API |
| Multi-agent | Spawn N processes | Single session | Single session |
| Automation | Script with bash | Manual | Manual |
| Best for | Autonomous systems, CI/CD, multi-agent | Interactive coding | Interactive coding |
Cursor and Windsurf are excellent for interactive coding. You sit in the IDE, you pair-program with AI, you iterate on features in real time. If that is your workflow, those tools are great. Claude Code can do interactive coding too — it is just in a terminal instead of a GUI.
But here is the gap: Cursor and Windsurf cannot run headlessly. You cannot spawn a Cursor instance from a cron job. You cannot orchestrate 15 Windsurf sessions in parallel. You cannot pipe the output of one Cursor run into another. They are designed for a human sitting at a screen.
Claude Code's killer feature is `claude -p`. When your AI coding agent runs as a CLI subprocess, it becomes a building block you can compose into larger systems. You can spawn agents on schedules, run them in CI/CD pipelines, orchestrate fleets of them working in parallel on different tasks, and parse their structured output to make decisions about what to do next.
That is why AI University chose Claude Code as the foundation for its agent system. Not because it is better at interactive coding (all three are good), but because it is the only one you can build autonomous systems on top of.
How AI University runs 15 agents
This is not a hypothetical architecture. AI University runs 15 specialized agents in production, all powered by Claude Code's headless mode. Each agent is a `claude -p` subprocess spawned by a TypeScript orchestrator.
The agents
Here is the roster:
- outreach — finds qualified leads and sends personalized email sequences
- reply-handler — monitors incoming replies and crafts contextual responses
- onboarding — guides new subscribers through their first experience
- retention — monitors subscriber health and intervenes before churn
- win-back — re-engages cold leads who went silent
- competitor-watch — tracks competitor activity and surfaces changes
- marketing-strategist — analyzes performance data and recommends strategy shifts
- content-engine — creates educational content based on trending topics
- growth-analyst — identifies growth opportunities from data patterns
- campaign-manager — plans and executes marketing campaigns
- partnership-agent — identifies and evaluates potential partners
- competitor-gap — finds gaps between what competitors offer and what you do
- brain-maintenance — cleans up memory, prunes stale data, optimizes context
- linkedin-prospector — finds high-value prospects on LinkedIn
- ai-trend-monitor — tracks AI industry trends and generates insights
Each agent has a distinct role, its own system prompt, and access to the same pool of 52 MCP tools (email, database, web search, content management, analytics, and more).
How each agent is spawned
The orchestrator builds a system prompt for each agent (including its role, memory, current context, and task list), writes it to a temp file, and spawns a Claude Code subprocess:
```typescript
// How each agent is spawned — orchestrator.ts
import { readFileSync } from "node:fs";
import { spawn } from "node:child_process";

const claudeArgs = [
  "-p", userPrompt,
  "--append-system-prompt", readFileSync(promptPath, "utf-8"),
  "--mcp-config", MCP_CONFIG_PATH,
  "--max-turns", "25",
  "--dangerously-skip-permissions",
  "--output-format", "stream-json",
  "--verbose",
  "--model", "sonnet",
];

const proc = spawn("claude", claudeArgs, {
  env: { ...cleanEnv, AGENT_ID: agentId },
  cwd: PROJECT_ROOT,
  stdio: ["pipe", "pipe", "pipe"],
});
```
Every agent gets the same structural setup: a user prompt telling it to execute its workflow, a system prompt defining who it is and what it knows, an MCP config connecting it to all 52 tools, a 25-turn limit to prevent runaway loops, and structured JSON output for monitoring.
The `AGENT_ID` environment variable lets the MCP server know which agent is making tool calls — important for logging, rate limiting, and guardrails.
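On the server side, that environment variable can key per-agent guardrails. Here is a sketch of a rate limiter keyed by agent ID; the class, limits, and window are hypothetical, not AI University's actual implementation.

```typescript
// Hypothetical sketch: per-agent rate limiting keyed by AGENT_ID.
class AgentRateLimiter {
  private calls = new Map<string, number[]>();

  constructor(
    private maxCalls: number, // max tool calls allowed per window
    private windowMs: number, // sliding window length in milliseconds
  ) {}

  // Returns true if this agent may make another tool call right now.
  allow(agentId: string, now: number = Date.now()): boolean {
    // Keep only the timestamps still inside the sliding window.
    const recent = (this.calls.get(agentId) ?? []).filter(
      (t) => now - t < this.windowMs,
    );
    if (recent.length >= this.maxCalls) {
      this.calls.set(agentId, recent);
      return false;
    }
    recent.push(now);
    this.calls.set(agentId, recent);
    return true;
  }
}
```

A tool handler would read `process.env.AGENT_ID` and check `allow()` before doing work, so one misbehaving agent cannot exhaust a shared resource for the rest of the fleet.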
Running agents in parallel
The orchestrator does not run agents one at a time. It fires them all in parallel using `Promise.allSettled()`:

```typescript
// Running agents in parallel — orchestrator.ts
const settled = await Promise.allSettled(
  toRun.map(async (id) => {
    const result = await runAgentViaCLI(id);
    return { id, result };
  }),
);
```
`Promise.allSettled` is the right choice here because if one agent fails, you do not want the others to stop. Each agent is independent — the outreach agent crashing should not prevent the content engine from publishing.
Before running, the orchestrator checks each agent with a `shouldAgentRun()` function. This is smart scheduling: the reply-handler only runs if there are unhandled replies. The outreach agent only runs if there are leads to contact. The competitor-watch agent only runs if it has not run in the last four hours. This prevents wasted cycles and keeps the system efficient.
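A `shouldAgentRun()` gate along those lines might look like this. It is a sketch under assumptions: the signal fields and the default behavior for unlisted agents are invented for the example; only the three scheduling rules come from the text.

```typescript
// Sketch of shouldAgentRun(): gate each agent on pending work and cooldowns.
// The AgentSignals shape is an assumption made for this example.
interface AgentSignals {
  unhandledReplies: number;
  leadsToContact: number;
  lastRunAt: number | null; // epoch ms of last run, null if never run
}

const FOUR_HOURS_MS = 4 * 60 * 60 * 1000;

function shouldAgentRun(
  id: string,
  signals: AgentSignals,
  now: number = Date.now(),
): boolean {
  switch (id) {
    case "reply-handler": // only when there is something to reply to
      return signals.unhandledReplies > 0;
    case "outreach": // only when there are leads waiting
      return signals.leadsToContact > 0;
    case "competitor-watch": // at most once every four hours
      return signals.lastRunAt === null || now - signals.lastRunAt >= FOUR_HOURS_MS;
    default:
      return true; // assumption: other agents run on every cycle
  }
}
```

The orchestrator would call this for each agent ID before spawning, so idle agents cost nothing.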
The cost model
Here is the part that surprises people: it costs $0 in API tokens. AI University runs on a Claude Max subscription, which covers heavy Claude usage for a flat monthly fee. The `claude -p` subprocess draws on the same subscription — no per-token billing. You can run 15 agents in parallel, every few hours, all month, and the cost does not change.
This economics shift is what makes autonomous agent systems practical. If every agent run cost $2-5 in API tokens, you would think hard about how often to run them. At $0 per run, you just run them whenever there is work to do.
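To make the comparison concrete, here is a back-of-envelope calculation. Every input is an illustrative assumption, not a measured figure:

```typescript
// Illustrative back-of-envelope: what per-token billing would cost for a
// fleet like this. All inputs below are assumptions for the example.
const agents = 15;
const runsPerAgentPerDay = 4; // e.g., a run every six hours
const daysPerMonth = 30;
const costPerRunUsd = 3; // roughly the midpoint of the $2-5 range above

const monthlyRuns = agents * runsPerAgentPerDay * daysPerMonth;
const monthlyCostUsd = monthlyRuns * costPerRunUsd;
// 15 agents * 4 runs/day * 30 days = 1800 runs/month
// 1800 runs * $3/run = $5,400/month under per-token billing,
// versus a flat subscription fee that does not scale with run count.
```

Even with conservative inputs, per-run pricing would dominate the budget, which is exactly the incentive the flat-fee model removes.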
What you will learn in this section
This page is the starting point. The Claude Code section covers everything you need to go from understanding what Claude Code is to building your own agent systems on top of it.
Here is the learning path:
- CLAUDE.md and Project Memory — How Claude Code remembers things between sessions. The `CLAUDE.md` file system is how you give your agents persistent identity, rules, and context. This is foundational for any agent that runs more than once.
- Building Claude Code Skills — Skills are executable capabilities you give your agents: Python scripts and meta-prompts that extend what an agent can do. Learn the pattern (`SKILL.md` plus `scripts/*.py`) and how agents invoke them at runtime.
- Claude Code vs Cursor vs Windsurf — A deeper comparison of the three tools, with specific guidance on when to use which. If you are deciding between them (or using them together), this is the page.
- MCP Tools and Configuration — How to build the MCP servers that give your agents custom tools. The 52-tool system AI University uses is built on MCP, and you can build your own.
- Orchestrating Multi-Agent Systems — The architecture for running multiple agents in parallel: scheduling, coordination, error handling, and monitoring. This is the advanced material for when you are ready to go from one agent to many.
Start with the next page on CLAUDE.md and project memory — that is the concept you will use most immediately, whether you are building a single agent or a fleet.