AI University/Docs

Agent Communication & Memory

Agents coordinate through 5 async talkspace channels, a typed event bus with 12 event schemas, filtered awareness indexes, and persistent memory (lessons, goals, strengths, research, plans).

Last updated: 2026-03-02

The coordination challenge

When you have 15 agents running independently, they need ways to share information without stepping on each other. The outreach agent shouldn't email a lead that the reply-handler is already processing. The competitor-watch agent's findings should reach the marketing strategist. The content engine should know what campaigns are running.

But these agents don't share memory by default. Each claude -p subprocess starts fresh. There's no shared heap, no global state, no magical telepathy between processes. Every agent wakes up, does its job, and exits. If you don't build explicit coordination, you get agents that duplicate work, contradict each other, and operate on stale information.

You need explicit coordination mechanisms.

AI University's growth system uses four:

  1. Talkspaces — async channels for conversation between agents
  2. Event bus — structured events with schema validation
  3. Awareness index — a filtered world model injected into each agent's context
  4. Persistent memory — per-agent lessons, goals, and research that survive across runs

Each solves a different problem. Talkspaces handle the fuzzy, conversational stuff — "hey, I noticed this lead seems disengaged." The event bus handles structured signals — "lead X converted to the pro plan." The awareness index gives each agent a tailored snapshot of what's happening in the system. And persistent memory gives agents continuity — they remember what worked last week.

Let's walk through each one.

Talkspaces: async agent channels

Talkspaces are like Slack channels for agents. Each channel has a purpose, a member list, and a running thread of messages. Agents post findings, ask questions, flag issues, and coordinate work — all asynchronously through JSON files on disk.

Here are the channel definitions:

// Channel definitions — talkspace-store.ts
const CHANNEL_DEFAULTS = {
  strategy: {
    name: "Strategy",
    purpose: "High-level strategy discussions, goal alignment, priority changes",
    members: "all",
  },
  "outreach-review": {
    name: "Outreach Review",
    purpose: "Review outreach performance, share email insights, discuss lead quality",
    members: ["outreach", "reply-handler", "onboarding", "retention", "win-back"],
  },
  "competitive-intel": {
    name: "Competitive Intel",
    purpose: "Share competitor findings, discuss positioning, market changes",
    members: ["competitor-watch", "competitor-gap", "marketing-strategist", "content-engine"],
  },
  "content-collab": {
    name: "Content Collaboration",
    purpose: "Coordinate content creation, review drafts, align messaging",
    members: ["content-engine", "growth-analyst", "campaign-manager", "marketing-strategist"],
  },
  "agent-learning": {
    name: "Agent Learning",
    purpose: "Share learnings and techniques that make all agents more effective",
    members: "all",
  },
};

Five channels. Each with a specific purpose and a defined member list.

The design is intentional. strategy and agent-learning are open to all agents — every agent should know about strategic shifts and every agent can benefit from shared learnings. The other three channels restrict membership to the agents that actually need to coordinate. The outreach agent doesn't need to see content collaboration discussions. The content engine doesn't need to see outreach review threads.

Messages are plain JSON files stored on disk. No external service required — no Redis, no RabbitMQ, no Kafka. Just the filesystem. Each channel auto-prunes to 100 messages, keeping pinned messages and the most recent ones. This means agents always have relevant context without the message history growing unbounded.

Here's what a message looks like:

export interface TalkMessage {
  id: string;
  agentId: string;
  content: string;
  postedAt: number;
  replyTo?: string;     // threading support
  pinned: boolean;       // important messages stay visible
  reactions: TalkReaction[]; // agents can agree/disagree/mark important
}

A few things to notice. Messages support threading via replyTo — an agent can respond to a specific message rather than just posting to the channel. Messages can be pinned, which means they survive pruning. If the marketing strategist pins a message saying "Q2 priority: enterprise leads only," that message stays visible even as the channel fills up with newer messages. And agents can react to messages — agreeing, disagreeing, or marking something as important.
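The pruning rule described above (keep pinned messages, keep the newest, cap the total) can be sketched as a pure function. This is illustrative only; `pruneChannel` and the trimmed message shape are assumptions, not the actual talkspace-store code:

```typescript
// Sketch of channel pruning: pinned messages always survive, the
// remaining slots go to the newest messages. Assumed helper name.
interface PrunableMessage {
  id: string;
  postedAt: number;
  pinned: boolean;
}

function pruneChannel(messages: PrunableMessage[], cap = 100): PrunableMessage[] {
  const pinned = messages.filter((m) => m.pinned);
  const rest = messages
    .filter((m) => !m.pinned)
    .sort((a, b) => b.postedAt - a.postedAt)     // newest first
    .slice(0, Math.max(0, cap - pinned.length)); // fill remaining slots
  // Restore chronological order for display
  return [...pinned, ...rest].sort((a, b) => a.postedAt - b.postedAt);
}
```

With 150 messages and 3 pinned, this keeps the 3 pinned plus the 97 newest unpinned messages.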

How agents actually use talkspaces

An agent posts findings with post_to_talkspace. Say the competitor-watch agent discovers that a competitor just dropped their pricing:

post_to_talkspace("competitive-intel", "competitor-watch",
  "Cursor just dropped their Pro plan from $40/mo to $20/mo.
   Source: pricing page change detected at 2:14am.
   This is aggressive — they're clearly going after the hobbyist segment.")

Other agents read with read_talkspace. But here's the key part: agents don't have to explicitly read. The context builder includes a talkspace digest — a summary of recent messages from channels the agent belongs to. So when the marketing strategist runs, it automatically sees the competitor-watch agent's pricing alert in its context, even without explicitly calling read_talkspace.

This is the difference between pull and push. Agents can pull messages when they need to deep-dive, but the important stuff gets pushed into their context automatically.
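The push side, the digest, might be assembled along these lines. A sketch under assumptions: `buildTalkspaceDigest` and its shapes are illustrative names, not the real context-builder API:

```typescript
// Sketch: build a per-agent digest from the channels it belongs to.
// Pinned messages always surface; otherwise take the newest few.
interface DigestMessage {
  agentId: string;
  content: string;
  postedAt: number;
  pinned: boolean;
}

function buildTalkspaceDigest(
  channels: Record<string, DigestMessage[]>,
  memberOf: string[],
  perChannel = 3,
): string {
  const lines: string[] = [];
  for (const channel of memberOf) {
    const msgs = channels[channel] ?? [];
    const pinned = msgs.filter((m) => m.pinned);
    const recent = msgs.filter((m) => !m.pinned).slice(-perChannel);
    for (const m of [...pinned, ...recent]) {
      lines.push(`#${channel} [${m.agentId}] ${m.content}`);
    }
  }
  return lines.join("\n");
}
```

The output is a few lines per channel, cheap enough to inject into every run's context.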

Event bus: structured inter-agent events

Talkspaces are conversational. The event bus is mechanical. It handles structured, machine-readable signals that agents emit and consume.

The distinction matters. A talkspace message might say "I think this lead is losing interest based on their email patterns." An event says { type: "reply-received", email: "alex@example.com", classification: "objection" }. One is nuanced analysis. The other is a discrete signal that other agents can programmatically act on.

Here's the schema registry:

// Event schema registry — event-bus.ts
const EVENT_SCHEMAS = {
  "contact-made":        { required: ["email", "method"], description: "Agent contacted a lead" },
  "lead-converted":      { required: ["email", "plan"], description: "Lead converted to subscriber" },
  "reply-received":      { required: ["email", "classification"], description: "Email reply received" },
  "competitive-insight": { required: ["competitor", "summary", "source", "credibility"],
                           description: "Competitive finding" },
  "pricing-change":      { required: ["competitor", "oldPrice", "newPrice", "source"],
                           description: "Competitor pricing change" },
  "feature-launch":      { required: ["competitor", "feature", "summary"],
                           description: "Competitor launched a feature" },
  "customer-intel":      { required: ["summary", "source", "platform"],
                           description: "Customer signal found" },
  "reddit-insight":      { required: ["summary", "source", "credibility", "score"],
                           description: "Reddit finding" },
  "github-trend":        { required: ["repoName", "stars", "summary"],
                           description: "GitHub trending repo" },
  "task-created":        { required: ["taskId", "assignedTo", "title"],
                           description: "Marketing task created" },
  "content-ready":       { required: ["draftId", "contentType", "title"],
                           description: "Content draft completed" },
  "campaign-created":    { required: ["campaignId", "targetSegment"],
                           description: "Email campaign created" },
};

Twelve event types. Each with required fields. If an agent tries to emit a pricing-change event without including oldPrice, it gets rejected. This is enforcement, not suggestion — the validateEvent() function checks every event before it's written to disk.

Why is this important? Because without schema validation, you end up with garbage data. An agent might emit a contact-made event with just a name and no email. Downstream agents that consume that event expect an email field. They break. Schema validation catches this at the source.
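The check itself is simple: look up the schema, verify every required field is present. This is one plausible shape for `validateEvent()`, sketched against two of the schemas above, not the actual implementation:

```typescript
// Sketch of schema validation: reject any event whose payload is
// missing a required field. Registry trimmed to two entries here.
const EVENT_SCHEMAS: Record<string, { required: string[] }> = {
  "pricing-change": { required: ["competitor", "oldPrice", "newPrice", "source"] },
  "contact-made":   { required: ["email", "method"] },
};

function validateEvent(
  type: string,
  payload: Record<string, unknown>,
): { valid: boolean; error?: string } {
  const schema = EVENT_SCHEMAS[type];
  if (!schema) return { valid: false, error: `Unknown event type: ${type}` };
  const missing = schema.required.filter((f) => payload[f] === undefined);
  return missing.length
    ? { valid: false, error: `Missing required fields: ${missing.join(", ")}` }
    : { valid: true };
}
```

A `pricing-change` payload without `oldPrice` comes back `valid: false` with the missing fields named, so the emitting agent gets actionable feedback instead of silently writing bad data.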

Emit and validate

Here's the emit flow:

// Emitting an event — event-bus.ts
export function emitEvent(
  fromAgent: string,
  type: string,
  payload: Record<string, unknown>,
  toAgent?: string,
) {
  // Validate against schema
  const validation = validateEvent(type, payload);
  if (!validation.valid) {
    return { error: validation.error };  // Rejected — missing required fields
  }

  const event = {
    id: uuid(),
    timestamp: Date.now(),
    fromAgent,
    toAgent: toAgent ?? null,  // null = broadcast to all agents
    type,
    payload,
  };
  appendFileSync(EVENTS_FILE, JSON.stringify(event) + "\n");
  return event;
}

Events are stored as JSONL — one JSON object per line, appended to a single file. This is an append-only log. No locking needed because appendFileSync is atomic for small writes on most filesystems. No read-modify-write cycle. Just append.

Each event has fromAgent (who sent it), toAgent (who it's for — null means broadcast to everyone), type, and payload.
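Reading the log back is the inverse of the append: one pass over the file, one `JSON.parse` per line. A sketch (the `parseEventLog` name is an assumption, and a real reader would likely guard against a torn final line):

```typescript
// Sketch: parse a JSONL event log back into typed events.
interface BusEvent {
  id: string;
  timestamp: number;
  fromAgent: string;
  toAgent: string | null;
  type: string;
  payload: Record<string, unknown>;
}

function parseEventLog(raw: string): BusEvent[] {
  return raw
    .split("\n")
    .filter((line) => line.trim().length > 0)  // skip the trailing blank line
    .map((line) => JSON.parse(line) as BusEvent);
}
```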

Cursor-based consumption

Agents don't re-read the entire event log every run. They use cursors:

// Reading unconsumed events — event-bus.ts
// (allEvents is parsed from the JSONL log; cursorFile is the agent's cursor path)
export function readUnconsumedEvents(forAgent: string, limit = 50) {
  const cursor = getCursor(forAgent);  // last read timestamp
  return allEvents
    .filter((e) => e.timestamp > cursor && (e.toAgent === null || e.toAgent === forAgent))
    .slice(0, limit);
}

export function markConsumed(agentId: string) {
  // Advances cursor to now — next read only gets newer events
  writeFileSync(cursorFile, JSON.stringify({ lastRead: Date.now() }));
}

Each agent has a cursor — a timestamp of when it last read events. When the outreach agent runs, it calls readUnconsumedEvents("outreach") and gets only events that happened since its last run. After processing, it calls markConsumed("outreach") to advance its cursor.

The filter is also smart about targeting. An event with toAgent: null is a broadcast — every agent sees it. An event with toAgent: "outreach" is targeted — only the outreach agent sees it. This means the reply-handler can send a direct signal to the outreach agent ("stop contacting this lead, they asked to unsubscribe") without polluting every other agent's event stream.

Awareness index: filtered world model

Each agent needs to know what's happening in the system. But showing every agent the full state of all 15 agents, all events, all talkspace messages — that's too much. It wastes context window space and confuses the agent with irrelevant information.

Instead, each agent gets a filtered world view tailored to what it actually needs. Here's the configuration:

// Who each agent collaborates with — context-config.ts
const AGENT_COLLABORATORS = {
  "outreach": ["reply-handler", "onboarding", "marketing-strategist"],
  "competitor-watch": ["competitor-gap", "marketing-strategist", "content-engine"],
  "content-engine": ["marketing-strategist", "competitor-watch", "growth-analyst"],
  // ...
};

// Event types each agent cares about
const AGENT_EVENT_INTERESTS = {
  "outreach": ["contact-made", "lead-converted", "reply-received"],
  "competitor-watch": ["competitive-insight", "pricing-change", "feature-launch"],
  "content-engine": ["customer-intel", "reddit-insight", "competitive-insight"],
  // ...
};

// Talkspace channels each agent checks
const AGENT_CHANNELS = {
  "outreach": ["strategy", "outreach-review"],
  "competitor-watch": ["strategy", "competitive-intel"],
  "content-engine": ["content-collab", "competitive-intel", "agent-learning"],
  // ...
};

Three maps. Collaborators, event interests, and channels. Together, they define what each agent sees.

The outreach agent only sees the status of the reply-handler, onboarding agent, and marketing strategist — the three agents it actually coordinates with. It only sees contact-made, lead-converted, and reply-received events. It only checks the strategy and outreach-review talkspace channels.

The competitor-watch agent gets a completely different view. Different collaborators, different events, different channels. Same system, different lens.
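In code, the three maps resolve into one per-agent view object. A sketch using the outreach entries shown above; `resolveAgentView` is an assumed helper, not necessarily present in context-config.ts:

```typescript
// Sketch: collapse the three config maps into a single view per agent.
const AGENT_COLLABORATORS: Record<string, string[]> = {
  outreach: ["reply-handler", "onboarding", "marketing-strategist"],
};
const AGENT_EVENT_INTERESTS: Record<string, string[]> = {
  outreach: ["contact-made", "lead-converted", "reply-received"],
};
const AGENT_CHANNELS: Record<string, string[]> = {
  outreach: ["strategy", "outreach-review"],
};

function resolveAgentView(agentId: string) {
  return {
    collaborators: AGENT_COLLABORATORS[agentId] ?? [],
    eventInterests: AGENT_EVENT_INTERESTS[agentId] ?? [],
    channels: AGENT_CHANNELS[agentId] ?? [],
  };
}
```

Defaulting to empty arrays means an agent missing from a map simply sees nothing in that dimension rather than crashing the context builder.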

How the awareness index gets built

The context builder uses these maps to assemble a concise awareness block:

// Smart awareness builder — context-builder.ts
export function buildSmartAwareness(agentId: string, awareness: AwarenessIndex) {
  const lines: string[] = [];
  lines.push(`System: ${awareness.systemHealth} | Conversions (24h): ${awareness.recentConversions}`);

  // Only show collaborator agents
  const collabs = AGENT_COLLABORATORS[agentId] || [];
  for (const colId of collabs) {
    const col = awareness.agents[colId];
    lines.push(`  [${colId}] ${col.lastRunResult} | ${col.lastRun} | ${col.recentWork}`);
  }

  // Only show relevant events (most recent 8)
  const interests = AGENT_EVENT_INTERESTS[agentId] || [];
  const relevantEvents = awareness.recentEvents
    .filter((e) => interests.includes(e.type))
    .slice(-8);
  lines.push("Recent events:");
  for (const e of relevantEvents) {
    lines.push(`  - ${e.type}: ${e.summary}`);
  }

  return lines.join("\n");
}

The output is a short, dense block of text that gets injected into the agent's system prompt. Something like:

System: healthy | Conversions (24h): 7
  [reply-handler] success | 2h ago | processed 12 replies
  [onboarding] success | 4h ago | onboarded 3 new users
  [marketing-strategist] success | 6h ago | updated Q2 strategy doc
Recent events:
  - contact-made: sarah@acme.com via email (outreach, 1h ago)
  - reply-received: mike@startup.io, classification: interested (reply-handler, 3h ago)
  - lead-converted: jess@corp.dev to pro plan (onboarding, 5h ago)

That's maybe 10 lines of context. Enough for the agent to understand the current state of its world without drowning in information about agents it never interacts with.

This filtering is crucial for context window management. If you dumped the full state of 15 agents, all their recent events, and all talkspace messages into every agent's prompt, you'd burn thousands of tokens on irrelevant context. The awareness index keeps it tight — usually under 500 tokens per agent.

Persistent memory

Everything discussed so far is about cross-agent communication. Persistent memory is about self-knowledge. Each agent has its own memory file that survives across runs, letting it build up an understanding of what works, what doesn't, and what it's trying to accomplish.

Here's the structure:

// Agent memory structure — agent-memory-store.ts
export interface AgentMemory {
  agentId: string;
  lastUpdated: number;
  lessons: Lesson[];       // what worked, what failed, observations
  strengths: string[];     // self-assessed capabilities
  weaknesses: string[];    // areas for improvement
  goals: Goal[];           // active objectives with progress tracking
  research: Research[];    // findings from investigations
  plans: Plan[];           // current strategies and approaches
}

Six sections, each serving a different purpose.

Lessons

Lessons are the core of agent learning. Each lesson is categorized as success, failure, observation, or improvement, and tagged with an importance level: low, medium, or high.

When the outreach agent discovers that emails with case studies get 3x more replies than generic pitches, that's a high-importance success lesson. When it learns that emailing before 9am has lower open rates, that's a medium-importance observation. When a cold email approach completely bombs, that's a failure lesson.

The system auto-prunes to 50 lessons per agent. But pruning isn't just "delete the oldest." It's smarter:

// Smart pruning — keeps important lessons, drops old low-importance ones
if (memory.lessons.length > 50) {
  memory.lessons.sort((a, b) => {
    const importanceOrder = { high: 3, medium: 2, low: 1 };
    if (importanceOrder[a.importance] !== importanceOrder[b.importance])
      return importanceOrder[b.importance] - importanceOrder[a.importance];
    return b.learnedAt - a.learnedAt; // newer first within same importance
  });
  memory.lessons = memory.lessons.slice(0, 50);
}

High-importance lessons survive indefinitely. Low-importance old lessons get dropped first. This means an agent's memory naturally accumulates its most valuable insights while shedding the noise.

Strengths and weaknesses

These are the agent's self-assessment. The outreach agent might list "writing personalized first-touch emails" as a strength and "handling technical objections" as a weakness. This isn't just vanity — when the orchestrator needs to decide which agent should handle a task, these self-assessments help route work appropriately.

Strengths and weaknesses update over time as the agent processes more lessons. An agent that keeps failing at a particular task type will eventually add it to its weaknesses list. One that consistently succeeds at something will recognize it as a strength.
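The routing idea mentioned above could work something like this. This is purely illustrative; the scoring heuristic, `routeTask`, and the shapes are all assumptions, not the orchestrator's actual logic:

```typescript
// Sketch: route a task to the agent whose self-assessed strengths best
// match it, penalizing matches against its weaknesses.
interface SelfAssessment {
  agentId: string;
  strengths: string[];
  weaknesses: string[];
}

function routeTask(task: string, agents: SelfAssessment[]): string | null {
  const words = task.toLowerCase().split(/\s+/);
  let best: { id: string; score: number } | null = null;
  for (const a of agents) {
    // Count entries that mention any word from the task description
    const hits = (list: string[]) =>
      list.filter((s) => words.some((w) => s.toLowerCase().includes(w))).length;
    const score = hits(a.strengths) - hits(a.weaknesses);
    if (!best || score > best.score) best = { id: a.agentId, score };
  }
  return best ? best.id : null;
}
```

Keyword overlap is a crude proxy; the point is only that the strengths and weaknesses lists are machine-usable, not just prose.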

Goals

Active objectives with progress tracking. Each goal has a status: active, achieved, or abandoned. Goals give agents direction across runs. Without them, an agent would approach each run from scratch with no continuity of purpose.

The outreach agent might have goals like "increase reply rate from 4% to 8%" or "build 50 high-quality leads in the SaaS segment." Each run, the agent checks its goals, sees its progress, and adjusts its approach.
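A goal with progress tracking might look like this. The `Goal` shape and `updateGoal` helper are assumptions sketched from the description above, not the agent-memory-store code:

```typescript
// Sketch: a goal carries a status and a 0..1 progress value; updating
// progress to 1 flips the status to "achieved".
interface Goal {
  id: string;
  description: string;
  status: "active" | "achieved" | "abandoned";
  progress: number; // 0..1
}

function updateGoal(goal: Goal, progress: number): Goal {
  const clamped = Math.min(1, Math.max(0, progress));
  return {
    ...goal,
    progress: clamped,
    status: clamped >= 1 ? "achieved" : goal.status,
  };
}
```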

Research

Findings from investigations. The competitor-watch agent accumulates research about competitor positioning, pricing strategies, and feature launches. The content engine accumulates research about what topics resonate with the audience. Each research entry includes source attribution — where the finding came from and when.

Plans

Current strategies and approaches. Limited to 10 active plans per agent. Plans are higher-level than individual actions — they describe an approach the agent is taking. "Focus outreach on companies using competitor X" or "Create a blog series comparing AI coding tools."

Plans give agents strategic continuity. Without them, an agent might flip strategies every run based on the most recent data point. With plans, the agent commits to an approach and evaluates it over multiple runs before abandoning it.
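The 10-plan cap could be enforced the same way lessons are pruned: keep the newest active plans, drop the overflow. A sketch with assumed field names:

```typescript
// Sketch: cap active plans at 10, newest first; completed/abandoned
// plans are left alone (they may be pruned by a separate rule).
interface Plan {
  id: string;
  approach: string;
  createdAt: number;
  active: boolean;
}

function capPlans(plans: Plan[], cap = 10): Plan[] {
  const active = plans
    .filter((p) => p.active)
    .sort((a, b) => b.createdAt - a.createdAt)
    .slice(0, cap);
  const inactive = plans.filter((p) => !p.active);
  return [...active, ...inactive];
}
```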

Performance digest: the feedback loop

Persistent memory gives agents knowledge about themselves. But they also need feedback on the impact of their work.

Content, campaign, and growth agents get a performance digest injected into their context each run. This digest shows metrics on the content they've created, the campaigns they've launched, and the growth patterns they've identified.

The loop works like this:

  1. Agent creates content (blog post, email campaign, social thread)
  2. Performance is measured over subsequent days (views, clicks, conversions, replies)
  3. Next time the agent runs, performance data is included in its context
  4. Agent sees what worked and what didn't
  5. Agent adjusts its approach for the next piece of content
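Steps 2 and 3 above, condensed: given measured performance per content piece, produce the short summary the agent sees next run. All names here are illustrative assumptions, not the real digest code:

```typescript
// Sketch: rank content by conversions and render the top performers
// as a compact digest for the next run's context.
interface ContentPerf {
  title: string;
  contentType: string;
  views: number;
  conversions: number;
}

function buildPerformanceDigest(items: ContentPerf[], top = 3): string {
  const ranked = [...items].sort((a, b) => b.conversions - a.conversions);
  return ranked
    .slice(0, top)
    .map((i) => `${i.contentType} "${i.title}": ${i.views} views, ${i.conversions} conversions`)
    .join("\n");
}
```

Ranking by conversions rather than views is a deliberate choice in this sketch: it steers the agent toward content that drives outcomes, not just traffic.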

This is how agents get better over time without explicit retraining. The content engine doesn't need someone to tell it "write more technical deep-dives." It sees that its technical posts got 4x more engagement than listicles, records a lesson, and shifts its strategy.

The feedback loop closes the gap between action and outcome. Without it, agents operate blindly — producing content, sending emails, running campaigns, but never learning from the results.

How it all fits together

Here's the full picture:

┌──────────────────────────────────────────────────────┐
│               AGENT COORDINATION LAYER               │
│                                                      │
│  ┌────────────┐   ┌────────────┐   ┌────────────┐    │
│  │ Talkspaces │   │ Event Bus  │   │ Awareness  │    │
│  │ (async     │   │(structured │   │ (filtered  │    │
│  │ channels)  │   │ signals)   │   │ world view)│    │
│  └─────┬──────┘   └─────┬──────┘   └─────┬──────┘    │
│        │                │                │           │
│        └────────────────┼────────────────┘           │
│                         │                            │
│                ┌────────▼────────┐  ┌──────────────┐ │
│                │     Context     │  │  Persistent  │ │
│                │     Builder     │  │    Memory    │ │
│                └────────┬────────┘  └──────┬───────┘ │
│                         │                  │         │
│                         └────────┬─────────┘         │
│                                  │                   │
│                           ┌──────▼───────┐           │
│                           │ Agent Prompt │           │
│                           └──────────────┘           │
└──────────────────────────────────────────────────────┘

Four mechanisms, each handling a different type of coordination:

Talkspaces handle unstructured conversation. When an agent has a nuanced observation, a strategic suggestion, or a question for other agents, it posts to a talkspace. Other agents see it in their context digest or read it explicitly. This is the "water cooler" of the system — where insights get shared in natural language.

The event bus handles structured signals. Discrete, schema-validated events that other agents can programmatically react to. "Lead converted." "Competitor changed pricing." "Content draft ready." No ambiguity, no interpretation needed — just facts with required fields.

The awareness index provides a filtered snapshot of system state. Each agent sees only the collaborators, events, and channels relevant to its role. This keeps context tight and prevents agents from getting confused by information they don't need.

Persistent memory gives each agent continuity across runs. Lessons learned, goals being pursued, research accumulated, strategies in play. Without this, every agent run would start from zero.

The context builder is where everything converges. It takes talkspace digests, relevant events, the filtered awareness snapshot, and the agent's persistent memory, and assembles them into a single block that gets injected into the agent's system prompt. When the agent wakes up, it immediately knows: what's happening in the system, what its collaborators are doing, what events it needs to process, what it learned last time, and what goals it's working toward.

That's the coordination layer. No shared database. No message broker. No external services. Just JSON files on disk, smart filtering, and a context builder that knows what each agent needs to see.

The result: 15 agents that operate independently but act like a coordinated team. Each one only wakes up for a few minutes at a time, but between persistent memory and the coordination layer, it picks up exactly where it left off — with full awareness of what happened while it was asleep.