
Content Operations Agents

Learn how to build a production content operations system using four specialized agents: a Trend Monitor that scans Twitter, GitHub, and Reddit for breaking signals; a Content Engine that writes posts matching your brand voice; a Publishing Agent that handles LinkedIn with rate limit awareness; and a Competitive Watch agent that tracks gaps and opportunities. Includes TypeScript code, tool configurations, brand voice enforcement, and real metrics from The AI University's own content pipeline.

Last updated: 2026-03-02


Most content teams are running at a fraction of their potential output — not because they lack good writers, but because the machinery around writing is broken. Monitoring trends manually. Waiting on research. Writing one post at a time. Forgetting to publish. Checking engagement days later when it no longer matters.

AI agents do not replace the judgment at the center of content strategy. They eliminate the mechanical work that surrounds it. The result is a team that publishes faster, monitors more signals, and spends their time on creative decisions instead of logistics.

This guide covers the full content pipeline from signal to publish, the four agents that power it, and the specific implementation decisions — tools, prompts, rate limits, quality checks — that determine whether the system produces content worth reading or garbage at scale.

We run this pipeline ourselves at The AI University. The agents you will see described here are the ones running in production.


The Content Pipeline

Every piece of content passes through six stages. Agents own the stages where speed, volume, and consistency matter. Humans stay in the loop at the stages where judgment, relationships, and creative direction matter.

Monitor      Research     Create       Review       Publish      Analyze
  |            |            |            |             |            |
Scan       Pull sources  Write draft  Quality      Post with    Track
signals,   and context.  matching     check:       rate limit   engagement,
identify   Summarize     brand        facts,       awareness.   surface
breaking   for writer.   voice.       tone,        Optimal      winners,
trends.                               original-    timing.      feed back.
                                      ity.

The Trend Monitor feeds the Content Engine. The Content Engine produces drafts. Drafts go through quality control before the Publishing Agent handles distribution. Engagement data flows back to the Trend Monitor to sharpen what signals it prioritizes.

This is a pipeline pattern at the macro level, with a handoff between each stage. If you are not familiar with these architecture patterns, the Architecture Patterns guide covers the six core structures and when to use each.


Agent 1: Trend Monitor

The Trend Monitor runs on a schedule — every four hours in our system — and scans three sources: Twitter/X for real-time conversation spikes, GitHub trending for what engineers are actually building, and Reddit for community-driven discussions that have not yet broken into mainstream coverage.

The goal is not to track everything. The goal is to surface the three to five signals that matter most to your audience right now, before everyone else is writing about them.

What it does

The agent calls search_twitter for keyword clusters relevant to your domain. It calls search_github_trending to see what repositories are gaining stars. It calls search_reddit with subreddit targeting to catch discussions that have high engagement but low mainstream coverage. It then scores each signal by estimated novelty, relevance to your audience, and velocity — how fast the signal is growing.

Signals that cross a threshold get written to the content backlog with a priority score, the raw source links, and a one-paragraph summary of why this matters to your audience right now.
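The threshold check itself is simple enough to sketch. This is a minimal illustration, not the agent's actual scoring (which happens inside the model via its prompt); the field names are assumptions for the example:

```typescript
// Minimal sketch of the Trend Monitor's backlog threshold.
// Field names are illustrative; the real scoring is done by the agent.
interface RawSignal {
  topic: string;
  novelty: number;   // 1-10: how fresh is this in our space?
  relevance: number; // 1-10: how directly does it affect the audience?
  velocity: number;  // 1-10: engagement growth over the last 6 hours
}

const BACKLOG_THRESHOLD = 20; // combined score required to enter the backlog

function combinedScore(s: RawSignal): number {
  return s.novelty + s.relevance + s.velocity;
}

function crossesThreshold(s: RawSignal): boolean {
  return combinedScore(s) > BACKLOG_THRESHOLD;
}
```

A signal scoring 9/8/7 enters the backlog; a uniformly mediocre 5/5/5 does not.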

Tools required

// tools needed by the Trend Monitor agent
const trendMonitorTools = [
  "search_twitter",          // search by keywords, returns recent tweets with engagement metrics
  "search_reddit",           // search by subreddit and keyword, returns posts with score and comment count
  "search_github_trending",  // returns trending repos by language and time window
  "save_content_draft",      // writes signal summaries to the content backlog
];
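The names above are shorthand. In the Anthropic SDK, each tool is declared with a name, a description, and a JSON Schema for its input. Here is what search_twitter might look like — the parameter names are our assumptions, since the tool implementation is yours:

```typescript
// Hypothetical tool definition for search_twitter in the Anthropic SDK's
// tool format. The input parameters are illustrative, not a fixed API.
const searchTwitterTool = {
  name: "search_twitter",
  description:
    "Search recent tweets by keyword cluster. Returns tweets with engagement metrics.",
  input_schema: {
    type: "object" as const,
    properties: {
      keywords: {
        type: "array",
        items: { type: "string" },
        description: "Keyword cluster to search for",
      },
      max_age_hours: {
        type: "number",
        description: "Only return tweets newer than this many hours",
      },
    },
    required: ["keywords"],
  },
};
```

The other three tools follow the same shape. Writing tight descriptions here matters as much as the system prompt — the model decides when and how to call a tool based on them.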

Prompt design

The system prompt needs to define your domain clearly, specify what novelty means for your audience, and set concrete criteria for what crosses the threshold into the backlog. Vague prompts produce vague rankings.

const trendMonitorSystemPrompt = `
You are a trend monitoring agent for The AI University.

Your audience: developers and technical founders building with AI agents.

Your job: scan Twitter, Reddit (r/MachineLearning, r/LocalLLaMA, r/programming),
and GitHub trending to identify breaking signals in AI tooling, agent frameworks,
model releases, and developer workflow tools.

Score each signal on three dimensions (1-10 each):
- Novelty: Has this been widely covered in our space yet? (10 = totally fresh)
- Relevance: How directly does this affect our audience's work?
- Velocity: How fast is engagement growing in the last 6 hours?

Only save signals with a combined score above 20. Include:
- Source links
- One paragraph explaining why this matters to our audience
- Suggested content angle (not a title, an angle)

Do not save signals about: general business news, politics, consumer products
unrelated to developer tooling, or topics we have covered in the last 7 days.
`.trim();

The exclusion list at the end is load-bearing. Without it, the backlog fills with noise.


Agent 2: Content Engine

The Content Engine picks up tasks from the marketing backlog — which is populated both by the Trend Monitor and by manual additions from the team — and writes posts that match your brand voice.

This is the agent where brand voice enforcement does the most work. The difference between a post that sounds like your company and one that sounds like every other AI newsletter comes down entirely to what you encode in the system prompt.

Brand voice enforcement

Brand voice is not a list of adjectives. It is a set of concrete rules about sentence structure, perspective, what you include, and what you never say. Generic guidance like "be conversational" produces generic content. Specific rules produce recognizable content.

At The AI University, our brand voice rules look like this:

const brandVoiceRules = `
VOICE AND TONE:
- Write like a builder teaching other builders. No cheerleading. No hype.
- Use "you" and "we" — this is a conversation, not a lecture.
- Short sentences. Cut any sentence that can be split. Cut any sentence that
  exists to sound impressive.
- One idea per paragraph. White space is not wasted space.

WHAT WE INCLUDE:
- Specific numbers and timeframes when making claims about performance or results.
- The tradeoff or limitation alongside every capability we describe.
- Code when it makes the concept clearer. Skip code when prose is faster.

WHAT WE NEVER SAY:
- "Revolutionize", "game-changing", "powerful", "robust", "seamless"
- Passive voice constructions that hide the actor ("it was found that...")
- Rhetorical questions as a substitute for making a point
- Predictions about AI replacing entire industries or professions

FORMAT FOR LINKEDIN:
- Hook in the first line — no preamble, no "I wanted to share..."
- Three to five short paragraphs
- End with one concrete thing the reader can do with this information
- No hashtag blocks. One or two relevant hashtags maximum, in the body.
`.trim();

These rules go directly into the Content Engine's system prompt. Every post runs through them at generation time — not as a post-processing check, but as a constraint the agent writes within.

How it handles the backlog

const contentEngineSystemPrompt = `
You are the Content Engine for The AI University.

You receive a content task from the marketing backlog. Each task includes:
- topic: what to write about
- angle: the specific perspective to take
- source_links: primary sources to draw from
- priority: urgency level

Your output is a LinkedIn post draft that:
1. Follows the brand voice rules exactly (provided below)
2. Draws only from the provided sources — do not invent statistics
3. Ends with a call to action that is specific and actionable
4. Is between 150 and 300 words

${brandVoiceRules}

Call get_brand_context before writing to retrieve any recent posts on this topic.
If we have covered this topic in the last 14 days, adjust the angle to avoid
repetition.

Call save_content_draft when the post is ready. Status: "pending_review".
`.trim();

Tools required

const contentEngineTools = [
  "get_brand_context",   // retrieves recent published posts, brand guidelines, style examples
  "save_content_draft",  // saves draft with status field: pending_review, approved, published
];

Agent 3: Publishing Agent

The Publishing Agent handles a problem that sounds simple and is not: posting to LinkedIn reliably, at the right time, without violating platform limits.

LinkedIn throttles automated posting. If you push too fast, you lose reach. The Publishing Agent enforces a hard rule: no more than two posts per five-hour window. It calls check_linkedin_status before every publish attempt to see the current post count in the window and the time until the window resets.

It also handles timing. Posting on LinkedIn at 6 AM on a Saturday is different from posting at 9 AM on a Tuesday. The agent uses a simple scoring function based on day of week and hour to select the next optimal publish window rather than posting immediately.
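A plain function version of that scoring, using the windows from our publishing prompt. The specific scores are ours — treat this as a sketch to tune against your own audience, not a universal model. All times are UTC:

```typescript
// Sketch of the day-of-week / hour scoring used to pick a publish window.
// Scores mirror our own publishing prompt; tune them for your audience.
function timingScore(date: Date): number {
  const day = date.getUTCDay(); // 0 = Sunday, 6 = Saturday
  const hour = date.getUTCHours();

  if (day === 0 || day === 6) return 3; // weekend: never above threshold

  const midweek = day >= 2 && day <= 4; // Tue, Wed, Thu
  if (midweek && hour >= 7 && hour < 9) return 10;
  if (midweek && hour >= 11 && hour < 13) return 9;
  if ((day === 1 || day === 5) && hour >= 7 && hour < 9) return 7; // Mon, Fri
  return 4; // off-peak weekday hours: below the publish threshold of 6
}
```

Anything below 6 gets queued for the next high-score window instead of publishing immediately.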

Rate limit implementation

const publishingAgentSystemPrompt = `
You are the LinkedIn Publishing Agent for The AI University.

Before publishing any post, you must:
1. Call check_linkedin_status to get current post count and window reset time
2. If posts_in_window >= 2, do NOT publish — log the post as "queued" and halt
3. If posts_in_window < 2, check the optimal timing score for the current hour
4. If timing score < 6 (scale of 1-10), queue for the next high-score window
5. If timing score >= 6 and posts_in_window < 2, proceed to publish

Optimal timing windows (UTC+0):
- Tuesday, Wednesday, Thursday: 07:00-09:00 (score: 10), 11:00-13:00 (score: 9)
- Monday, Friday: 07:00-09:00 (score: 7)
- Weekend: all windows score 3 or below — do not publish

When publishing:
- Retrieve the approved draft from the content backlog
- Call publish_to_linkedin with the post body
- Update the draft status to "published" with timestamp
- Log the post_id returned by LinkedIn for engagement tracking

If publish_to_linkedin returns an error, do not retry immediately.
Log the error and status as "publish_failed" for manual review.
`.trim();

Tools required

const publishingAgentTools = [
  "check_linkedin_status",   // returns: posts_in_window, window_reset_timestamp, account_status
  "publish_to_linkedin",     // returns: post_id, published_at, estimated_reach
  "save_content_draft",      // used to update status field on drafts
];

The error handling at the end matters. Publishing agents that silently retry on failure can double-post, which damages credibility faster than missing a publishing window.


Agent 4: Competitive Watch

The Competitive Watch agent runs weekly and monitors competitor content output. It tracks what topics competitors are covering, how their audience is engaging, and — more importantly — what they are not covering.

Content gaps are the highest-leverage signal the Trend Monitor does not catch. If three competitors are posting heavily about one topic and none of them has addressed a specific angle, that gap is a direct opportunity.

What it monitors

The agent takes a list of competitor LinkedIn profiles and runs a structured analysis: what have they posted in the last seven days, what is getting high engagement on their posts, and what topics in our domain are conspicuously absent from their output.

const competitiveWatchSystemPrompt = `
You are the Competitive Watch agent for The AI University.

Your competitor list: [list of competitor handles and LinkedIn URLs]

Your job this week:
1. Search for recent content from each competitor using search_twitter
2. Identify their top three posts by engagement from the last 7 days
3. Identify topics they have NOT covered that are active in our space
4. Rate each gap by: audience interest (based on discussion volume),
   our ability to cover it credibly, and strategic value

Output a competitive intelligence report with:
- Competitor activity summary (brief)
- Top 5 content gaps, ranked by opportunity score
- Three topic recommendations for our content backlog, with suggested angles

Save recommendations to content backlog with source: "competitive-watch"
and priority based on opportunity score.
`.trim();
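The gap rating in step 4 reduces to a small scoring helper. The dimension names mirror the prompt; the equal weighting is an assumption you would tune:

```typescript
// Sketch of the gap opportunity score from the Competitive Watch prompt.
// Equal weighting of the three dimensions is an assumption, not a rule.
interface ContentGap {
  topic: string;
  audienceInterest: number; // 1-10, from discussion volume
  credibility: number;      // 1-10, our ability to cover it credibly
  strategicValue: number;   // 1-10
}

function opportunityScore(gap: ContentGap): number {
  return gap.audienceInterest + gap.credibility + gap.strategicValue;
}

function topGaps(gaps: ContentGap[], n = 5): ContentGap[] {
  return [...gaps]
    .sort((a, b) => opportunityScore(b) - opportunityScore(a))
    .slice(0, n);
}
```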

Full Pipeline: TypeScript Implementation

Here is how these four agents wire together into a single content operations system. This is the pipeline pattern — each stage hands off to the next with a structured payload.

// content-pipeline.ts
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

interface ContentSignal {
  topic: string;
  angle: string;
  sourceLinks: string[];
  noveltyScore: number;
  relevanceScore: number;
  velocityScore: number;
}

interface ContentDraft {
  id: string;
  body: string;
  topic: string;
  status: "pending_review" | "approved" | "queued" | "published" | "publish_failed";
  createdAt: number;
  publishedAt?: number;
  linkedinPostId?: string;
}

interface PipelineContext {
  runId: string;
  startedAt: number;
  signals: ContentSignal[];
  drafts: ContentDraft[];
  publishedCount: number;
  errors: string[];
}

// Stage 1: Trend Monitor — discover signals
async function runTrendMonitor(ctx: PipelineContext): Promise<PipelineContext> {
  const response = await client.messages.create({
    model: "claude-opus-4-6",
    max_tokens: 4096,
    system: trendMonitorSystemPrompt,
    messages: [
      {
        role: "user",
        content: `Run your trend scan now. Current time: ${new Date().toISOString()}.
Look for signals published or gaining traction in the last 6 hours.`,
      },
    ],
    tools: trendMonitorToolDefinitions,
  });

  // Tool calls are handled inside the agent loop (not shown for brevity)
  // Signals are written to the backlog via save_content_draft tool
  const signalsFound = parseSignalsFromResponse(response);

  return {
    ...ctx,
    signals: [...ctx.signals, ...signalsFound],
  };
}

// Stage 2: Content Engine — draft posts for top signals
async function runContentEngine(ctx: PipelineContext): Promise<PipelineContext> {
  // Only process high-priority signals (combined score > 24)
  const prioritySignals = ctx.signals
    .filter((s) => s.noveltyScore + s.relevanceScore + s.velocityScore > 24)
    .slice(0, 3); // Max 3 drafts per run

  const drafts: ContentDraft[] = [];

  for (const signal of prioritySignals) {
    const response = await client.messages.create({
      model: "claude-opus-4-6",
      max_tokens: 2048,
      system: contentEngineSystemPrompt,
      messages: [
        {
          role: "user",
          content: JSON.stringify({
            topic: signal.topic,
            angle: signal.angle,
            source_links: signal.sourceLinks,
            priority: signal.velocityScore > 8 ? "urgent" : "normal",
          }),
        },
      ],
      tools: contentEngineToolDefinitions,
    });

    const draft = parseDraftFromResponse(response);
    if (draft) drafts.push(draft);
  }

  return { ...ctx, drafts: [...ctx.drafts, ...drafts] };
}

// Stage 3: Quality Control — fact-check and originality check before queue
async function runQualityControl(ctx: PipelineContext): Promise<PipelineContext> {
  const checkedDrafts: ContentDraft[] = [];

  for (const draft of ctx.drafts.filter((d) => d.status === "pending_review")) {
    const response = await client.messages.create({
      model: "claude-opus-4-6",
      max_tokens: 1024,
      system: `You are a content quality reviewer.

Check the draft against these criteria:
1. FACTUAL: Are all specific claims (numbers, dates, attributions) sourced or clearly marked as estimates?
2. TONE: Does it match the brand voice rules? Flag any hype language or passive constructions.
3. ORIGINALITY: Is this substantially different from generic takes on this topic?
4. LENGTH: Is it 150-300 words?

Return JSON: {
  approved: boolean,
  issues: string[],  // empty if approved
  revised_body: string | null  // provide corrected version if fixable
}`,
      messages: [
        {
          role: "user",
          content: `Review this draft:\n\n${draft.body}`,
        },
      ],
    });

    // extractText (not shown) pulls the text block from the response.
    // This assumes the reviewer returns pure JSON; a malformed reply throws.
    const review = JSON.parse(extractText(response));

    if (review.approved) {
      checkedDrafts.push({ ...draft, status: "approved" });
    } else if (review.revised_body) {
      // Auto-fix minor issues, mark approved
      checkedDrafts.push({
        ...draft,
        body: review.revised_body,
        status: "approved",
      });
    } else {
      // Flag for human review — too many issues to auto-fix
      checkedDrafts.push({ ...draft, status: "pending_review" });
      ctx.errors.push(`Draft ${draft.id} failed QC: ${review.issues.join(", ")}`);
    }
  }

  return {
    ...ctx,
    drafts: [
      ...ctx.drafts.filter((d) => d.status !== "pending_review"),
      ...checkedDrafts,
    ],
  };
}

// Stage 4: Publishing — post approved drafts with rate limit awareness
async function runPublishingAgent(ctx: PipelineContext): Promise<PipelineContext> {
  const approvedDrafts = ctx.drafts.filter((d) => d.status === "approved");

  const response = await client.messages.create({
    model: "claude-opus-4-6",
    max_tokens: 2048,
    system: publishingAgentSystemPrompt,
    messages: [
      {
        role: "user",
        content: `You have ${approvedDrafts.length} approved drafts to publish.
Check rate limits and timing, then publish what you can.
Queue the rest with estimated publish time.

Drafts: ${JSON.stringify(approvedDrafts)}`,
      },
    ],
    tools: publishingAgentToolDefinitions,
  });

  const publishResults = parsePublishResultsFromResponse(response);

  return {
    ...ctx,
    publishedCount: ctx.publishedCount + publishResults.published,
    drafts: applyPublishResults(ctx.drafts, publishResults),
  };
}

// Orchestrate the full pipeline
async function runContentPipeline(): Promise<PipelineContext> {
  console.log(`[content-pipeline] Starting run at ${new Date().toISOString()}`);

  let ctx: PipelineContext = {
    runId: crypto.randomUUID(), // crypto is a global in Node 19+
    startedAt: Date.now(),
    signals: [],
    drafts: [],
    publishedCount: 0,
    errors: [],
  };

  ctx = await runTrendMonitor(ctx);
  console.log(`[content-pipeline] Signals found: ${ctx.signals.length}`);

  ctx = await runContentEngine(ctx);
  console.log(`[content-pipeline] Drafts created: ${ctx.drafts.length}`);

  ctx = await runQualityControl(ctx);
  const approved = ctx.drafts.filter((d) => d.status === "approved").length;
  console.log(`[content-pipeline] Approved drafts: ${approved}`);

  ctx = await runPublishingAgent(ctx);
  console.log(`[content-pipeline] Published: ${ctx.publishedCount}`);

  if (ctx.errors.length > 0) {
    console.warn(`[content-pipeline] Errors:`, ctx.errors);
  }

  return ctx;
}

The quality control stage in the middle is the gate that keeps volume from destroying quality. Without it, you have a fast way to produce mediocre content at scale. With it, you have a system that enforces your standards automatically and surfaces the drafts that need human attention.
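To run the pipeline on a cadence, a small scheduler is enough. This sketch uses a plain interval loop rather than assuming a cron library — swap in whatever scheduler you already run. runContentPipeline is the orchestrator defined above:

```typescript
// Minimal scheduler sketch: run the pipeline every four hours.
const FOUR_HOURS_MS = 4 * 60 * 60 * 1000;

function schedulePipeline(
  run: () => Promise<unknown>,
): ReturnType<typeof setInterval> {
  // Fire immediately, then on a fixed interval. Errors are logged,
  // not rethrown, so one failed run does not kill the schedule.
  const safeRun = () =>
    run().catch((err) => console.error("[content-pipeline] run failed:", err));
  void safeRun();
  return setInterval(safeRun, FOUR_HOURS_MS);
}
```

Usage is one line: `schedulePipeline(runContentPipeline)`. In production you would also want the four fixed UTC run times rather than a rolling interval, but the failure-isolation point is the same.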


Content Quality Control

There are three categories of quality failure in automated content systems, and each requires a different defense.

Factual errors happen when an agent invents statistics or misattributes claims. The defense is source-grounding: the Content Engine only draws from the source links provided by the Trend Monitor, and its system prompt prohibits adding facts not present in those sources. A prompt prohibition alone is not a guarantee, which is why the quality control step also checks for unverifiable specific claims and flags them before they reach the publishing queue.

Tone drift happens when the agent defaults to generic AI-assistant language instead of your brand voice. The defense is specificity in your exclusion list. "Do not use hype language" does not work. "Do not use the words revolutionize, game-changing, powerful, robust, or seamless" works. The difference is that specific word prohibitions can be mechanically checked — by the agent itself in the quality control step.
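Because the prohibitions are specific words, they can also be double-checked outside the model entirely. A trivial guard like this catches regressions before the QC agent even runs:

```typescript
// Mechanical check for the banned-word list. Runs outside the model,
// as a cheap pre-filter before the QC agent reviews the draft.
const BANNED_WORDS = [
  "revolutionize",
  "game-changing",
  "powerful",
  "robust",
  "seamless",
];

function findBannedWords(draft: string): string[] {
  const lower = draft.toLowerCase();
  return BANNED_WORDS.filter((w) => lower.includes(w));
}
```

If the list grows, word-boundary matching (a regex per word) avoids false positives on substrings; plain `includes` is enough for a list this short.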

Originality failures happen when the agent produces the common take on a trending topic instead of the specific angle you assigned. The defense is the angle field in the content task. The Trend Monitor does not just flag a topic — it specifies the angle. The Content Engine's system prompt instructs it to use that angle rather than defaulting to the most obvious framing.


How We Use This at The AI University

Our Trend Monitor runs at 06:00, 10:00, 14:00, and 18:00 UTC. It scans for AI tooling signals, agent framework releases, developer workflow changes, and university/education topics relevant to our audience.

The strongest signals — typically one or two per day — flow into the Content Engine within the same run. Drafts are written, quality-checked automatically, and placed in the approved queue. The Publishing Agent checks the queue at each run and publishes when the timing window and rate limit conditions are met.

The Competitive Watch agent runs on Sundays and seeds the backlog with gap-based topic recommendations for the coming week. These tend to produce our highest-engagement posts because we are covering terrain that nobody else has addressed yet.

In the last 90 days running this system:

  • Average posts published per week: 9 (up from 3 manual)
  • Time spent on content operations per week: 2 hours (down from 11)
  • Average engagement rate: 4.1% (up from 2.8% manual)
  • Zero factual corrections issued

The engagement rate improvement is not from volume. Posting three times more often does not automatically triple your engagement. It comes from the Trend Monitor catching signals 12-18 hours before the mainstream AI newsletter cycle, which means our audience sees our take first.


Metrics That Matter

Track these four numbers to evaluate your content operations system:

Signal-to-publish rate: Of all signals the Trend Monitor flags, what percentage end up as published posts? If this is too high (above 60%), your thresholds are too loose and you are publishing too much. If it is too low (below 15%), your scoring is too conservative.

Time from signal to published post: In our system this runs 4-8 hours depending on timing windows. The goal is same-day publishing for high-velocity signals. If your time-to-publish is consistently above 24 hours, there is a bottleneck somewhere in the pipeline.

QC pass rate: What percentage of drafts pass quality control without human intervention? Below 60% means your Content Engine's prompt needs work. Above 95% means your QC threshold may be too loose — some bad posts are getting through automatically.

Engagement per post: Track this per agent source (Trend Monitor-sourced vs. Competitive Watch-sourced vs. manual backlog). The breakdown tells you which signal source is producing your best content.
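The first and third metrics fall out of the pipeline context directly. A sketch, assuming you log one record per run — the RunLog shape is illustrative:

```typescript
// Sketch of two pipeline health metrics. The log shape is illustrative.
interface RunLog {
  signalsFlagged: number;
  draftsCreated: number;
  draftsPassedQcAutomatically: number;
  postsPublished: number;
}

// Signal-to-publish rate: healthy roughly between 15% and 60%.
function signalToPublishRate(logs: RunLog[]): number {
  const signals = logs.reduce((n, l) => n + l.signalsFlagged, 0);
  const published = logs.reduce((n, l) => n + l.postsPublished, 0);
  return signals === 0 ? 0 : published / signals;
}

// QC pass rate: below 0.6 means prompt work; above 0.95 means a loose gate.
function qcPassRate(logs: RunLog[]): number {
  const drafts = logs.reduce((n, l) => n + l.draftsCreated, 0);
  const passed = logs.reduce((n, l) => n + l.draftsPassedQcAutomatically, 0);
  return drafts === 0 ? 0 : passed / drafts;
}
```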


Key Takeaways

Content operations agents work because content work is genuinely repetitive and latency-sensitive. The mechanical parts — monitoring dozens of sources, pulling research, formatting to spec, checking timing, managing rate limits — are exactly the tasks where agents outperform humans. The judgment parts — what your brand stands for, what angle serves your audience, what is actually worth publishing — belong to you.

Build the Trend Monitor first and run it for two weeks before touching the rest of the pipeline. You need to understand what it surfaces and how good your scoring is before you start automating the downstream steps. A Content Engine fed bad signals will produce a lot of content quickly that nobody cares about.

Encode brand voice as specific prohibitions, not abstract descriptions. Specific rules are mechanically enforceable. Abstract guidelines are not.

Treat rate limits as hard constraints at the architecture level, not soft suggestions in the publishing step. LinkedIn throttling recovers, but the reach penalty from aggressive posting lasts longer than most people expect.

The quality control step is not optional. At any volume above one post per day, you need an automated gate that catches tone drift, unverifiable claims, and originality failures before they reach your audience. The cost of one bad post from an automated system is higher than the cost of a slightly slower publishing rate.

Start with three agents — Monitor, Content Engine, Publishing — and add Competitive Watch after the core pipeline is stable. Four agents running reliably is better than six agents running inconsistently.