Sales Automation Agents
A practical walkthrough of the four core agents in an AI-powered sales pipeline: Prospector, Outreach, Reply Handler, and Onboarding. Covers tool requirements, a TypeScript lead-scoring implementation, real-world metrics, common mistakes, and the human-in-the-loop decisions you cannot afford to automate.
Sales is the domain where most teams reach for AI automation first and get burned fastest. Not because agents cannot do the work — they absolutely can — but because the mistakes are expensive. A bad customer support reply costs you one unhappy user. A bad outreach sequence costs you a burnt lead list and a reputation hit with everyone on it.
This guide covers how to build a sales automation system that actually works in production. We run a 15-agent system at The AI University that includes outreach, reply handling, and onboarding agents. What follows is the architecture, the code, and the hard lessons about what to automate and what to leave to humans.
The Pipeline Agents Handle
Sales is a sequence of gates. A prospect moves from unknown to aware to interested to qualified to customer. Each gate requires different work and different judgment. Here is how the pipeline maps to agents:
[Raw Lead Data]
|
v
[1. Prospector Agent]
- Finds leads matching ICP
- Enriches with LinkedIn, web data
- Scores and prioritizes
|
v
[2. Outreach Agent]
- Writes personalized cold emails
- Manages send timing and cadence
- Saves outreach records to CRM
|
v
[3. Reply Handler Agent]
- Monitors inbox for replies
- Classifies intent
- Drafts appropriate responses
|
v
[4. Onboarding Agent]
- Triggers welcome sequences
- Recommends learning paths
- Tracks activation milestones
|
v
[Active Customer]
Each agent in this pipeline has a narrow job. None of them try to handle the full sales motion. That narrowness is what makes them reliable.
Agent 1: The Prospector
The Prospector's job is to find people who match your ideal customer profile, gather enough context about them to make outreach worthwhile, and produce a scored list that the Outreach agent can work from.
What it does
- Accepts a target segment definition — company size, industry, title, signals (recently funded, hiring for a specific role, posted about a relevant pain point)
- Searches LinkedIn, company websites, and industry sources to find matching individuals
- Enriches each prospect with public data: company size, funding stage, tech stack, recent news, the person's stated responsibilities
- Scores each prospect against the ICP and attaches reasoning
- Creates a prospect record in your CRM or data store
Tools it needs
const prospectorTools = [
  "search_web", // general web search for company and person data
  "search_linkedin", // LinkedIn profile and company page lookup
  "enrich_lead", // third-party enrichment (Clearbit, Apollo, etc.)
  "get_company_news", // recent press, funding rounds, job postings
  "save_prospect", // write the enriched record to your CRM
  "check_crm", // verify the lead does not already exist
];
Lead scoring implementation
Lead scoring is where most teams get vague. They ask the agent to "score the lead" with no scoring logic, and the results are inconsistent. The right approach is to define the scoring logic explicitly in code and let the agent handle the data gathering — not the scoring arithmetic.
// lead-scoring.ts

interface LeadSignals {
  companySize: number; // number of employees
  annualRevenue: number; // estimated ARR in USD
  hasRaisedFunding: boolean;
  monthsSinceLastFunding: number;
  titleSeniority: "ic" | "manager" | "director" | "vp" | "c-suite";
  technographics: string[]; // tools and platforms they use
  recentSignals: string[]; // job postings, news, LinkedIn activity
  engagementHistory: {
    visitedSite: boolean;
    downloadedContent: boolean;
    attendedWebinar: boolean;
  };
}

interface LeadScore {
  total: number; // 0 to 100
  tier: "A" | "B" | "C" | "D";
  breakdown: Record<string, number>;
  disqualified: boolean;
  disqualifyReason?: string;
}

function scoreLeadFirmographics(signals: LeadSignals): number {
  let score = 0;

  // Company size sweet spot: 20-500 employees
  if (signals.companySize >= 20 && signals.companySize <= 100) score += 20;
  else if (signals.companySize > 100 && signals.companySize <= 500) score += 15;
  else if (signals.companySize > 500 && signals.companySize <= 2000) score += 8;
  // Under 20 or over 2000 gets 0 — not our segment

  // Revenue signal
  if (signals.annualRevenue >= 1_000_000 && signals.annualRevenue <= 10_000_000) score += 15;
  else if (signals.annualRevenue > 10_000_000) score += 10;

  // Recent funding is a strong buying signal
  if (signals.hasRaisedFunding && signals.monthsSinceLastFunding <= 12) score += 20;
  else if (signals.hasRaisedFunding && signals.monthsSinceLastFunding <= 24) score += 10;

  return score;
}

function scoreLeadBehavior(signals: LeadSignals): number {
  let score = 0;

  // Title seniority — we need someone with budget authority
  const seniorityScores: Record<string, number> = {
    "c-suite": 20,
    "vp": 18,
    "director": 15,
    "manager": 8,
    "ic": 2,
  };
  score += seniorityScores[signals.titleSeniority] ?? 0;

  // Intent signals from engagement history
  if (signals.engagementHistory.visitedSite) score += 5;
  if (signals.engagementHistory.downloadedContent) score += 8;
  if (signals.engagementHistory.attendedWebinar) score += 12;

  return score;
}

function scoreLead(signals: LeadSignals): LeadScore {
  // Hard disqualification rules — check these before scoring
  if (signals.companySize < 10) {
    return {
      total: 0,
      tier: "D",
      breakdown: {},
      disqualified: true,
      disqualifyReason: "Company too small (under 10 employees)",
    };
  }

  const firmographicsScore = scoreLeadFirmographics(signals);
  const behaviorScore = scoreLeadBehavior(signals);
  const total = Math.min(100, firmographicsScore + behaviorScore);

  const tier: LeadScore["tier"] =
    total >= 75 ? "A" :
    total >= 55 ? "B" :
    total >= 35 ? "C" : "D";

  return {
    total,
    tier,
    breakdown: {
      firmographics: firmographicsScore,
      behavior: behaviorScore,
    },
    disqualified: false,
  };
}
The agent calls enrich_lead and search_web to gather the raw signals, then your deterministic scoreLead function produces a consistent score. Do not ask the LLM to do the arithmetic — it will drift. Let it do what it is good at: reading unstructured data and extracting signals.
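One practical wrinkle: the agent's extraction arrives as loosely typed JSON, so it helps to normalize it into a complete LeadSignals object before the deterministic scorer runs. A minimal glue sketch — the input shape and the defaults here are assumptions for illustration, not part of the original scoring logic:

```typescript
// Hypothetical glue layer between the agent's extraction and the scorer.
// Missing fields fall back to conservative defaults so scoreLead never
// sees undefined values.
interface RawExtraction {
  companySize?: number;
  annualRevenue?: number;
  hasRaisedFunding?: boolean;
  monthsSinceLastFunding?: number;
  titleSeniority?: "ic" | "manager" | "director" | "vp" | "c-suite";
  engagement?: {
    visitedSite?: boolean;
    downloadedContent?: boolean;
    attendedWebinar?: boolean;
  };
}

function normalizeSignals(raw: RawExtraction) {
  return {
    companySize: raw.companySize ?? 0,
    annualRevenue: raw.annualRevenue ?? 0,
    hasRaisedFunding: raw.hasRaisedFunding ?? false,
    // Infinity means "no recent funding signal" for the <= 12 / <= 24 checks
    monthsSinceLastFunding: raw.monthsSinceLastFunding ?? Infinity,
    titleSeniority: raw.titleSeniority ?? "ic",
    technographics: [] as string[],
    recentSignals: [] as string[],
    engagementHistory: {
      visitedSite: raw.engagement?.visitedSite ?? false,
      downloadedContent: raw.engagement?.downloadedContent ?? false,
      attendedWebinar: raw.engagement?.attendedWebinar ?? false,
    },
  };
}
```

The conservative defaults matter: an absent signal should score as zero, not crash the scorer or inflate the tier.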
Agent 2: The Outreach Agent
The Outreach agent takes a prioritized prospect list and writes personalized cold emails. Its quality is directly proportional to how much research the Prospector did. Garbage in, generic emails out.
What it does
- Pulls the top-tier prospects from the queue (typically A and B scores)
- Reads the enrichment data the Prospector gathered
- Identifies the most relevant angle — a recent company announcement, a shared connection, a pain point evident from their job postings
- Writes a personalized first email, a follow-up, and a final break-up email
- Schedules the sequence based on optimal send timing
- Saves each outreach record so the Reply Handler can match replies to the right sequence
Tools it needs
const outreachTools = [
  "get_prospect_record", // pull enriched lead data from CRM
  "get_send_timing", // optimal send window for this contact's timezone
  "write_email_draft", // internal tool — saves draft for human review
  "send_email", // fires the email via your sending infrastructure
  "save_outreach_record", // logs what was sent, when, and to whom
  "schedule_followup", // queues the follow-up in the sequence scheduler
];
Send timing matters
Most teams send all their outreach at once, or on whatever schedule is convenient for them. The data is clear: Tuesday through Thursday, 8-10 AM or 3-5 PM in the recipient's local time, outperforms other windows by a meaningful margin. The agent should check the prospect's timezone from their LinkedIn location data and schedule accordingly, not batch-fire at 9 AM EST regardless of where the recipient is.
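A timezone-aware scheduler for these rules can be sketched with the built-in Intl API — the helper names here are assumptions, not a real library:

```typescript
// Hypothetical helper: resolve the weekday and hour at a given instant
// in the recipient's local timezone.
function localParts(date: Date, timeZone: string): { day: string; hour: number } {
  const fmt = new Intl.DateTimeFormat("en-US", {
    timeZone,
    weekday: "short",
    hour: "numeric",
    hourCycle: "h23",
  });
  const parts = fmt.formatToParts(date);
  const day = parts.find((p) => p.type === "weekday")!.value;
  const hour = Number(parts.find((p) => p.type === "hour")!.value);
  return { day, hour };
}

// Tue-Thu, 8-10 AM or 3-5 PM in the recipient's local time.
function isInSendWindow(day: string, hour: number): boolean {
  const midweek = ["Tue", "Wed", "Thu"].includes(day);
  return midweek && ((hour >= 8 && hour < 10) || (hour >= 15 && hour < 17));
}

// Step forward hour by hour until we land in a valid window.
function nextSendTime(from: Date, timeZone: string): Date {
  const candidate = new Date(from);
  for (let i = 0; i < 24 * 8; i++) {
    const { day, hour } = localParts(candidate, timeZone);
    if (isInSendWindow(day, hour)) return candidate;
    candidate.setTime(candidate.getTime() + 60 * 60 * 1000);
  }
  return candidate; // unreachable within a week of hourly steps
}
```

The key design point is that the window check runs in the recipient's timezone, so a batch kicked off at 9 AM EST still lands at the right local hour for a prospect in London or Singapore.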
Personalization floor
Set a minimum personalization requirement in the system prompt. Every email the Outreach agent sends must reference at least one specific, verifiable fact about the prospect or their company — not generic flattery. If the Prospector did not return enough data to meet this floor, the agent should flag the lead for more research rather than send a weak email.
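The floor is more reliable when enforced as a deterministic pre-send check on the enrichment record rather than trusted to the prompt alone. A sketch, with field and function names assumed:

```typescript
// Hypothetical enrichment record shape — each array holds specific,
// verifiable facts the Prospector gathered.
interface EnrichmentRecord {
  recentNews: string[];
  jobPostings: string[];
  statedPriorities: string[];
  techStack: string[];
}

// Block the send if the record holds fewer specific facts than the floor;
// the caller should route the lead back to the Prospector instead.
function meetsPersonalizationFloor(e: EnrichmentRecord, floor = 1): boolean {
  const specificFacts =
    e.recentNews.length +
    e.jobPostings.length +
    e.statedPriorities.length +
    e.techStack.length;
  return specificFacts >= floor;
}
```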
Agent 3: The Reply Handler
The Reply Handler monitors your sales inbox and handles incoming responses. It is the most consequential agent in the pipeline because it operates on warm leads — people who have already engaged. Getting this wrong is more damaging than a bad cold email.
What it does
- Polls the inbox on a defined interval (or triggers on a webhook from your email provider)
- Matches each reply to the original outreach sequence and prospect record
- Classifies the reply intent: interested, not interested, objection, question, out-of-office, referral to someone else
- Drafts an appropriate response for each classification
- For interested replies: flags for immediate human review, does not auto-send
- For clear rejections: logs the outcome and removes from sequence
- For questions and objections: drafts a response for human review with suggested talking points
Intent classification
Classification needs to handle the full range of real replies, including ambiguous ones. A reply that says "send me more info" is not the same as "I am ready to buy" — but both are interested. A reply that says "not the right time" might mean now but not never. Train your agent to preserve that nuance rather than collapse everything into binary interested/not-interested.
type ReplyIntent =
  | "interested_hot"   // explicit buying signal, wants next step
  | "interested_warm"  // curious, wants more information
  | "objection_price"  // price concern, not a rejection
  | "objection_timing" // not now, revisit later
  | "objection_fit"    // questions whether this is right for them
  | "not_interested"   // clear rejection
  | "out_of_office"    // automated OOO, re-queue for their return
  | "referral"         // pointing to a better contact at the company
  | "unsubscribe";     // hard stop, remove from all sequences
The human-review gate for warm leads
Do not auto-send responses to interested prospects. This is the most important human-in-the-loop decision in the entire pipeline. When someone expresses genuine interest, the stakes are high enough that a human should review the draft before it goes out. The agent's job is to draft a strong response fast — ideally in seconds — so the human can review and send without delay, not to replace the human judgment entirely.
Agent 4: The Onboarding Agent
The Onboarding agent activates new customers. It runs when a deal closes and handles the first 30 days of the customer relationship — the period that determines long-term retention.
What it does
- Triggers when a deal is marked closed-won in the CRM
- Sends a personalized welcome message referencing what the customer said they wanted to achieve
- Assesses the customer's background from the prospect record (are they technical? what's their experience level? what's their primary use case?)
- Recommends a specific learning path rather than dumping a generic "getting started" link
- Schedules milestone check-ins at day 3, day 7, and day 30
- Monitors early activation signals and adjusts the sequence if someone is ahead of or behind expected progress
Tools it needs
const onboardingTools = [
  "get_customer_record", // full CRM record including deal notes
  "get_product_usage_data", // what features have they touched so far
  "send_email", // welcome and follow-up messages
  "create_learning_path", // generate a personalized curriculum
  "schedule_checkin", // queue milestone messages
  "log_onboarding_event", // track onboarding progress for reporting
  "flag_at_risk_customer", // alert human CSM if activation is stalling
];
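The day-3/7/30 cadence and the at-risk flag can both be plain deterministic helpers that the agent calls, rather than logic it reasons about. A sketch — the zero-usage-by-day-7 threshold is an assumption, not from the source:

```typescript
// Milestone check-in days from the closed-won date.
const MILESTONE_DAYS = [3, 7, 30] as const;
const DAY_MS = 24 * 60 * 60 * 1000;

function scheduleCheckins(closedAt: Date): { day: number; sendAt: Date }[] {
  return MILESTONE_DAYS.map((day) => ({
    day,
    sendAt: new Date(closedAt.getTime() + day * DAY_MS),
  }));
}

// Hypothetical at-risk rule: no product usage by day 7 triggers a
// flag_at_risk_customer call so a human CSM can step in.
function isAtRisk(daysSinceClose: number, activationEvents: number): boolean {
  return daysSinceClose >= 7 && activationEvents === 0;
}
```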
Real-World Performance Metrics
These numbers come from teams running agent-assisted sales pipelines, not from vendor marketing materials.
Prospector performance: Teams running automated prospecting report 2 to 3 times the lead throughput compared to manual research, with comparable or better lead quality when the ICP definition is tight. The agent does not get tired at the 50th prospect the way a human does.
Outreach conversion: Personalized, research-backed cold emails written by agents with good enrichment data consistently achieve 15-25% reply rates on targeted lists. The key variable is enrichment depth — emails written without enrichment data fall to 3-7% reply rates, which is worse than templated sequences run by experienced SDRs.
Reply handling speed: Agents respond to incoming replies in seconds rather than hours. For interested prospects who replied during off-hours, this is a meaningful competitive advantage. Response time is a significant factor in conversion, and most sales teams cannot cover it around the clock.
Onboarding activation: Personalized learning paths with milestone check-ins improve 30-day activation rates by 20-35% compared to generic onboarding sequences, primarily because they reduce the time-to-first-value by eliminating irrelevant content.
Common Mistakes
Over-automating and losing the personal touch
The mistake is treating the agent as a replacement for human judgment rather than a multiplier of human capacity. A fully automated pipeline — where no human ever reviews a reply or makes a judgment call — will eventually send something wrong at the worst possible moment. One bad email to a high-value prospect can close the door permanently. Use agents to do the volume work and to surface the right things at the right time to humans, not to remove humans from consequential decisions.
Skipping lead enrichment before outreach
Sending email without enrichment data is the most common and most costly mistake in agent-driven outreach. The agent cannot write a genuinely personalized email from a name, title, and company alone. You need company news, the person's stated priorities, their tech stack, recent hiring signals — something specific to anchor the message. Teams that skip enrichment to save cost spend that savings on lead list churn when reply rates collapse.
Sending too many emails per contact
Automated sequencing makes it easy to load up a cadence with five, six, seven touchpoints. The data does not support that. Three well-spaced, well-written emails outperform five mediocre ones. The third email in a sequence should be a clean break-up message that leaves the door open — not another nudge. Prospect patience for automation sequences has declined as AI outreach has proliferated. Shorter, better sequences are the correct response.
Not tracking sequence state per contact
If a prospect replies to email two, the system must stop email three from going out. This sounds obvious but requires explicit sequence state management. Every outreach record needs to include which emails have been sent, which have been opened, whether a reply has been received, and what the current sequence status is. Agents without access to this state will send follow-ups after a prospect has already engaged and the relationship will start with a bad signal.
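A minimal sketch of that per-contact state and the guard the scheduler must call before firing the next step — field names are assumptions:

```typescript
// Hypothetical per-contact sequence state record.
interface OutreachRecord {
  prospectId: string;
  emailsSent: number; // how many sequence steps have fired
  lastSentAt?: Date;
  replied: boolean;
  unsubscribed: boolean;
  status: "active" | "paused" | "completed" | "suppressed";
}

// The scheduler checks this before sending; a reply or unsubscribe at any
// point stops the sequence cold.
function shouldSendNextStep(record: OutreachRecord, maxSteps = 3): boolean {
  if (record.replied || record.unsubscribed) return false;
  if (record.status !== "active") return false;
  return record.emailsSent < maxSteps;
}
```

The guard lives in the scheduler, not the agent prompt, so a follow-up physically cannot fire after a reply has been recorded.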
Human-in-the-Loop: What to Automate vs. What Needs Human Judgment
This is the most important decision you will make when designing a sales agent system. Get it wrong in either direction and you pay for it.
Automate fully:
- Lead research and enrichment
- ICP matching and lead scoring
- Scheduling and send timing optimization
- Logging and CRM updates
- Sequence state management
- Out-of-office detection and re-queuing
- Clear rejection handling (suppress from sequence, log outcome)
- Initial onboarding email delivery
Automate with human review gate:
- First cold email drafts (review the first batch per campaign, spot-check ongoing)
- Replies to interested prospects (agent drafts, human reviews and sends)
- Replies to objections (agent drafts with suggested talking points, human sends)
- Learning path recommendations for onboarding (agent generates, human approves or adjusts)
Keep fully human:
- Defining and updating the ICP
- Approving the outreach angle and messaging strategy
- Initial call or meeting with a qualified prospect
- Pricing and commercial negotiations
- Any communication with a prospect who has expressed frustration
- Decisions about when to write off a lead permanently
The pattern is consistent: automate research, scheduling, drafting, and logging. Keep humans on the send decision for warm leads, and keep humans entirely in charge of strategy and relationship-critical moments.
Key Takeaways
Build four specialized agents, not one general sales agent. The Prospector, Outreach, Reply Handler, and Onboarding agents each have a narrow job and specific tools. Combining them into a single agent creates a system that is hard to debug, hard to improve, and prone to doing too much in a single context window.
Lead scoring belongs in your code, not in the LLM. Define explicit scoring logic with thresholds and disqualification rules. Use the agent to gather signals, and use deterministic functions to turn those signals into scores. This gives you consistent, auditable results that you can tune without rewriting prompts.
Enrichment is not optional. The quality of your outreach is bounded by the quality of your prospect data. Skipping enrichment to save cost will cost you more in burned leads and collapsed reply rates than the enrichment ever would have.
Set hard human-review gates for warm leads. Interested prospects deserve a human in the loop. The agent's job at this stage is to draft fast and surface context — not to close the deal autonomously. Build the gate into the architecture, not as an afterthought.
Track sequence state per contact. Every prospect needs an explicit record of where they are in the sequence and what has happened. Agents without this context will fire follow-ups at prospects who have already replied, which is the fastest way to destroy a relationship before it starts.
Measure what matters. Reply rate, meeting booked rate, and onboarding activation rate are the metrics that tell you whether the system is working. Open rates are a vanity metric in this context. If your reply rates are below 10%, the problem is almost always enrichment depth or ICP fit — fix those before adjusting the agent's writing.
The teams that get the most out of sales automation agents are not the ones who automate the most. They are the ones who are most precise about where automation adds value and where human judgment is irreplaceable. Build with that clarity and the system will compound over time.