Case Notes

These are real implementation stories from service businesses that added AI agents to their operations. No vanity metrics — just what was broken, what was built, and what changed.

Case 1: Marketing Agency — Lead Response Time

Lead Gen · 4-Person Agency

The situation: A boutique marketing agency was getting 15–25 inbound leads per month from their website and referral partners. The founder handled all initial replies personally. Average response time: 8–14 hours. On busy days, some leads got no reply for 24+ hours.

What was built

Before
- Avg response: 11 hours
- Leads that ghosted: ~40%
- Founder time on triage: 6 hrs/week
- Booked discovery calls: 4/month

After
- Avg response: 1.8 minutes
- Leads that ghosted: ~18%
- Founder time on triage: 1.5 hrs/week
- Booked discovery calls: 9/month

Key insight: The agent didn't close deals. The founder still did the selling. But by the time the founder called, the lead already felt heard and was more engaged. Speed created trust before the first real conversation.

Related: Sub-5-Minute Lead Response Playbook →

Case 2: IT Services Firm — Client Onboarding Chaos

Onboarding · 12-Person Firm

The situation: An IT services company onboarded 3–5 new clients per month. Every onboarding was ad hoc: different team members asked for different information, kickoff calls ran long, and clients regularly had to re-send credentials or repeat requirements. Average time from signed contract to "actually working": 12 days.

What was built

Before
- Contract → working: 12 days avg
- Kickoff call length: 60–90 min
- Missing info requests post-kickoff: 3–5 per client
- Client satisfaction (NPS): 32

After
- Contract → working: 4 days avg
- Kickoff call length: 25 min
- Missing info requests post-kickoff: 0–1 per client
- Client satisfaction (NPS): 61

Key insight: The biggest win wasn't speed — it was consistency. Every client got the same professional experience regardless of which team member handled it. The brief became the single source of truth.

Related: Project Brief Template →

Case 3: Consulting Practice — Invisible Operations

Ops Review · Solo Consultant + 2 Contractors

The situation: A solo consultant with two contractors had no operational visibility. Revenue was "probably okay," pipeline was "a few things cooking," and project status lived in everyone's heads. The consultant avoided doing a weekly review because "it takes too long and I never know what to look at."

What was built

Before
- Weekly review: skipped most weeks
- Overdue invoices discovered: 30+ days late
- Stuck projects identified: when the client complained
- Founder decision speed: reactive only

After
- Weekly review: every Monday, 10 min
- Overdue invoices discovered: within 7 days
- Stuck projects identified: same week
- Founder decision speed: proactive, weekly

Key insight: The consultant didn't need more data — they needed less. The agent's job was to compress everything into "here's what matters this week" and make it so easy to review that skipping it felt harder than doing it.

Related: Weekly Ops Review Playbook →

Case 4: Web Development Shop — Scope Creep Margin Erosion

Scope Management · 6-Person Agency

The situation: A web dev shop consistently delivered projects 25–40% over the original hours estimate. The team said yes to small requests because "it's just 20 minutes." Across 8 active projects, those 20-minute requests added up to 30+ untracked hours per month — at the shop's billing rate, roughly $6,000 in lost margin.

What was built

Before
- Projects over scope: 80%
- Untracked hours/month: 30+
- Margin erosion: ~$6K/month
- Scope conversations: awkward, avoided

After
- Projects over scope: 25%
- Untracked hours/month: 5–8
- Additional revenue captured: ~$4K/month
- Scope conversations: routine, professional

Key insight: The agent didn't say "no" to clients — it made saying "yes, and here's the cost" effortless. Most clients respected the process. Some even appreciated the transparency because it meant the original deliverables stayed on track.

Related: Scope Creep Prevention Playbook → · Change Request Template →

Next step

Use these cases as a benchmark: identify one bottleneck in your own operations, implement one fix, and track your before/after metrics for 30 days.

Choose a playbook →