Building with Claude #5

The pipeline that runs itself: how we track prospects on a $5/month server

February 2026 · 12 min read

We sent 25 outreach emails to 16 companies. 11 opened them. One replied.

The emails were getting delivered. People were opening them. For the first two weeks, nobody responded. We had a pipeline problem and an outreach quality problem, and we didn't have the tools to tell which was which.

So we built the tools. And then the tools built us our first lead. Here's the system that now runs our entire sales operation, what it costs, and what it taught us about why cold emails actually fail.

In this article:

What the system looks like
Where the data lives
What runs automatically
Why the emails weren't working
What it costs
What we'd change

What the system looks like

The whole thing runs on one server. $4.35/month. If you're not technical, here's the short version: five programs running on a single cheap computer in a data center, talking to each other. No special hardware. No IT team. The diagram below shows how it all connects.

Outbound engine (we find them)
  Research → Enrich contacts → Draft emails → Send
  Track opens/clicks → Follow-ups → Classify replies
  Services: Hunter / Lusha, OpenRouter, Brevo

Inbound engine (they find us)
  Website quiz / Booking form / LinkedIn
  Score → Route → Auto-reply → Nurture sequences
  Services: OpenRouter, Brevo

NocoDB (CRM)
  8 tables: companies, contacts, pipeline, interactions, scoring
  All data lives here. Both engines read and write to the same database.

Orchestrated by n8n (10 workflows) on a single server. $4.35/month.

Two engines, one database. The outbound engine is where we go find potential clients: research companies, find contact info, draft personalized emails, send them, track who opens what, and auto-generate follow-ups. The inbound engine is where potential clients come to us: someone takes our quiz, fills out a booking form, or connects on LinkedIn, and the system scores them, creates a record, and sends an appropriate auto-reply. Both engines use the same email and AI services. Both read and write to the same CRM. The admin tools sit behind a secure connection so only we can access them.

No Salesforce. No HubSpot. No $200/month CRM subscription.

Where the data lives

NocoDB is a self-hosted Airtable alternative. We picked it for three reasons: zero per-seat pricing, SQL access when we need it, and prospect data that stays on our own server.

The database tracks everything:

27 companies researched, with industry, pain points, and why they're a good fit. 36 contacts with verified emails and decision-making authority. 25 active pipeline entries tracking each deal from first research to closed/lost. 69 interactions logging every email, call, and follow-up.

Total pipeline value sitting in the system right now: $98K. Weighted by probability (most of it is early-stage outreach), that's closer to $26K in realistic expected revenue.

The tables are all linked. Click on a contact, see their company's pain points, every email you've sent them, their lead score, and your last interaction. That's what you get from enterprise CRMs. We built it for free.
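
If you want to picture the schema, here's a minimal TypeScript sketch of the core record shapes and how they link. The field names are illustrative, not our exact NocoDB columns, but the relationships (and the weighted-pipeline math from above) work just like this:

```typescript
// Illustrative shapes for the core CRM tables. Field names are
// representative, not the exact NocoDB schema.

interface Company {
  id: string;
  name: string;
  industry: string;
  painPoints: string[];     // why they're a good fit
}

interface Contact {
  id: string;
  companyId: string;        // link to Company
  email: string;
  title: string;
  leadScore: number;
}

interface PipelineEntry {
  id: string;
  companyId: string;        // link to Company
  stage: "Research" | "Outreach" | "Discovery Call" | "Closed" | "Lost";
  temperature: "Cold" | "Warm" | "Hot";
  valueUsd: number;
  probability: number;      // 0..1, used for weighted pipeline value
}

interface Interaction {
  id: string;
  contactId: string;        // link to Contact
  kind: "email" | "call" | "follow-up" | "reply";
  occurredAt: string;       // ISO timestamp
  notes?: string;
}

// Weighted pipeline value, as described above: sum of value × probability.
const weightedValue = (entries: PipelineEntry[]): number =>
  entries.reduce((sum, e) => sum + e.valueUsd * e.probability, 0);
```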

What runs automatically

Ten automated workflows handle the grunt work. You don't need to understand how they're built to see the value. Here's what happens without us touching anything:

Every morning at 7am, a workflow scans Google News for UAE startup and SME stories. An AI model reads each article, extracts the company name and industry, and scores how likely they are to need what we offer. High-scoring companies get added to the pipeline automatically.
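
Under the hood, that step is essentially one API call. Here's a sketch of what it looks like against OpenRouter's chat completions endpoint; the prompt, model ID, and cutoff are illustrative assumptions, not our production values:

```typescript
// Sketch: extract the company and a fit score from an article.
// Model ID, prompt, and threshold are illustrative.

interface FitResult {
  company: string;
  industry: string;
  fitScore: number; // 0-10
}

async function scoreArticle(articleText: string): Promise<FitResult> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "google/gemini-2.5-pro", // illustrative model ID
      messages: [
        {
          role: "system",
          content:
            "Extract the company name and industry from this article, and " +
            "score 0-10 how likely the company is to need AI automation. " +
            'Reply with JSON only: {"company": "...", "industry": "...", "fitScore": 0}',
        },
        { role: "user", content: articleText },
      ],
    }),
  });
  const data = await res.json();
  return JSON.parse(data.choices[0].message.content) as FitResult;
}

// Only high scorers enter the pipeline, e.g.:
//   if ((await scoreArticle(text)).fitScore >= 7) { /* create pipeline entry */ }
```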

When a new company enters the pipeline, another workflow finds decision-maker contacts using Hunter.io (with Lusha as a backup). Verified emails, titles, LinkedIn profiles. No more manually searching for the right person to email.
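
The fallback pattern is simple: try Hunter first, only hit Lusha if nothing verified comes back. A sketch, with Hunter's domain-search response simplified and the Lusha call abstracted behind a hypothetical helper:

```typescript
// Sketch of the enrichment fallback. Hunter.io's domain-search response
// is simplified here; lookupViaLusha() is a hypothetical wrapper.

interface EnrichedContact {
  email: string;
  title?: string;
  linkedin?: string;
  source: "hunter" | "lusha";
}

async function enrichContacts(domain: string): Promise<EnrichedContact[]> {
  const url =
    `https://api.hunter.io/v2/domain-search?domain=${domain}` +
    `&api_key=${process.env.HUNTER_API_KEY}`;
  const res = await fetch(url);
  const data = await res.json();

  const verified: EnrichedContact[] = (data?.data?.emails ?? [])
    .filter((e: any) => e.verification?.status === "valid") // verified emails only
    .map((e: any) => ({
      email: e.value,
      title: e.position,
      linkedin: e.linkedin,
      source: "hunter" as const,
    }));

  if (verified.length > 0) return verified;
  return lookupViaLusha(domain); // hypothetical fallback
}

declare function lookupViaLusha(domain: string): Promise<EnrichedContact[]>;
```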

When it's time to reach out, a workflow drafts personalized emails using the company's specific pain points, the contact's role, and relevant case studies. A separate validation step checks word count, tone, and formatting before any draft is stored. We review before sending.
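
The validation step is deterministic, not another AI call. Something like this, with illustrative limits (the link check matters for reasons covered later in this article):

```typescript
// Sketch of the pre-send validation gate. Rules and limits are
// illustrative; the point is a cheap deterministic check between
// the AI draft and human review.

interface ValidationResult {
  ok: boolean;
  problems: string[];
}

function validateDraft(subject: string, body: string): ValidationResult {
  const problems: string[] = [];
  const words = body.trim().split(/\s+/).length;

  if (words > 150) problems.push(`too long: ${words} words`);
  if (/https?:\/\//i.test(body)) problems.push("contains a link");
  if (subject.length > 60) problems.push("subject over 60 characters");
  if (/\b(synergy|leverage|solutions)\b/i.test(body))
    problems.push("consulting jargon detected"); // crude tone proxy

  return { ok: problems.length === 0, problems };
}
```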

After emails go out, the system tracks delivery, opens, and clicks through Brevo webhooks. If someone opens your email three times, the system bumps them from "cold" to "warm" in the pipeline.
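
The bump itself is a small webhook handler. A sketch, assuming Brevo's event payload carries an event type and the recipient's address (exact field names depend on your webhook configuration):

```typescript
// Sketch of the engagement bump: three opens promotes a contact
// from cold to warm. Payload fields are assumptions.

interface BrevoEvent {
  event: string;  // e.g. "delivered", "opened", "click"
  email: string;
}

const openCounts = new Map<string, number>(); // in production: the CRM

function handleBrevoEvent(evt: BrevoEvent): void {
  if (evt.event !== "opened") return;
  const opens = (openCounts.get(evt.email) ?? 0) + 1;
  openCounts.set(evt.email, opens);

  if (opens >= 3) {
    setTemperature(evt.email, "Warm"); // hypothetical CRM update helper
  }
}

declare function setTemperature(email: string, temp: "Cold" | "Warm" | "Hot"): void;
```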

Every day, a follow-up engine checks for stale conversations: contacts who received an email five or more days ago with no response. It drafts personalized follow-ups that reference the original email, adjusting the angle each time. Third attempt? It writes a polite breakup email.
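
The staleness logic fits in one function. A sketch, using the 5-day window and 3-attempt cap described above:

```typescript
// Sketch of the daily staleness check: no reply after 5+ days triggers
// a follow-up; the third outbound email is the polite breakup.

interface Thread {
  contactEmail: string;
  lastOutboundAt: Date;
  replied: boolean;
  attempts: number; // outbound emails sent so far
}

const STALE_AFTER_DAYS = 5;
const MAX_ATTEMPTS = 3;

function nextAction(t: Thread, now: Date): "wait" | "follow-up" | "breakup" | "stop" {
  if (t.replied) return "stop";
  const daysSince = (now.getTime() - t.lastOutboundAt.getTime()) / 86_400_000;
  if (daysSince < STALE_AFTER_DAYS) return "wait";
  if (t.attempts >= MAX_ATTEMPTS) return "stop";
  return t.attempts === MAX_ATTEMPTS - 1 ? "breakup" : "follow-up";
}
```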

When someone replies, an AI classifier reads the response and figures out the intent: interested, asking a question, raising an objection, or just an out-of-office. Interested replies trigger an alert. Pipeline stages update automatically.
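
A sketch of that classify-and-route step. The prompt and model ID are illustrative, and the CRM helpers are hypothetical stand-ins, but the stage transition matches the behavior described here:

```typescript
// Sketch of the reply classifier: an LLM labels intent, then the
// pipeline reacts. Prompt, model, and helpers are illustrative.

type Intent = "interested" | "question" | "objection" | "out-of-office";

async function classifyReply(replyText: string): Promise<Intent> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "google/gemini-2.5-pro", // illustrative
      messages: [{
        role: "user",
        content:
          "Classify this email reply as exactly one of: interested, " +
          "question, objection, out-of-office.\n\n" + replyText,
      }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content.trim() as Intent;
}

function routeReply(intent: Intent): void {
  if (intent === "interested") {
    advanceStage("Outreach", "Discovery Call"); // hypothetical CRM helper
    alertTeam("Interested reply received");     // hypothetical alert hook
  }
}

declare function advanceStage(from: string, to: string): void;
declare function alertTeam(message: string): void;
```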

When someone takes our quiz on the website, the system creates a contact record, scores their AI readiness, and enters them into the pipeline with appropriate follow-up. Same for booking form submissions.
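
The quiz scoring is deterministic too. A sketch with illustrative weights and thresholds:

```typescript
// Sketch of the quiz intake: a simple score from the answers,
// mapped to a pipeline temperature. Weights are illustrative.

interface QuizSubmission {
  email: string;
  answers: number[]; // one value per question, e.g. 0-3
}

function aiReadinessScore(sub: QuizSubmission): number {
  const max = sub.answers.length * 3;
  const raw = sub.answers.reduce((a, b) => a + b, 0);
  return Math.round((raw / max) * 100); // normalize to 0-100
}

function temperatureFor(score: number): "Cold" | "Warm" | "Hot" {
  if (score >= 70) return "Hot";
  if (score >= 40) return "Warm";
  return "Cold";
}
```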

All ten workflows follow the same patterns: credentials are stored securely (not hardcoded), the system talks to the database over an internal private network, and a shared error-checking function tells us when something fails instead of silently dropping records. We learned that last one the hard way.
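
The shared error-checking function is just a wrapper every step runs through. A sketch (the alert hook is a hypothetical stand-in; n8n also has a built-in error-workflow mechanism for the same job):

```typescript
// Sketch of the shared error wrapper: failures raise an alert
// instead of silently dropping a record.

async function withErrorAlert<T>(
  stepName: string,
  fn: () => Promise<T>,
): Promise<T> {
  try {
    return await fn();
  } catch (err) {
    const detail = err instanceof Error ? err.message : String(err);
    await notifyFailure(stepName, detail); // hypothetical alert hook
    throw err; // re-throw so the run is marked failed, not skipped
  }
}

declare function notifyFailure(step: string, detail: string): Promise<void>;
```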

Why the emails weren't working

This is the part we didn't expect. The infrastructure was fine. Delivery rate: 100%. People were opening them. For the first 9 cold emails: zero replies.

We spent a full session diagnosing this. The answer wasn't in the pipeline. It was in the emails themselves.

We built a persona evaluation system: three synthetic personas (a brand strategist, a startup founder, and an SMB business owner) that read each email draft and score it on seven dimensions. Each persona explains their first three seconds of reading, what kills the email for them, and whether they'd actually reply.
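
Concretely, each persona fills out the same rubric: seven dimensions, each scored 0 to 5, for a maximum of 35. A sketch of the shape, with illustrative dimension names:

```typescript
// Sketch of the persona rubric. Dimension names are illustrative;
// seven dimensions at 0-5 each give the /35 totals quoted below.

const DIMENSIONS = [
  "subjectLine", "opening", "relevance", "credibility",
  "clarity", "urgency", "callToAction",
] as const;

type Dimension = (typeof DIMENSIONS)[number];

interface PersonaScore {
  persona: "brand strategist" | "startup founder" | "SMB owner";
  scores: Record<Dimension, number>; // each 0-5, max total 35
  firstThreeSeconds: string;         // what they noticed first
  dealbreaker: string | null;        // what kills the email, if anything
  wouldReply: boolean;
}

const total = (p: PersonaScore): number =>
  DIMENSIONS.reduce((sum, d) => sum + p.scores[d], 0);
```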

The first evaluation was sobering. Our best email scored 24 out of 35 from the brand strategist. The SMB owner gave most emails 10 out of 35. Only one persona out of three said they'd take any action.

Three problems:

We were asking for too much. "Can we schedule 15 minutes?" sounds reasonable to the sender. To the recipient, it's a stranger asking for a meeting. We switched to interest-based questions: "Is that the kind of bottleneck you're seeing, or am I off base?" That asks for a thought, not a calendar slot.

Links killed trust. Cold email deliverability drops when you include links. Recipients treat linked emails from unknown senders as suspicious. We removed all links from first-touch emails.

No urgency. The emails that scored highest all had a timing hook: a regulatory deadline, a conference coming up, an award just received. Without urgency, even a well-written email gets filed as "interesting, maybe later" and never comes back.

We iterated three times. Each batch was scored by the same three personas on the same rubric:

Batch 1: 1/3 personas would act · avg score ~22/35
Batch 2: 2/3 personas would act · top email 34/35
Batch 3: 3/3 personas would act · top email 34/35

Scored by 3 synthetic personas (Gemini 3 Pro) on 7 email-specific dimensions.

The biggest single improvement was a logistics email. It went from 18/35 to 29/35 after we changed three things: pivoted the urgency from a 2027 deadline to an imminent July pilot, replaced a cross-industry case study with one from their own sector, and swapped consulting jargon for plain business language.

Are synthetic personas a substitute for real market feedback? No. But they catch obvious mistakes before you burn real contacts.

Update: the Batch 3 emails worked. We sent them on a Tuesday evening. By Wednesday morning, the reply monitor workflow caught an incoming email, classified it as "interested," auto-upgraded the pipeline entry from "Outreach" to "Discovery Call," set the temperature to "Hot," and alerted us. The CEO of a Dubai events agency wrote back: "Would love to explore further." Twelve hours from cold email to qualified lead, with zero manual intervention until the human step of replying.

That's one data point, not a trend. But the system that flagged it, classified it, and routed it worked exactly as designed. We'll report the full numbers as the sample grows.

The persona that gave us the toughest feedback was the SMB owner: "I move boxes, not patients. Block sender." That's what you get when you send a healthcare email to a logistics company. Targeting matters more than copywriting.

What it costs

Our stack:
  Server: $4.35/mo
  Domain: ~$1/mo
  Email (Brevo): $0
  CRM (NocoDB): $0
  Automation (n8n): $0
  AI calls (OpenRouter): $15-30/mo
  Contact enrichment: $0 (free tiers)
  Total: ~$20-35/month

SaaS equivalent:
  HubSpot Starter: $20-50/mo
  Instantly (email): $30-97/mo
  Apollo (prospecting): $49-99/mo
  Clay (enrichment): $149-349/mo
  Zapier (automation): $20-69/mo
  Total: $268-664/month

The self-hosted stack costs 5-10% of the SaaS equivalent. The trade-off: it took about a week of focused sessions to build, and we handle our own maintenance. In practice, that means checking on the server once a week and updating containers when needed. It's not zero-effort, but it's not a second job either. For a solo consultant or a small team running lean, that trade-off makes sense. And if you don't want to manage it yourself, that's exactly the kind of setup we build for clients.

What we'd change

Start with the evaluation system, not the infrastructure. We built the entire pipeline, sent 9 cold emails, got zero replies, then figured out the emails were the problem. If we'd tested drafts against synthetic personas first, we'd have caught the CTA and urgency issues before sending anything.

Use the right AI model for feedback. We started evaluating emails with GPT-4o, which gave shallow, repetitive commentary. Every email got nearly the same feedback. When we switched to Gemini 3 Pro, the personas came alive: the SMB owner talked about freight paperwork, the startup founder asked for technical specifics. That granularity is what makes the evaluation useful.

Build follow-ups from day one. Our first batch of emails had no automated follow-up. By the time we built the follow-up workflow, the window for timely responses on many contacts had already passed.

Separate the tools from the website. Our CRM runs on the same server as the website, so a traffic spike on the site slows the tools down too. Not a problem yet at our scale, but it's an architectural weakness we know we'll need to fix.

The pipeline is running. 27 companies tracked, 36 contacts enriched, 25 active deals in progress, and one qualified lead from the first batch of improved emails. The infrastructure costs less than a single SaaS subscription. And the evaluation system that predicted engagement actually predicted engagement.

The system we built to solve our own problem is the same system we help clients build. We packaged the rules, skills, workflow specs, database schema, and deployment config into a sales pipeline starter kit: 20 files that give you the blueprint for everything described in this article. Get the pipeline starter kit (email required, so we can follow up with tips on getting it running).

Want to know where you stand?

Take our AI readiness quiz. 10 questions, 2 minutes. You'll get a personalized recommendation for where to start with automation.