Case Study

When iteration speed changes what you can attempt

March 2026 · 10 min read

We talked to 70 potential customers in six weeks. Two signed enterprise pilots. Then we almost lost them because we couldn't build fast enough.

This isn't a story about tools. It's about what happens when the loop between "customer tells you something" and "customer sees the result" collapses from weeks to days. At Lexacon.AI, that compression didn't just make us faster. It changed our product strategy, our business model, and what we thought was possible with a two-person technical team.

I'm writing this on the Black Hills Labs blog because Lexacon is the proof behind everything I advise clients on. When I tell a startup they can build production AI systems with a small team, this is the case study. Not theory. The actual build.


The gap that almost killed our pilots

We started Lexacon through the Antler Entrepreneur-in-Residence program in April 2025. The first six weeks were pure customer discovery: 70 interviews, rapid prototyping between meetings. We'd talk to three companies on Monday, rebuild the demo Tuesday, show a different version Wednesday. By the end, 15 companies wanted to keep talking and two signed enterprise pilot agreements.

Then we had to actually build the thing.

Lexacon is an AI document intelligence platform for construction project commercial managers and contract administrators. The kind of software where someone dumps 5,000 documents from Dropbox or SharePoint and expects the system to ingest them all, extract the text (OCR for scanned PDFs, parsing for Word and Excel, processing for images), chunk everything, pull out key data points, and then answer questions about construction contracts, payment certificates, and subcontractor agreements accurately and fast. Enterprise-grade, deployed in private cloud environments, passing security assessments.
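The first stage of a pipeline like this is usually a format dispatcher: route each file to the right extractor before anything gets chunked or embedded. Here's a minimal sketch of that idea in Python; the extractor names and return values are illustrative stand-ins, not Lexacon's actual code.

```python
from pathlib import Path

# Hypothetical extractor stubs; a real pipeline would call OCR,
# Office-format parsers, and image processing behind these names.
def parse_pdf_text(path): return f"pdf:{path.name}"     # falls back to OCR for scans
def parse_office(path): return f"office:{path.name}"    # Word / Excel
def describe_image(path): return f"image:{path.name}"

EXTRACTORS = {
    ".pdf": parse_pdf_text,
    ".docx": parse_office,
    ".xlsx": parse_office,
    ".png": describe_image,
    ".jpg": describe_image,
}

def extract_text(path: Path) -> str:
    """Route each document to the right extractor by file type."""
    handler = EXTRACTORS.get(path.suffix.lower())
    if handler is None:
        raise ValueError(f"Unsupported format: {path.suffix}")
    return handler(path)
```

The dispatch table makes adding a new format a one-line change, which matters when customers dump whatever happens to be in their Dropbox.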

During pilots, we worked closely with our customers to prioritize features. Weekly check-ins. Direct feedback loops. We were building exactly what they asked for. The process was right. The constraint was speed.

With 1.5 developers using Cursor-assisted coding, real features took full two-week sprints. Sometimes longer. We were learning production architecture patterns for the first time: chunking strategies, retrieval pipelines, multi-tenancy. Six months of work produced roughly one-third of what we needed.

The problem wasn't quality. It was the gap between conversation and delivery. Our customers would tell us on Monday what they needed. Two weeks later, we'd show them a working version. By then, they'd moved on to a different problem. Or they'd refined their thinking and the feature we built wasn't quite what they meant anymore.

We could show front-end demos quickly using Lovable. But demos aren't features. Our customers needed to run real documents through real pipelines and get real answers. That required backend work, and the backend was where our speed bottleneck lived.

I could feel the engagement cooling. Fewer messages between check-ins. Shorter meetings. Their team stopped sending us test documents. The pilots weren't dying. But the window was narrowing.

How we got fast enough to survive

In October 2025, we switched to agentic development with Claude. Getting productive was fast. Building usable features during the pilots was genuinely smooth. But I want to be honest about what was hard, because the difficulty wasn't where I expected it.

The tool wasn't the bottleneck. I was.

I'm not a software engineer by training. I've worked with hundreds of software engineers as a business analyst, project manager, change manager, and tester. I've personally built data pipelines that helped deliver hundreds of millions of dollars in savings, but I'd never built a full production backend with retrieval across tens of thousands of documents. I knew what robust CI/CD looks like for a complex application on a diagram, but not how it gets built and troubleshot. I was simultaneously product owner, designer, backend developer, and tester. Claude could generate code fast, but I had to learn what questions to ask, what "production quality" actually meant for enterprise software, and how to specify what I wanted precisely enough for the output to be deployable.

The approach that made it work was spec-first development. We started every feature with the business goal: what does this need to achieve for the customer? That goal shaped the specification. The specification guided Claude's implementation and gave us robust test cases to verify the result was working. When the spec was clear, the output was strong. When I was vague, the output reflected that. The discipline was in the planning, not the coding.
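One concrete way to run that loop is to make the spec's acceptance cases executable, so "done" is checkable rather than debatable. The sketch below assumes a toy spec format and a keyword-based `classify()` stand-in; none of these names are Lexacon's real code.

```python
# Spec-first sketch: business goal -> spec -> acceptance cases that
# double as the test suite. All names here are illustrative.
SPEC = {
    "goal": "Tag each uploaded document with its commercial type",
    "acceptance": [
        ({"filename": "payment_cert_08.pdf"}, "payment_certificate"),
        ({"filename": "subcontract_electrical.docx"}, "subcontractor_agreement"),
    ],
}

def classify(doc):
    """Stand-in implementation: filename keywords in place of a model."""
    name = doc["filename"].lower()
    if "cert" in name:
        return "payment_certificate"
    if "subcontract" in name:
        return "subcontractor_agreement"
    return "other"

def passes_spec(spec, impl):
    """Run every acceptance case in the spec against the implementation."""
    return all(impl(doc) == want for doc, want in spec["acceptance"])
```

With this shape, "when the spec was clear, the output was strong" becomes mechanical: the agent iterates until `passes_spec` is true, and a vague spec shows up as missing cases rather than as a surprise in production.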

The ingestion pipeline is where the iteration was most brutal. Getting the system to automatically process 5,000 mixed-format documents (text PDFs, scanned PDFs needing OCR, Word documents, Excel spreadsheets, images), extract meaningful data, and then provide accurate, non-hallucinated answers about construction administration topics through our retrieval and synthesis agent? That took enormous iteration. How hard can it be to get the exact payment amount from the latest payment certificate? Not as easy as you'd think. Not because the tool couldn't do it, but because the trade-offs between speed, quality, and complexity were very real, and I was learning how to resolve them while building.
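The "latest payment certificate" question is a good example of why. "Latest" is a metadata query, not a similarity query: semantic search alone happily returns an older certificate that merely sounds relevant. One way to handle it is to filter and sort on fields extracted at ingestion time before any synthesis happens. The chunk schema below is an assumption for illustration, not Lexacon's actual index.

```python
from datetime import date

# Illustrative chunk index; field names are assumed, not Lexacon's schema.
CHUNKS = [
    {"doc_type": "payment_certificate", "cert_date": date(2025, 9, 1),
     "text": "Certificate No. 7: amount certified $120,000"},
    {"doc_type": "payment_certificate", "cert_date": date(2025, 10, 1),
     "text": "Certificate No. 8: amount certified $135,500"},
    {"doc_type": "contract", "cert_date": None,
     "text": "Payment terms: certificates issued monthly"},
]

def latest_certificate_chunks(chunks):
    """Answer 'latest certificate' with metadata filtering, not
    semantic similarity: filter by type, then take the newest date."""
    certs = [c for c in chunks if c["doc_type"] == "payment_certificate"]
    if not certs:
        return []  # no certificate present: return nothing, don't guess
    newest = max(c["cert_date"] for c in certs)
    return [c for c in certs if c["cert_date"] == newest]
```

Returning an empty list when no certificate exists is the non-hallucination half of the problem: the synthesis agent can only cite chunks it was actually given.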

But that's also the proof: someone who had never built a retrieval system before was able to build one that processes 20,000+ documents, handles 80,000+ embedded chunks, connects to Dropbox and SharePoint, and delivers meaningful specialist answers. With a team of two. The learning curve was steep but the ceiling was high.

Once the patterns clicked, the speed was real. We rebuilt the entire six months of previous work in approximately one month, with a better architecture. Then built twice as much on top. The full 12-month roadmap completed in three months.

A document reconciliation pipeline that compares invoiced amounts against contract terms across hundreds of documents? Sprint commitment before. Shipped in two days.
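The core of a reconciliation pass like that is simple to state: join invoice lines to contract terms and flag anything that doesn't match within a tolerance. This is a toy sketch with assumed field names, not the shipped pipeline, but it shows the shape of the check.

```python
# Hedged sketch of invoice-vs-contract reconciliation. The field
# names ("item", "amount") are illustrative assumptions.
def reconcile(invoices, contract_terms, tolerance=0.01):
    """Return invoice lines that are missing from the contract or whose
    amount differs by more than `tolerance` (fraction of contracted amount)."""
    terms = {t["item"]: t["amount"] for t in contract_terms}
    issues = []
    for inv in invoices:
        contracted = terms.get(inv["item"])
        if contracted is None:
            issues.append({**inv, "issue": "not in contract"})
        elif abs(inv["amount"] - contracted) > tolerance * contracted:
            issues.append({**inv, "issue": f"contract says {contracted}"})
    return issues
```

The hard part in practice isn't this comparison; it's the extraction upstream that turns hundreds of messy documents into clean `item`/`amount` pairs, which is exactly what the ingestion work had already paid for.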

The conversation that changes when you ship daily

The first thing that changed was the weekly check-in dynamic.

Instead of showing progress toward a feature, we showed the finished feature. Sometimes two or three of them. Our pilot customer's team went from reviewing our roadmap to actually using the software. They started finding edge cases, which meant they were testing it against real work. That's a fundamentally different kind of feedback than "looks good, ship it."

The second thing that changed was our ability to respond to what we learned. When a customer said "this workflow doesn't quite work for how we actually process payment certificates," we didn't add it to the backlog. We rebuilt the workflow and showed them the next version the following day.

That responsiveness is what kept the pilots alive. Not the raw speed. The fact that the loop between feedback and result collapsed from weeks to days. The engagement signals reversed: longer meetings, more messages between check-ins, their team sending us document sets to process unprompted.

There's something that happens when customers see you shipping real features that fast. They stop thinking about your roadmap and start thinking about what's possible beyond it. That shift in conversation quality turned out to be more valuable than the speed itself.

The feature our customer invented

One of our pilot customers had been watching us iterate for a few weeks. Their project manager said something in a weekly call that changed everything: "Can we just connect to your system directly? We don't want to wait for you to build screens for every analysis we need. We want to query our documents ourselves."

What they were describing was direct API access to our AI stack. We built it as MCP (Model Context Protocol) integration, so their team could use their own AI assistants to interact with Lexacon's document processing, search, and analysis capabilities directly. Ingest documents, classify them, extract key data, run custom analysis, all without waiting for us to build a UI for each use case.
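The essence of that integration is a registry of named, callable tools that an AI assistant can discover and invoke with arguments. The toy registry below illustrates the shape; it is not the real MCP SDK, and the tools and corpus are placeholders. A production server would expose the same operations through the MCP protocol itself.

```python
# Toy tool registry in the spirit of an MCP server. Not the real MCP
# SDK; tool bodies are placeholders for Lexacon's actual pipelines.
TOOLS = {}

def tool(fn):
    """Register a function so a client can discover and call it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def search_documents(query: str) -> list:
    # Placeholder corpus; the real tool would hit the retrieval pipeline.
    corpus = ["payment certificate no. 8", "electrical subcontract"]
    return [d for d in corpus if query.lower() in d]

@tool
def classify_document(name: str) -> str:
    return "payment_certificate" if "certificate" in name else "other"

def call(tool_name: str, **kwargs):
    """What a client does: pick a tool by name, invoke it with arguments."""
    return TOOLS[tool_name](**kwargs)
```

The point of the pattern is that each new capability is just another registered function: the customer's assistant composes them into analyses we never had to design screens for.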

That request only happened because they'd seen us deliver fast enough to believe we could build it. If we'd still been shipping features every two weeks, the customer would have stayed in "review the roadmap" mode. They wouldn't have started imagining what's possible beyond the roadmap, because the roadmap itself would have felt slow.

The insight wasn't technical. It was strategic. Our customer didn't want a tool with a fixed set of features. They wanted infrastructure they could build on top of. Speed didn't just compress our timelines. It changed the quality of the conversation, and that conversation produced the feature that now defines our platform.

When speed changes strategy

The MCP integration led to a bigger shift in how we thought about the business.

Before, our model was straightforward enterprise SaaS: sign pilots, prove value, expand seats, hire engineers, build more features, repeat. We needed each pilot's revenue to fund the team growth to service the next customer. That's a viable path, but it's constrained. Every new customer requires proportional team growth.

After MCP, the equation changed. If customers could access our AI stack directly, they didn't need us to build every feature. They could create their own analyses, reports, and workflows using our document processing infrastructure. That meant a small team could maintain and improve the core platform while customers built on top of it.

That realization is what gave us the confidence to open Lexacon to smaller teams and individual users (deploying this week!). Not because we lowered the product quality, but because the architecture made it feasible to serve more customers without proportionally scaling the team.

None of this was planned. We didn't sit down and say "let's use faster tools so we can change our business model." We were just trying to keep two pilot customers engaged. The speed let us survive the pilots. The MCP conversation happened because the speed changed what our customer was willing to ask for. And the business model shift followed from what MCP made possible.

Looking back, the customer interviews told us what to build. Working with our pilots told us how to build it. The tooling determined how fast we could close that loop. And closing it faster changed everything downstream.

The frameworks for building good products haven't changed. What's changed is how quickly you can move through them. The teams that start building that capability now are going to compound in ways that will be hard to catch later, because this isn't just about shipping faster. It's about the conversations, insights, and strategic options that only become available at a certain speed.


Alex Monegro is co-founder of Lexacon.AI and founder of Black Hills Labs, where he runs focused sprints helping startups and SMBs build AI and automation systems, starting at $3,000. If your team is hitting a similar speed wall, that's the problem he solves.

Want to know where you stand?

Take our AI readiness quiz. 10 questions, 2 minutes. You'll get a personalized recommendation for where to start with automation.