Building with Claude #4

Start with the problem, not the pack: why most Claude Code starter kits miss the point

February 2026 · 8 min read

A client recently sent me a founder's skill pack he found online. Twelve Claude Code skills covering everything from product management to fundraising to legal compliance. Seventy-six files. Twenty-two thousand lines of frameworks, templates, and best practices.

His message: "I haven't had time to dive into making my own skills, and this caught my eye. Could it be useful?" I took a deep look at it. Here's what I found, and why it changed how I think about Claude Code starter kits.

The TL;DR: it's better to add capabilities as you build instead of trying to get everything you might need at the beginning. Start with a concrete business goal and let your tools (e.g. Claude Code, Codex, etc.) guide you on what you'll need. They can find published skills and frameworks in real time and help you pick the best one for your specific situation. This approach keeps you focused, avoids clogging your context window (developers have measured MCP tools alone consuming 33-41% of available context before work even starts, and skills add to that total when triggered), and burns fewer tokens.

In this article:

  • The pack: what's actually in there
  • The problem with comprehensive
  • What Claude Code can already do on its own
  • Start with the problem, reverse-engineer the skill
  • What actually works: the mini-project approach
  • The real differentiator isn't knowledge, it's integration
  • The practical takeaway

The pack: what's actually in there

The skill pack came from a FinTech newsletter creator with over 100,000 subscribers. It's well-made. Each skill has a structured SKILL.md file with trigger descriptions, diagnostic workflows, and reference frameworks. The product skill references Kevin Hale's YC talks. The sales skill covers B2B pipeline management. The business model skill walks through unit economics using April Dunford's positioning methodology. There are many packs like this out there, but we'll use this one as a concrete example.

The twelve skills:

  1. Business Model - pricing, unit economics, competitive positioning
  2. Product - PRDs, roadmaps, user story mapping
  3. Sales - B2B pipeline, objection handling, forecasting
  4. Operations - hiring playbooks, OKRs, board management
  5. Marketing & Brand - brand strategy, content planning
  6. Go-to-Market - launch sequencing, channel strategy
  7. Growth & Analytics - metrics dashboards, cohort analysis
  8. Finance & Accounting - financial models, burn rate, P&L templates
  9. Fundraising - pitch deck structure, investor outreach
  10. Customer Success - onboarding, churn reduction, NPS
  11. Legal & Compliance - term sheets, IP protection, employment law
  12. Idea Validation - market sizing, competitor mapping, MVP scoping

Some include Excel and Word templates. The content quality is genuinely good. If you read through all seventy-six files, you'd come away with a solid MBA-level overview of startup operations.

That's the problem.

The problem with comprehensive

I've been tracking what people actually say about these packs online, and the consensus is striking: "Start with nothing, add as you go."

Users report the same pattern:

  • Large packs are impressive when you first download them
  • You scan through, feel good about having them, then close the folder
  • When a real problem comes up three weeks later, you've forgotten what's in there
  • You end up asking Claude Code to help you from scratch anyway
  • The 2-3 skills you do use regularly are ones you built yourself for a specific workflow

There's also a practical cost to loading everything at once. Claude Code's context window is the AI's working memory for your conversation, and every skill, MCP server, rule file, and project configuration competes for that space. (The standard window is 200K tokens, though Anthropic is rolling out extended windows up to 1M tokens in beta. Either way, the principle holds: more loaded context means less room for actual work.)

The data we have is primarily about MCP tools, but the dynamics apply broadly. One developer documented their MCP tools consuming 66,000 tokens before the conversation even started - a third of the available window gone before typing a single prompt. Another found 82,000 tokens consumed across 13 servers - 41% of context.

Skills behave differently from MCPs: they load progressively, with the full SKILL.md pulled into context only when triggered. But rules, project configuration (CLAUDE.md), and any always-on tools are loaded from the start. Stack twelve skills on top of those, and each one that fires adds to the running total - frameworks you're not using, squeezing the space available for the work you're actually doing.

This isn't theoretical. More tokens consumed means higher API costs, faster context exhaustion, and eventually the model starts dropping earlier parts of your conversation to fit new information. The lean approach: load what you need, when you need it. It's not just tidier. It's cheaper and more effective.

This matches what I've seen with clients. The founder who sent me this pack? He's a GM coordinating between a CEO, multiple contractor teams, and a Trello board. He doesn't need a skill for "board management frameworks." He needs his Trello board to generate a daily status brief that tells him what's overdue and what decisions are stuck.

That's a very different thing.

The skill pack gives you a framework for thinking about operations. What he actually needs is a working automation that pulls live data from his project management tool every morning. No amount of OKR templates gets him there.

What Claude Code can already do on its own

Here's something that gets overlooked in the rush to collect skills: Claude Code is already extraordinarily good at research and synthesis without any skills loaded.

Ask Claude Code to "help me build a pricing model for my SaaS product" with zero skills installed. It will:

  • Ask clarifying questions about your market, costs, and competitors
  • Research current pricing benchmarks for your category
  • Build a spreadsheet model with formulas
  • Walk you through sensitivity analysis
  • Suggest pricing tiers based on your unit economics

The output will be roughly as good as what you'd get from loading the "business model" skill and asking the same question, because the skill essentially encodes knowledge the model already has access to, repackaged as a structured prompt.

If you already have a favorite framework, mention it in your prompt and Claude Code will apply it. If you don't know which framework fits your situation, ask it to compare the top approaches and help you pick one. This is actually better than a pre-loaded skill that hardcodes a single methodology, because the right framework depends on your context. A seed-stage startup pricing its first product needs different tools than a Series B company restructuring enterprise tiers. Contingency theory - the idea that there is no single best approach and the right method depends on the situation - has been well-established in management research since the 1960s. The same principle applies to AI-assisted work: the best skill for the job depends on the job.

Skills add value when they encode your specific context: your company's tech stack, your naming conventions, your approval workflows, your database schema. Generic frameworks? The model already knows those.

This is the key insight that most skill pack creators miss. A skill that says "when the user asks about pricing, walk them through value-based pricing methodology" is doing what the model would do anyway. A skill that says "when the user asks about pricing, check our NocoDB table for current customer segments, pull margin data from the finance sheet, and draft a proposal using our standard template with the Cal.com booking link" - that's a skill worth having.
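To make the contrast concrete, here's a sketch of what such a context-encoding skill file might look like. It follows the SKILL.md convention (YAML frontmatter with a name and trigger description, instructions below), but the table names, file paths, and booking link are hypothetical stand-ins borrowed from the example above, not a real pack's contents:

```markdown
---
name: pricing-proposal
description: Use when the user asks about pricing, discounts, or drafting a customer proposal.
---

# Pricing proposal workflow

1. Query the `customer_segments` table in our NocoDB instance for the prospect's segment.
2. Pull current margin data from the finance sheet at `finance/margins.xlsx`.
3. Draft the proposal from `templates/proposal.md`, keeping our standard tiers.
4. Close with our Cal.com booking link so the prospect can schedule a review call.
```

Notice that almost every line points at something the model could never know on its own: your data, your files, your links. That's the part worth encoding.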

Start with the problem, reverse-engineer the skill

Here's the approach that's worked for us, both internally and with clients:

Step 1: Identify the specific bottleneck. Not "I need better operations." What specific task eats your time? What breaks when you get busy? Where do things fall through the cracks?

For the GM who sent me the skill pack, the answer was clear from our discovery call: he spends hours each day chasing updates across email, WhatsApp, and Trello. The CEO changes priorities mid-week. Nobody knows what's actually happening across workstreams.

Step 2: Define the output. What would "solved" look like? Not a framework. A concrete artifact.

For him: a daily email at 8am showing traffic-light status per workstream, overdue items flagged, decisions that need the CEO's input queued up, and today's priorities listed. Something he can forward to the CEO and say "this is where we are."

Step 3: Build backward from there. What data sources feed that output? What integrations are needed? What logic determines "red" vs "green"? What happens when a priority changes?

Now you have the requirements for a skill. Or more precisely, for a system: a skill (the Claude Code prompting layer), plus integrations (Trello MCP), plus automation (n8n workflow for daily email delivery), plus a data layer (where decisions and priorities get tracked).

Step 4: Build the skill around the workflow, not the other way around. Write the SKILL.md that encodes your specific board structure, your workstream categories, your CEO's decision patterns. Not "how to do operations management" but "how to pull Sarah's marketing status from the Design Sprint board and flag it yellow if no card moved in 48 hours."

One skill. Built for one problem. Connected to real data. That's worth more than twelve generic ones.

What actually works: the mini-project approach

Based on what we've built for clients and for ourselves, the pattern that delivers results is what I call mini-projects: self-contained, goal-oriented builds that solve one problem end-to-end.

A mini-project has:

  • A clear objective you can explain in one sentence ("Automate my daily team status brief")
  • A working result within days, not weeks
  • Real integrations connected to your actual tools (not sample data)
  • A feedback loop so you can iterate once you see it working

Compare that to a skill pack:

Skill Pack                               Mini-Project
12 generic skills                        1 targeted build
Teaches frameworks                       Delivers a working system
"Learn about operations"                 "Here's your ops dashboard, running"
Self-serve, figure it out                Guided setup, connected to your tools
Day 1: impressive. Day 30: forgotten     Day 1: working. Day 30: essential

The mini-project approach works because it aligns with how people actually adopt new tools. You don't read a manual cover-to-cover and then start using the product. You pick the one thing you need most, get it working, build confidence, then expand.

Our own business runs on this principle. We didn't start with twelve skills. We started with one: a cold outreach pipeline that researches prospects, drafts messages, and tracks responses. That worked, so we added a discovery call prep skill. Then a proposal generator. Each one built on the last, and each one solves a problem we actually had.

Six months in, we have eleven skills and sixteen rules. But we built them one at a time, each one earning its place by solving a real problem.

The real differentiator isn't knowledge, it's integration

There are now over 87,000 Claude Code skills indexed on public directories. Many are free. Newsletter creators, Substack writers, and indie developers are publishing skill packs as lead magnets. On the enterprise side, consulting firms charge $1,500+ per engagement for custom Claude Code environments.

The market is splitting into two camps:

Knowledge packagers: creators who bundle frameworks, best practices, and templates into downloadable skill packs. Their business model is content (subscriptions, audience, ads). The skills are a distribution mechanism.

Integration builders: consultants and developers who connect Claude Code to your actual operational stack and build working systems. Their business model is implementation.

Both have value. But they solve different problems. If you need to learn how fundraising works, a well-structured fundraising skill with Sequoia's pitch deck framework is useful. If you need to actually track your fundraise - investor pipeline, follow-up cadence, term sheet comparisons, data room organization connected to your actual DocSend and CRM - you need someone to build that.

The skills marketplace will keep growing. Prices will keep falling. Generic knowledge skills will trend toward free because the model already knows most of what they contain.

What won't commoditize is the work of connecting Claude Code to your Trello board, your NocoDB instance, your n8n workflows, your email system, and making all of it work together to solve your specific operational bottleneck. That's where the value is, and that's what we focus on.

The practical takeaway

If you're a founder or operator looking at Claude Code skills:

  1. Don't start by collecting skills. Start by identifying your biggest operational bottleneck.
  2. Be specific. "I need better ops" is a category. "I spend 2 hours/day chasing status updates across 4 tools" is a problem you can solve.
  3. Build one thing that works. Get it running on real data, connected to your real tools. Use it for a week. Then decide what to build next.
  4. Skills are prompts, not magic. If the skill is just encoding knowledge the model already has, it's not adding much. Skills that encode your context, your data, your workflows - those are the valuable ones.
  5. Don't confuse having tools with using them. Twelve skills in a folder is a library. One skill connected to your daily workflow is infrastructure.

The founder's skill pack my client sent me was well-made. I respect the work that went into it. But for his specific situation - coordinating a distributed team through a Trello board - it wouldn't have moved the needle. What will move the needle is a working CEO command center that pulls live data and surfaces decisions every morning.

Start with the problem. Build backward from the result you need. Add complexity only when you've outgrown simplicity.

That's the approach.

Want to know where you stand?

Take our AI readiness quiz. 10 questions, 2 minutes. You'll get a personalized recommendation for where to start with automation.