Every Claude Code user starts with the same question: why does /seo-audit work so much better than “please audit my site for SEO”?
The answer is that a skill isn’t just a prompt. It’s a structured Standard Operating Procedure with context, steps, examples, and output templates — all combined into a file that Claude loads before acting. When you understand how skills are built, you can create ones for your own domain that make Claude behave like an expert you trained yourself.
This guide shows you exactly how to do that.
What Is a Claude Code Skill?
A skill is a Markdown file stored in your `~/.claude/skills/` directory. Claude Code scans this directory at startup and makes every skill available as a slash command.
The filename becomes the slash command:
- `seo-audit.md` → `/seo-audit`
- `code-review.md` → `/code-review`
- `financial-report.md` → `/financial-report`
When you type the slash command, Claude loads the entire skill file as context before responding. The skill file can contain anything: persona, procedures, constraints, examples, output templates, and edge case handling.
The Anatomy of a Skill File
Here’s a minimal skill:
```markdown
# Code Reviewer

You are a senior software engineer conducting a thorough code review.

## What to Review

- Correctness: Does the code do what it's supposed to do?
- Security: Any injection risks, auth issues, or data exposure?
- Performance: Any obvious bottlenecks or unnecessary work?
- Readability: Is it maintainable by someone new to the codebase?

## Output Format

Provide feedback as a numbered list, most important first.
For each issue: severity (HIGH/MED/LOW), file:line reference, and fix recommendation.

## What to Ignore

- Minor style preferences (if a linter handles it)
- Hypothetical future scenarios
- Anything not in the diff
```
Save this as `~/.claude/skills/code-review.md`. Then `/code-review` is available in any Claude Code session.
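Installing a skill is just writing a file. A minimal sketch, following the `~/.claude/skills/` layout described in this guide (the heredoc body is abbreviated — paste the full skill in practice):

```shell
# Create the skills directory if it doesn't exist yet,
# then write the skill file Claude Code will scan at startup.
mkdir -p "$HOME/.claude/skills"

cat > "$HOME/.claude/skills/code-review.md" <<'EOF'
# Code Reviewer

You are a senior software engineer conducting a thorough code review.

(...rest of the skill from above...)
EOF

# Confirm the file landed where Claude Code expects it
ls "$HOME/.claude/skills/code-review.md"
```

Restart Claude Code (or start a new session) so the directory scan picks up the new file.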
The Five Components of an Effective Skill
The best skills combine five elements. Not every skill needs all five, but knowing each one lets you tune for your use case.
1. Persona
Set the role Claude should play. Be specific about the relevant knowledge and experience level:
```markdown
You are a principal-level backend engineer with 12 years of PostgreSQL experience.
You've built multi-tenant SaaS databases from 0 to 100M+ records.
You understand query planning, index design, and connection pooling deeply.
```
Vague personas produce generic responses. Specific personas produce expert-level output.
2. Context Window Priming
Tell Claude what it already “knows” before it starts:
```markdown
## Project Context

This is a B2B SaaS product. We use PostgreSQL 16 via Supabase.
All tables have RLS enabled. Every query must filter by organization_id.
We prefer raw SQL — no ORM abstractions.
Monetary amounts are stored as integers (cents, never floats).
```
This context runs invisibly before every invocation — Claude doesn’t need to ask you to repeat it.
3. Step-by-Step Procedure
Break the work into explicit steps. Claude follows explicit procedures far more reliably than vague instructions:
```markdown
## Process

1. First, understand what the code is trying to accomplish (ask if unclear)
2. Check correctness — does the logic actually work for the expected inputs?
3. Check for security issues: injection, over-exposure, missing auth checks
4. Review query efficiency and index usage
5. Look for error handling gaps
6. Check test coverage for the changed logic
7. Write your review — critical issues first, nice-to-haves last
```
Numbered steps prevent Claude from skipping phases or jumping to conclusions.
4. Output Template
Define exactly what the output should look like:
```markdown
## Output Format

### Summary
One paragraph describing the overall quality and key concerns.

### Critical Issues (must fix before merge)
- [SECURITY] file.ts:42 — describe the issue and the exact fix

### Improvements (should fix)
- [PERFORMANCE] file.ts:89 — describe the optimization

### Suggestions (nice to have)
- [READABILITY] explain the improvement

### Verdict
APPROVE / REQUEST CHANGES / NEEDS DISCUSSION
One sentence explaining the verdict.
```
An output template means every invocation returns a consistent, usable format.
5. Edge Cases and Constraints
Explicitly handle the scenarios where Claude goes off-track:
```markdown
## Do Not

- Flag issues in files not part of this diff
- Suggest refactoring code outside the scope of this PR
- Add general "clean code" lectures — focus on actual issues
- Mark something as high severity without explaining the specific risk

## If the Diff Is Empty

Ask the user to share the diff before proceeding.

## If You're Unsure About Intent

Ask one clarifying question before reviewing — don't assume.
```
Building an Agent vs. a Skill
A basic skill describes a task. An agent is a more complex skill that coordinates multiple steps, tools, and decisions autonomously.
The key difference is the tool declarations. Skills that use tools (file reading, web search, shell commands) become agents that can act independently:
```markdown
# SEO Auditor Agent

You are an SEO specialist and technical web auditor.

## Tools You'll Use

- WebFetch to retrieve the target URL and its key pages
- Bash to run Lighthouse audits: `npx lighthouse [url] --output json`
- Read to check robots.txt, sitemap.xml, and meta tags
- Write to generate the audit report

## Audit Sequence

1. Fetch the homepage and extract: title, meta description, H1s, canonical
2. Check robots.txt exists and isn't blocking key paths
3. Verify the sitemap is present and reachable
4. Run Lighthouse for performance scores
5. Check Core Web Vitals in the JSON output
6. Identify top 5 issues by estimated SEO impact
7. Write a prioritized report to `seo-audit-[domain]-[date].md`
```
This agent doesn’t just answer — it fetches, analyzes, runs tools, and produces a file.
Advanced Patterns
Chaining Skills with Direct Invocation
Skills can reference other skill names in their instructions. Users can chain them naturally:
```markdown
# Full Launch Audit

Run the following in order, summarize all findings at the end:

1. /seo-audit — technical SEO check
2. /security-scan — OWASP top 10 review
3. /performance-check — Core Web Vitals and bundle size
4. /accessibility-check — WCAG 2.1 AA compliance

After running all four, provide a launch go/no-go recommendation.
```
Parameterized Skills
Accept user input via the text that follows the slash command:
```markdown
# Competitive Researcher

The user will provide a company name after the slash command.
Example: `/competitor-research Stripe`

## Research Steps

1. Research the company: business model, pricing, target market, key differentiators
2. Search for recent news: funding, product launches, acquisitions, controversies
3. Find their main value propositions from their website
4. Identify their key weaknesses from customer reviews (G2, Capterra, Reddit)
5. Map their features against ours

## Output: Competitive Brief

Structured with: Overview, Pricing, Strengths, Weaknesses, Our Opportunity
```
When the user types `/competitor-research Stripe`, Claude receives both the skill and “Stripe” as context.
Domain-Specific Memory
Include lookup tables, reference data, or business rules directly in the skill:
```markdown
# Pricing Configurator

## Our Product Tiers

| Tier | Price | Limits |
|------|-------|--------|
| Starter | $29/mo | 5 users, 1 project |
| Growth | $99/mo | 25 users, unlimited projects |
| Enterprise | Custom | Unlimited, SSO, SLA |

## Discount Rules

- Annual billing: 20% off
- Non-profits: 40% off (requires verification email)
- Startups (<2 years, <$2M raised): 50% off first year
- No stacking discounts — apply the highest applicable discount

When a user asks about pricing or wants a quote, use this table exactly.
Never quote prices not in this table or invent discounts.
```
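The no-stacking rule is the part Claude is most likely to get wrong, so it helps to spell out the arithmetic yourself. A minimal sketch in shell, using the cents convention and the example tiers above (`best_discount` is an illustrative helper, not part of any real API):

```shell
# Apply the "no stacking" rule: pick the single highest discount.
# Prices are integer cents, per the convention in this skill.
best_discount() {
  max=0
  for d in "$@"; do
    [ "$d" -gt "$max" ] && max=$d
  done
  echo "$max"
}

price_cents=9900                # Growth tier: $99/mo
d=$(best_discount 20 40)        # eligible for annual (20%) and non-profit (40%)
quote=$(( price_cents * (100 - d) / 100 ))
echo "$quote"                   # → 5940 cents = $59.40/mo (40% only, not 40+20)
```

Including a worked example like this in the skill itself gives Claude a concrete anchor to imitate when quoting.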
Organizing Your Skill Library
As your library grows, organization matters:
```text
~/.claude/skills/
├── dev/
│   ├── code-review.md
│   ├── security-scan.md
│   └── db-optimize.md
├── marketing/
│   ├── seo-audit.md
│   ├── ad-copy.md
│   └── competitor-research.md
└── finance/
    ├── budget-analysis.md
    └── invoice-review.md
```
Subdirectories work — Claude Code recursively loads all `.md` files in `~/.claude/skills/`. The slash command uses the filename only (not the path), so `dev/code-review.md` is still invoked as `/code-review`.
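A quick way to audit which slash commands your library currently defines — a sketch assuming the recursive `~/.claude/skills/` layout shown above:

```shell
# Ensure the directory exists, then list every skill file and the
# slash command its filename produces (basename only, per the rule above).
mkdir -p "$HOME/.claude/skills"
find "$HOME/.claude/skills" -name '*.md' | while read -r f; do
  echo "/$(basename "$f" .md)  <-  $f"
done
```

This also surfaces filename collisions: two files named `code-review.md` in different subdirectories would map to the same `/code-review` command.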
Testing Your Skills
After creating a skill, test it before deploying to your full workflow:
- Run the happy path first — give it a clean, clear input and check the output quality
- Test edge cases — empty inputs, ambiguous requests, out-of-scope queries
- Check the constraints — verify your “do not” rules are being respected
- Measure consistency — run the same input twice and compare outputs
Skills that work inconsistently usually need clearer step-by-step procedures or more specific constraints.
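The consistency check can be automated. A rough sketch — `run_twice` is a hypothetical helper, and the commented usage assumes the `claude` CLI is installed and supports headless `-p` prompts:

```shell
# run_twice CMD [ARGS...] — run the same command twice and compare outputs.
# Identical output suggests the skill's procedure is pinned down well;
# drift suggests the steps or constraints need tightening.
run_twice() {
  out1=$("$@")
  out2=$("$@")
  if [ "$out1" = "$out2" ]; then
    echo "consistent"
  else
    echo "drift detected"
  fi
}

# Hypothetical usage (assumes the claude CLI with headless -p mode):
#   run_twice claude -p "/code-review $(git diff HEAD)"
```

Exact string equality is a strict bar for LLM output — in practice you might diff only the structural parts (headings, verdict line) rather than the full text.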
Starting From Proven Patterns
Building skills from scratch takes time. The fastest way to get quality skills is to start from ones that already work in production — then modify them for your specific context.
The Claude Skills 360 bundle includes 2,350+ skills across 18 domains, all structured with the patterns above: persona, context, procedures, output templates, and edge case handling. When building your first skills, studying how production skills are structured teaches the patterns faster than starting from a blank file.
The free starter kit comes with 360 skills you can inspect and modify. When you’re ready to expand with the full 2,350+ skill library including 45+ autonomous agents and 12 multi-swarm orchestrators, the complete bundle is a one-time $39 with lifetime updates.
The best custom skill library is one built continuously: add a skill when you notice you’re repeating the same instructions, improve it when you get an off-target response, extend it when you find a new edge case. Give it six months and you’ll have a personal AI system that thinks exactly like you need it to.
To go deeper on the tools that make agents possible, see the Claude Code hooks guide and CLAUDE.md configuration guide — the three together form the complete picture of production Claude Code setup.