Good documentation multiplies effort: write it once, and every user of your API, library, or service benefits. Bad documentation multiplies work too: every confused developer becomes a support burden. Claude Code generates documentation that stays close to the code — API references extracted from types and implementations, READMEs that explain the why and not just the what, and architecture records that capture decisions before the context is lost.
## API Reference from Code
Generate OpenAPI 3.1 documentation from our Express route handlers.
Include all request/response schemas, auth requirements, and examples.
Claude Code reads your route handlers, request validation schemas, and response types, then generates accurate OpenAPI documentation:
```typescript
// routes/orders.ts — well-typed route that Claude Code can document
import { Router } from 'express';
import { z } from 'zod';
import { authenticate } from '../middleware/auth';
// Project-local modules used below (import paths illustrative):
import { validateBody } from '../middleware/validate';
import { CreateOrderSchema } from '../schemas/orders';
import { createOrderHandler } from '../handlers/orders';

const router = Router();

/**
 * @openapi
 * /orders:
 *   post:
 *     summary: Create a new order
 *     tags: [Orders]
 *     security:
 *       - bearerAuth: []
 *     requestBody:
 *       required: true
 *       content:
 *         application/json:
 *           schema:
 *             $ref: '#/components/schemas/CreateOrderRequest'
 *           example:
 *             items:
 *               - productId: "550e8400-e29b-41d4-a716-446655440000"
 *                 quantity: 2
 *             shippingAddressId: "7c9e6679-7425-40de-944b-e07fc1f90ae7"
 *     responses:
 *       201:
 *         description: Order created successfully
 *         content:
 *           application/json:
 *             schema:
 *               $ref: '#/components/schemas/Order'
 *       400:
 *         $ref: '#/components/responses/ValidationError'
 *       401:
 *         $ref: '#/components/responses/Unauthorized'
 *       422:
 *         description: Insufficient inventory or payment failed
 *         content:
 *           application/json:
 *             schema:
 *               $ref: '#/components/schemas/AppError'
 */
router.post('/orders', authenticate, validateBody(CreateOrderSchema), createOrderHandler);

export default router;
```
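To serve docs from those annotations, the usual wiring is a small configuration block. This is a sketch assuming the `swagger-jsdoc` and `swagger-ui-express` npm packages — the article doesn't say which generator the project uses:

```typescript
// app.ts — build the OpenAPI spec from @openapi JSDoc blocks and serve it.
// Assumes the swagger-jsdoc and swagger-ui-express packages are installed.
import express from 'express';
import swaggerJsdoc from 'swagger-jsdoc';
import swaggerUi from 'swagger-ui-express';

const app = express();

const openapiSpec = swaggerJsdoc({
  definition: {
    openapi: '3.1.0',
    info: { title: 'order-service', version: '1.0.0' },
  },
  // Files to scan for @openapi annotations
  apis: ['./src/routes/*.ts'],
});

// Interactive docs at /docs, raw spec at /openapi.json
app.use('/docs', swaggerUi.serve, swaggerUi.setup(openapiSpec));
app.get('/openapi.json', (_req, res) => res.json(openapiSpec));
```

The same `openapiSpec` object can be written to `docs/openapi.json` in a build step, which is what the CI sync check later in this guide diffs against.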
Auto-generation without JSDoc:
Read src/routes/ and src/schemas/. Generate a complete OpenAPI 3.1 spec.
Infer request bodies from Zod schemas, response types from TypeScript return types.
Include error responses based on the AppError codes in src/constants/errorCodes.ts.
Claude Code reads the type system and generates accurate specs — more reliably than hand-written JSDoc that diverges from the actual implementation.
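The translation a generator performs is mechanical. As a rough illustration (a hand-rolled, simplified schema node type — real tools walk the actual Zod AST), each schema construct maps to an OpenAPI schema object:

```typescript
// Illustrative sketch: mapping a simplified schema description to an
// OpenAPI schema object. Not a real Zod walker — just the shape of the idea.
type SchemaNode =
  | { kind: 'string'; format?: 'uuid' }
  | { kind: 'number'; int?: boolean; min?: number }
  | { kind: 'array'; items: SchemaNode }
  | { kind: 'object'; fields: Record<string, SchemaNode>; required: string[] };

function toOpenApi(node: SchemaNode): Record<string, unknown> {
  switch (node.kind) {
    case 'string':
      return node.format ? { type: 'string', format: node.format } : { type: 'string' };
    case 'number': {
      const out: Record<string, unknown> = { type: node.int ? 'integer' : 'number' };
      if (node.min !== undefined) out.minimum = node.min;
      return out;
    }
    case 'array':
      return { type: 'array', items: toOpenApi(node.items) };
    case 'object': {
      const properties: Record<string, unknown> = {};
      for (const key in node.fields) properties[key] = toOpenApi(node.fields[key]);
      return { type: 'object', properties, required: node.required };
    }
  }
}
```

A `z.object({ quantity: z.number().int().min(1) })` request body, for example, becomes `{ type: 'object', properties: { quantity: { type: 'integer', minimum: 1 } }, required: ['quantity'] }` — the `CreateOrderRequest` component in the generated spec.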
## README Structure That Works
Our README has 800 lines but developers still can't figure out
how to run the project. Rewrite it following standard structure.
````markdown
# order-service

Processes customer orders: payment validation, inventory reservation, shipping coordination.

## Quick Start (5 minutes)

```bash
# Install dependencies
npm install

# Set up environment
cp .env.example .env
# Edit .env — set DATABASE_URL and STRIPE_SECRET_KEY

# Run database migrations
npm run migrate

# Start dev server
npm run dev
# → API at http://localhost:3000
# → API docs at http://localhost:3000/docs
```

## Development

### Prerequisites

- Node.js 20+
- PostgreSQL 15+ (or run `docker compose up db` for a local DB)

### Common Commands

| Command | Description |
|---|---|
| `npm run dev` | Start dev server with hot reload |
| `npm test` | Run all tests |
| `npm run test:watch` | Tests in watch mode |
| `npm run lint` | Check code style |
| `npm run build` | Build for production |
| `npm run migrate` | Run pending DB migrations |
| `npm run migrate:create <name>` | Create a new migration |

### Project Structure

```
src/
  routes/        HTTP route handlers (thin layer — parse req/res only)
  services/      Business logic
  repositories/  Database queries
  schemas/       Zod validation schemas
  middleware/    Auth, error handling, logging
db/
  migrations/    SQL migration files (001_create_users.sql, ...)
tests/
  integration/   Tests that hit a real database
  unit/          Tests that don't require external services
```

## Architecture

The HTTP layer is thin — routes call services, services call repositories. No SQL outside repositories, no business logic in routes.

Key architectural decisions are documented in docs/decisions/. Start with the ones most relevant for new contributors.

## API Reference

API documentation is available at /docs when running locally. Production docs: https://docs.api.company.com/order-service

## Deployment

See the deployment guide. CI/CD pipeline: pushes to main deploy to staging automatically; production requires manual approval in GitHub.

## Contributing

See CONTRIBUTING.md. Before submitting a PR, run `npm test && npm run lint`. Tests must pass, linting must be clean.
````
The structure: a copy-pasteable quick start, common commands in one place, an architecture overview, then links to deeper docs. Developers can be productive in five minutes.
## Architecture Decision Records
We just decided to migrate from REST to GraphQL. Before we lose the context on why, document it as an ADR.
```markdown
<!-- docs/decisions/015-graphql-migration.md -->
# ADR-015: Migrate API from REST to GraphQL
## Status
Accepted (2026-09-18)
## Context
- 12 different dashboard clients each need different subsets of the same entities
- REST overfetching: the mobile client receives 40+ fields per order but displays 6
- Three recent bugs were caused by REST clients not updating when we added required fields
- Dashboard team spent 2 weeks building their own aggregation layer to combine 5 REST endpoints
## Decision
Migrate the customer-facing API from REST to GraphQL.
Keep REST endpoints for:
- Stripe webhook handlers (external system, not our clients)
- Health checks (/healthz, /readyz)
- Admin operations used by scripts (simpler tooling)
## Consequences
**Positive:**
- Clients fetch exactly what they need — eliminates overfetching
- Schema is the contract — breaking changes are caught at schema check time
- Dashboard team eliminates their aggregation layer
**Negative:**
- REST is universally understood; GraphQL requires learning
- Caching: per-URL HTTP caching doesn't apply to GraphQL POST requests
  (mitigation: persisted queries + CDN caching by operation hash)
- File uploads: GraphQL multipart spec is awkward (keep REST for uploads)
- N+1 queries: resolvers require DataLoader from day 1 (see ADR-016)
**Risks:**
- Junior devs unfamiliar with GraphQL resolver patterns may create N+1 bugs
  (mitigation: DataLoader lint rule, code review checklist)
## Alternatives Considered
**gRPC**: Strong typing and performance, but poor browser support without an extra proxy.
Generated clients are a real benefit, but GraphQL subscriptions cover our real-time needs.
**REST + OpenAPI code generation**: Solves the contract problem but not overfetching.
Dashboard team still needs their aggregation layer.
## Implementation Plan
1. Stand up GraphQL endpoint alongside REST (phase: parallel run)
2. Migrate dashboard clients one by one
3. Deprecate REST endpoints (90-day notice to external consumers)
4. Remove REST after all clients migrated
```
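The DataLoader mitigation named in the ADR works by batching: every `load(id)` issued while resolving one level of a query collapses into a single database call. A minimal sketch of the pattern — illustrative only; production code should use the `dataloader` package rather than this hand-rolled class:

```typescript
// TinyLoader: a stripped-down DataLoader-style batcher.
// All load() calls made in the same tick are answered by one batchFn call.
type BatchFn<K, V> = (keys: readonly K[]) => Promise<V[]>;

class TinyLoader<K, V> {
  private queue: { key: K; resolve: (v: V) => void; reject: (e: unknown) => void }[] = [];
  private scheduled = false;

  constructor(private batchFn: BatchFn<K, V>) {}

  load(key: K): Promise<V> {
    return new Promise<V>((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      if (!this.scheduled) {
        this.scheduled = true;
        // Flush once per tick: N loads become one batched query.
        Promise.resolve().then(() => this.flush());
      }
    });
  }

  private async flush(): Promise<void> {
    const batch = this.queue;
    this.queue = [];
    this.scheduled = false;
    try {
      // batchFn must return values in the same order as the keys,
      // e.g. one SELECT ... WHERE id IN (...) reordered to match.
      const values = await this.batchFn(batch.map((e) => e.key));
      batch.forEach((e, i) => e.resolve(values[i]));
    } catch (err) {
      batch.forEach((e) => e.reject(err));
    }
  }
}
```

In a resolver, `orderLoader.load(order.customerId)` called for 50 orders in one response then issues one `WHERE id IN (...)` query instead of 50 — which is the N+1 bug the ADR's lint rule and review checklist guard against.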
## Runbooks for Operations
Write a runbook for the most common incident: high error rate on /orders endpoint.
````markdown
<!-- docs/runbooks/high-order-error-rate.md -->
# Runbook: High Error Rate on /orders

**Alert**: `OrderErrorRate > 5% for 5 minutes`

## Immediate Actions (first 5 minutes)

1. **Check the error type** in Grafana → Order Service → Error Breakdown
   - `PAYMENT_DECLINED` spikes: likely Stripe incident → check status.stripe.com
   - `DATABASE_CONNECTION` errors: DB pool exhausted → skip to step 4
   - `INVENTORY_UNAVAILABLE`: expected during high traffic — monitor but no action needed

2. **Look at recent deploys** — was there a deploy in the last 30 minutes?

   ```bash
   gh run list --workflow=deploy.yml --limit=5
   ```

   If yes: check whether the error rate started after the deploy. If correlated: roll back.

3. **Check downstream services**
   - Stripe: status.stripe.com
   - Internal payment service: `kubectl get pods -n payments`
   - Inventory service: `kubectl get pods -n inventory`

4. **Check the database connection pool**

   ```sql
   -- Run on primary DB
   SELECT count(*), state FROM pg_stat_activity
   WHERE datname = 'orders' GROUP BY state;
   ```

   If the `waiting` count is > 10, the DB pool is exhausted. Scale the order service:

   ```bash
   kubectl scale deployment order-service --replicas=5
   ```

## Escalation

- 5 min: No obvious cause → page @platform-oncall
- 15 min: Service degraded → notify #incidents, begin customer communication
- 30 min: Full outage → incident commander, bridge call

## Recovery Verification

Error rate should drop within 2 minutes of the fix. Verify:

- Grafana: `OrderErrorRate` returns to < 1%
- Check 5 sample orders succeeded end-to-end
- Post incident summary in #incidents within 24h (template: docs/incident-template.md)
````
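The pool check in step 4 reduces to a single rule. A sketch of that decision as a helper, assuming rows shaped like the runbook's `pg_stat_activity` query output (this function is hypothetical, not part of the service):

```typescript
// One row per connection state, as returned by:
// SELECT count(*), state FROM pg_stat_activity WHERE datname = 'orders' GROUP BY state;
interface PoolStateRow {
  state: string; // e.g. 'active', 'idle', 'waiting'
  count: number;
}

// The runbook's rule: more than `waitingThreshold` waiting connections
// means the pool is exhausted and the service should be scaled out.
function poolExhausted(rows: PoolStateRow[], waitingThreshold = 10): boolean {
  const waiting = rows.find((r) => r.state === 'waiting')?.count ?? 0;
  return waiting > waitingThreshold;
}
```

Encoding the threshold this way (in a dashboard query or alert rule, not a one-off script) keeps the runbook's "> 10 waiting" number in one place.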
## Keeping Docs in Sync
We keep updating the code but forgetting to update the docs. How do I make doc updates part of the development workflow?
Add documentation checks to your CI pipeline:
```yaml
# .github/workflows/docs.yml
- name: Check API docs are current
run: |
# Regenerate OpenAPI spec from code
npm run generate:openapi
# Fail if spec changed but wasn't committed
git diff --exit-code docs/openapi.json || \
  (echo "API spec is out of sync. Run 'npm run generate:openapi' and commit." && exit 1)
```
And in CLAUDE.md:
```markdown
## Documentation Rules

- New API endpoints: add OpenAPI JSDoc comment (see existing routes for format)
- New architectural decisions: create ADR in docs/decisions/ BEFORE implementing
- If changing behavior described in a runbook: update the runbook in the same PR
- After each sprint: update CHANGELOG.md with user-visible changes
```
The CLAUDE.md rule means Claude Code will prompt you to update documentation when it touches relevant code.
For the developer experience tooling that includes documentation generation as part of the platform, see the platform engineering guide. For API documentation generated from Protobuf schemas instead of OpenAPI, the Protocol Buffers guide covers proto documentation. The Claude Skills 360 bundle includes documentation skill sets with templates for READMEs, ADRs, and runbooks. Start with the free tier to try documentation generation.