I stopped writing boilerplate. Claude generates it from a single comment
April 2, 2026 · Claude · 6 min read


I was setting up a new FastAPI service three months ago when I realised I had typed the same boilerplate — router, Pydantic model, dependency injection, error handling — for the sixth time. I opened Claude, dropped in a comment describing what I wanted, and got back 90% of what I would have written. But that was not the discovery. The discovery came a week later when I figured out why some generations were production-ready and others were generic noise. It had nothing to do with how I wrote the prompt.

The wrong mental model

Most engineers treat Claude like a better search engine. You describe what you want, it produces code, you paste it in. The quality of the output depends on the quality of the description. That model is not wrong — but it leaves most of the value on the table.

The variable that actually determines output quality is context, not prompt wording. Claude does not know what your project calls a "user", whether you use Pydantic v1 or v2, whether you validate at the router layer or the service layer, or what your error response shape looks like. Without that, it defaults to generic patterns from its training data — which are fine, but not yours.

The CONTEXT.md trick

The pattern I now use in every project is a single file: CONTEXT.md in the project root. Before I write a single prompt, I drop this file into a Claude Project (or paste it at the top of a long conversation). Here is the template I use:

```markdown
# Project Context — [Service Name]

## Stack
- Python 3.12 / FastAPI 0.111
- Pydantic v2 (use model_config, not class Config)
- SQLAlchemy 2.0 async (AsyncSession, not Session)
- PostgreSQL via asyncpg
- Auth: JWT in Authorization header, decoded via get_current_user dep

## Conventions
- Routers live in app/routers/, one file per domain
- Services live in app/services/, pure functions, no HTTP concerns
- DB models in app/models/, Pydantic schemas in app/schemas/
- Error responses: {"error": "...", "code": "SNAKE_CASE_CODE"}
- All endpoints return 200 with typed response model, or raise HTTPException

## Dependency patterns
- DB: Depends(get_db) -> AsyncSession
- Auth: Depends(get_current_user) -> User
- Settings: Depends(get_settings) -> Settings (lru_cache)

## Testing
- pytest-asyncio, use AsyncClient from httpx
- Factory functions in tests/factories.py for test data
- Mock external services via unittest.mock.patch

## What to always include
- Full type annotations (no Any unless unavoidable)
- Docstrings on public functions
- Input validation via Pydantic model
- Explicit error handling with HTTPException
```

This file takes about ten minutes to write once per project. What it buys you is enormous.
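Since the file is plain markdown, it is also easy to sanity-check. Here is a small stdlib sketch — the script name and section list are my own, not something the workflow requires — that warns when a section is missing before you start a session:

```python
# check_context.py — a hypothetical helper (not part of any tool) that
# warns when CONTEXT.md is missing one of the five expected sections.
from pathlib import Path

REQUIRED_SECTIONS = [
    "## Stack",
    "## Conventions",
    "## Dependency patterns",
    "## Testing",
    "## What to always include",
]


def missing_sections(text: str) -> list[str]:
    """Return the required section headers absent from the file's text."""
    return [s for s in REQUIRED_SECTIONS if s not in text]


if __name__ == "__main__":
    path = Path("CONTEXT.md")
    if path.exists():
        for gap in missing_sections(path.read_text()):
            print(f"CONTEXT.md is missing: {gap}")
```

Ten lines of checking is enough; the point is that a half-filled CONTEXT.md silently degrades every generation after it.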

The single-comment workflow

With CONTEXT.md loaded, I now write boilerplate by dropping a single comment into my editor and asking Claude to implement it. Here is a real example from last week:

```python
# POST /subscriptions
# Create a subscription for the current user
# Body: plan_id (str), billing_cycle (monthly|annual)
# - Validate plan_id exists in DB
# - Check user does not already have active subscription
# - Create subscription record, set status=active
# - Return SubscriptionResponse
```

That is the entire prompt. What Claude returns, because CONTEXT.md is loaded:

```python
# app/routers/subscriptions.py

from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy import select

from app.dependencies import get_db, get_current_user
from app.models import Plan, Subscription, User
from app.schemas.subscriptions import CreateSubscriptionRequest, SubscriptionResponse

router = APIRouter(prefix="/subscriptions", tags=["subscriptions"])


@router.post("", response_model=SubscriptionResponse)
async def create_subscription(
    body: CreateSubscriptionRequest,
    db: AsyncSession = Depends(get_db),
    current_user: User = Depends(get_current_user),
) -> SubscriptionResponse:
    """Create a new subscription for the authenticated user."""
    # Validate plan exists
    plan = await db.get(Plan, body.plan_id)
    if not plan:
        raise HTTPException(
            status_code=404,
            detail={"error": "Plan not found", "code": "PLAN_NOT_FOUND"},
        )

    # Check for existing active subscription
    result = await db.execute(
        select(Subscription).where(
            Subscription.user_id == current_user.id,
            Subscription.status == "active",
        )
    )
    if result.scalar_one_or_none():
        raise HTTPException(
            status_code=409,
            detail={"error": "Active subscription already exists", "code": "SUBSCRIPTION_EXISTS"},
        )

    subscription = Subscription(
        user_id=current_user.id,
        plan_id=body.plan_id,
        billing_cycle=body.billing_cycle,
        status="active",
    )
    db.add(subscription)
    await db.commit()
    await db.refresh(subscription)

    return SubscriptionResponse.model_validate(subscription)
```

Notice what it got right without me asking: Pydantic v2's model_validate (not .from_orm), async SQLAlchemy with await db.execute, the project's exact error response shape, the dependency injection pattern, and the response model on the decorator. That is all from CONTEXT.md, not from my comment.
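The router imports CreateSubscriptionRequest and SubscriptionResponse from app/schemas/subscriptions.py. That file is not shown above, but under the CONTEXT.md conventions it would look roughly like this — a sketch, with field names inferred from the comment, not the project's actual code:

```python
# app/schemas/subscriptions.py (hypothetical sketch)
from typing import Literal

from pydantic import BaseModel, ConfigDict


class CreateSubscriptionRequest(BaseModel):
    """Request body for POST /subscriptions."""

    plan_id: str
    billing_cycle: Literal["monthly", "annual"]


class SubscriptionResponse(BaseModel):
    """Serialized subscription, built from the ORM object."""

    # from_attributes=True is what lets model_validate(subscription) accept
    # a SQLAlchemy model instance — Pydantic v2's replacement for orm_mode.
    model_config = ConfigDict(from_attributes=True)

    id: int
    plan_id: str
    billing_cycle: Literal["monthly", "annual"]
    status: str
```

The Literal type gives you the monthly|annual validation from the comment for free: a request with any other billing_cycle fails with a 422 before your handler runs.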

Why the comment matters too

The comment is not irrelevant — it needs to capture the business logic, not the implementation. A bad comment says "create a FastAPI endpoint that creates a subscription". A good comment says what the business rules are: validate the plan exists, prevent duplicate active subscriptions, return this shape. Claude handles the implementation; you handle the intent.

Think of it as writing a specification, not writing pseudocode.

The test generation bonus

The same pattern works for tests. After generating the router code, I add a second comment:

```python
# Tests for POST /subscriptions
# - Happy path: valid plan, no existing subscription -> 200
# - Plan not found -> 404 PLAN_NOT_FOUND
# - Already has active subscription -> 409 SUBSCRIPTION_EXISTS
# - Unauthenticated request -> 401
```

Claude generates the full pytest-asyncio test file, using AsyncClient from httpx (because CONTEXT.md says so), pulling factories from tests/factories.py (because CONTEXT.md says so), and following the project's test conventions. I typically need to adjust one or two mock paths, but the structure is correct.
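Those mock-path tweaks are almost always the same issue: unittest.mock.patch has to target where a name is looked up, not where it is defined. A stdlib illustration of the difference (the function names are just for the demo):

```python
import random
from random import randint as bound_randint  # name bound at import time
from unittest import mock


def via_module() -> int:
    # Looks up random.randint at call time, so a patch on the
    # random module's attribute is visible here.
    return random.randint(1, 6)


def via_from_import() -> int:
    # Uses the local binding made above; patching random.randint
    # never reaches it. To stub this one, you would patch this
    # module's own "bound_randint" name instead.
    return bound_randint(1, 6)


with mock.patch("random.randint", return_value=4):
    assert via_module() == 4            # patch seen through the module
    assert 1 <= via_from_import() <= 6  # from-import binding unaffected
```

In the generated tests this is why the patch target is usually the service module's reference to an external client, not the client library's own module path.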

What goes in CONTEXT.md

Five sections cover 95% of what Claude needs:

Stack + versions — Claude knows Pydantic v1 and v2 exist. Tell it which one you use. Same for SQLAlchemy, ORM vs raw SQL, sync vs async. Version specifics matter more than you expect.

File/folder conventions — Where do routers live? Services? Models? This prevents Claude from generating a flat structure when you use a layered one.

Dependency patterns — Show the actual signatures of your shared dependencies. Claude will copy them exactly.

Error response shape — If your API returns {"error": "...", "code": "..."}, Claude will use that. If you use {"message": "...", "status": "error"}, say so.

Testing conventions — Test framework, async setup, factory pattern. This unlocks test generation.
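One way to make the error shape unmissable — for Claude and for reviewers — is to centralize it in a tiny helper. This is my own sketch, not something the workflow prescribes:

```python
def error_detail(message: str, code: str) -> dict[str, str]:
    """Build the project's error payload: {"error": ..., "code": ...}."""
    return {"error": message, "code": code}


# Typical use inside a router:
# raise HTTPException(status_code=404,
#                     detail=error_detail("Plan not found", "PLAN_NOT_FOUND"))
```

Once the helper exists and is mentioned in CONTEXT.md, generated code calls it instead of re-typing the dict, and the shape can never drift between endpoints.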

Keeping CONTEXT.md up to date

The file lives in git. Whenever I change a major convention — say, switching from Pydantic v1 to v2, or adopting a new dependency pattern — I update CONTEXT.md in the same PR. It takes three minutes and means Claude stays in sync with the codebase.
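If you want more than discipline backing that habit, a pre-commit check can nag you. A stdlib sketch — the path heuristic and script are my own, not part of the workflow:

```python
# .git/hooks/pre-commit helper (hypothetical): remind the committer when
# convention-bearing files change but CONTEXT.md was not touched.
import subprocess


def needs_context_update(changed: list[str]) -> bool:
    """True if app/ files changed in this commit without CONTEXT.md."""
    touches_app = any(p.startswith("app/") for p in changed)
    return touches_app and "CONTEXT.md" not in changed


if __name__ == "__main__":
    try:
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        out = ""
    files = [line for line in out.splitlines() if line]
    if needs_context_update(files):
        print("Reminder: app/ changed but CONTEXT.md did not — still in sync?")
```

A warning is enough; hard-failing the commit is overkill since most app/ changes do not alter conventions.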

For larger projects I keep separate CONTEXT files per service: auth-service/CONTEXT.md, billing-service/CONTEXT.md. Each one is specific to that service's conventions. The overhead is minimal; the payoff is that generated code rarely needs structural changes.

What I actually save

A typical CRUD endpoint in our stack — router, schema, service function, error handling, test skeleton — used to take me 20–30 minutes to write from scratch. With this pattern it takes 3–5 minutes: write the comment, review the output, adjust any business logic Claude could not infer from the comment, run the tests.

More than the time, what I save is the mental context switch. Starting from a blank file requires loading the project conventions back into my head. Starting from a comment means I stay in problem-solving mode — thinking about the business logic — rather than typing mode.

Try it today

Create a CONTEXT.md at the root of your current project. Spend 10 minutes filling in the five sections above. Load it into a Claude Project or paste it at the top of your next session. Then write your next endpoint as a comment instead of code.

The first time you see Claude use your exact dependency pattern, your exact error shape, and your exact test setup without you asking — that is the moment the mental model shifts from "AI autocomplete" to "engineer who already knows the project".
