Setting Up on a Codebase
How to configure Claude Code for effective work on existing projects. Get your CLAUDE.md right, set up verification loops, and create team-wide conventions that compound over time.
The Setup Checklist
Before diving into a new codebase with Claude Code, follow this sequence. It takes about 15 minutes. The productivity difference over a week is hours.
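In condensed form (each step is covered in detail below):
- Run /init, then trim and refine the generated CLAUDE.md: build commands, key locations, gotchas, git conventions.
- Have Claude explore the codebase and map its structure before writing any code.
- Set up verification loops: tests, lint and format hooks, CI references, and visual feedback for UI work.
- Configure permissions to match the task and your risk tolerance.
- Check shared configuration (CLAUDE.md, .claude/commands/, .mcp.json) into git so the whole team benefits.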
Creating an Effective CLAUDE.md
The CLAUDE.md file is Claude's introduction to your codebase. It's loaded automatically at the start of every session, making it the highest-leverage configuration point you have. Instructions here get followed more reliably than anything you type in chat.
The Hierarchy
CLAUDE.md files stack in a hierarchy. More specific files override general ones:
- ~/.claude/CLAUDE.md: your personal defaults. Applies to all projects. Good for your preferred style and common patterns you always use.
- CLAUDE.md at the project root: shared with your team via git. The main configuration file.
- CLAUDE.local.md: personal project overrides. Gitignored. Local paths, personal preferences.
- CLAUDE.md in child directories: instructions for specific subsystems, loaded when Claude works in those directories.
What to Include
Build & Test Commands
## Build Commands
npm run build # Full production build
npm run dev # Development server on port 3000
npm test # Run test suite
npm test -- --watch # Watch mode for TDD
Claude needs to know how to build, test, and run your project. Be explicit about flags and options.
Key File Locations
## Key Locations
- Authentication: src/auth/ (not src/users/)
- API routes: src/routes/api/
- Database models: src/models/
- Shared utilities: src/lib/
Point out locations Claude can't infer from folder names. Especially important when conventions are non-obvious.
Project-Specific Gotchas
## Gotchas
- The legacy API in /v1 is deprecated but still receives traffic
- Never import directly from src/internal/. Use the public API
- Tests require DATABASE_URL to be set (use .env.test)
- The CI uses Node 18, not 20
Things that look right but are wrong in this codebase. Save Claude from making mistakes you've already made.
Branch & Commit Conventions
## Git Conventions
- Branch naming: feature/JIRA-123-description
- Use conventional commits (feat:, fix:, chore:)
- Rebase onto main before merging
- PRs require passing CI and one approval
What to Leave Out
Models naturally try to use all information they're given, even when irrelevant. A bloated CLAUDE.md doesn't just waste tokens; it actively degrades performance by introducing distractors.
Never send an LLM to do a linter's job. ESLint, Prettier, Black: these are faster, cheaper, and more reliable. Claude will read your existing code and match patterns anyway.
If your folder is named components, you don't need to explain it contains components. Only document non-obvious structure.
Generic advice like "write clean code" or "follow best practices" wastes tokens and doesn't change behavior. Be specific or skip it.
Don't paste full API docs. Point to them during specific tasks instead of loading everything into every session.
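A single pointer in CLAUDE.md beats an embedded spec (the path and section name here are illustrative):
## External Docs
- Payments API reference: docs/payments-api.md (read only when working on payment endpoints)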
Effective CLAUDE.md files focus on essential information only: no redundancy, high signal-to-noise ratio. If yours has grown bloated with every possible instruction, it's probably making Claude worse. Run /init to generate a starting point, but treat it as a draft to refine, not a finished product.
Understanding the Codebase with Claude's Help
Before writing code, have Claude explore and map the terrain. This investment pays off every time you ask Claude to make changes.
Initial Exploration
Start with the big picture:
Analyze this project's structure. What are the main modules, what patterns does it use, and how do they interact? Don't write any code.
Understand key flows:
Trace the request flow from API endpoint to database for creating a new user. Walk me through each file it touches.
Find the conventions:
Look at 3-5 existing components in this codebase. What patterns do they follow? How should new components be structured to match?
Creating Documentation for Claude
If your codebase lacks documentation, create guidance documents that help Claude navigate:
- Architecture overviews: text-based logic flows, how data moves through the system, where the boundaries are.
- Schema references: structured data types, API contracts, database schemas.
- Decision records: documents that explain "why" to the agent, capturing past decisions and their reasoning.
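One possible layout (the file names are illustrative, not a convention Claude Code requires):
docs/
  architecture.md   # logic flows and system boundaries
  schemas.md        # data types, API contracts, database schemas
  decisions.md      # past decisions and the reasoning behind them
Point to these files from CLAUDE.md so Claude knows where to look.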
Use subagents to keep exploration out of your main context. A prompt like "Use a subagent to investigate how session management works" returns just the conclusions, keeping your working context clean.
Multi-Session Workflows
Complex tasks often span multiple sessions. Context resets, you close your laptop, or auto-compaction summarizes your history. Without preparation, you lose progress. Here's how to make work survive context boundaries.
Plan Files
For any task that might take multiple sessions, have Claude write the plan to a markdown file:
# plan.md - Authentication Migration
## Goal
Replace JWT-based auth with session-based auth
## Steps
1. [x] Create session store (src/lib/sessions.ts)
2. [x] Update login endpoint
3. [ ] Update middleware to check sessions
4. [ ] Migrate existing users
5. [ ] Remove JWT dependencies
## Current Status
Completed steps 1-2. Next: middleware update.
## Notes
- Session TTL: 24 hours
- Redis for session storage in production
Start each new session by pointing Claude at the plan file. The plan persists even when context doesn't.
Set up a plan file:
We're going to work on [task]. Create a plan.md file with the goal, steps, current status, and any important notes. We'll use this to track progress across sessions.
Git Checkpoints
Commit working code before moving to the next step. If something goes wrong, you can revert to a known-good state:
# After completing each logical piece
git add -A && git commit -m "feat: complete session store implementation"
# If the next step goes wrong
git revert HEAD # or reset to the working commit
Small, frequent commits create recovery points. If Claude tangles something, you lose minutes instead of hours.
Treat each completed step as a checkpoint. Commit before experimenting. The cost of extra commits is near zero; the cost of lost work is high.
Setting Up Verification
Give Claude ways to verify its own work. This feedback loop can improve the quality of results by 2-3x.
Test Coverage
Configure Claude to run tests after changes. In your CLAUDE.md:
## After Code Changes
Always run: npm test -- --related
Before considering any PR complete: npm test
Set up TDD workflow:
Write tests for the feature I'm describing. Cover the happy path and edge cases. Don't implement yet. Let's get the tests right first.
Linting & Formatting
Never rely on Claude to follow style guides. Use automated tools:
## Code Style
Run `npm run lint` before considering any change complete.
Format with `npm run format` after making changes.
Use hooks to automatically run lint and format after edits. This shifts craft from writing perfect code to building reliable systems around AI output.
// .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm run lint:fix && npm run format" }
        ]
      }
    ]
  }
}
CI Integration
Reference your CI configuration so Claude knows what checks must pass:
## CI Requirements
All changes must pass the checks in .github/workflows/ci.yml:
- Linting (eslint)
- Type checking (tsc --noEmit)
- Tests (npm test)
- Build (npm run build)
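For reference, a minimal workflow covering those checks might look like the sketch below (illustrative only; your actual ci.yml is the source of truth):
# .github/workflows/ci.yml (illustrative sketch)
name: CI
on: [push, pull_request]
jobs:
  checks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18  # CI uses Node 18, not 20 (see Gotchas)
      - run: npm ci
      - run: npm run lint
      - run: npx tsc --noEmit
      - run: npm test
      - run: npm run build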
Visual Verification
For UI work, give Claude a way to see its own output. By default, Claude can't view what it builds. A browser automation MCP server closes this loop.
Playwright MCP lets Claude take screenshots, navigate pages, and verify visual output matches expectations:
// .mcp.json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
With visual verification enabled, Claude can iterate on UI work the way a human developer would: build, view, adjust, repeat.
Set up visual feedback:
Take a screenshot of the current state. Compare to this target [paste image]. What's different? Fix it and screenshot again.
Permission Configuration
Control what Claude can do. Match permission levels to the task and your risk tolerance.
The Autonomy Dial
- Default permissions: Claude asks before any mutation. Good for unfamiliar codebases, production environments, or learning the tool.
- Custom allowlist via .claude/settings.json: permit file edits and common commands, block destructive operations. Where most users land.
- The --dangerously-skip-permissions flag: Claude runs uninterrupted. The daily driver for many Anthropic engineers in version-controlled repos.
Configuring Permissions
Your options for allowing tools:
- During session: When Claude asks permission, select "Always allow" for tools you trust
- Via /permissions command: Add specific tools interactively
- In .claude/settings.json: Configure once, check into git, share with team
- CLI flag: --allowedTools for session-specific permissions
Recommended Allowlist
A reasonable starting configuration for most projects:
{
  "permissions": {
    "allow": [
      "Edit",
      "Write",
      "Bash(npm test)",
      "Bash(npm run lint)",
      "Bash(npm run build)",
      "Bash(git add:*)",
      "Bash(git commit:*)",
      "Bash(git status)",
      "Bash(git diff:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)",
      "Bash(git push --force:*)",
      "Bash(git reset --hard:*)"
    ]
  }
}
Many power users create a shell alias: alias cc="claude --dangerously-skip-permissions". The practical risk is low in a version-controlled codebase where you can revert anything.
Team Conventions
For teams, create shared configuration that compounds your collective learning.
Shared CLAUDE.md
When Claude does something incorrectly, add it to CLAUDE.md so it knows not to repeat the mistake. This creates institutional memory:
## Team Standards
- All PRs require tests for new functionality
- Use conventional commits (feat:, fix:, chore:, docs:)
- Maximum function length: 50 lines
- No TODO comments (file issues instead)
## Past Mistakes to Avoid
- Don't use the deprecated UserService class. Use AuthService
- The Redis client must be imported from src/lib/redis, not redis package directly
- Tests in /e2e require the dev server running on port 3000
Boris Cherny tags @.claude on coworkers' PRs to add items to CLAUDE.md as part of the PR. This transforms code reviews into meta-work that improves the development system.
Shared Slash Commands
Create project-specific commands in .claude/commands/:
# .claude/commands/fix-issue.md
# Fix Issue $ARGUMENTS
1. Run `gh issue view $ARGUMENTS` to understand the issue
2. Search the codebase for relevant files
3. Implement the fix
4. Write tests if appropriate
5. Commit with a message referencing issue #$ARGUMENTS
Now anyone on the team can run /project:fix-issue 1234.
Shared MCP Configuration
Configure MCP servers in .mcp.json for team-wide access to tools:
{
  "mcpServers": {
    "postgres": {
      "command": "mcp-postgres",
      "args": ["--connection-string", "$DATABASE_URL"]
    },
    "sentry": {
      "command": "mcp-sentry",
      "args": ["--token", "$SENTRY_TOKEN"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
Each MCP server adds tool definitions to your context window. Enable what you need, disable what you don't. Check with /context.
Instrumenting for Quality
Set up visibility into how AI-assisted development affects your codebase over time.
What to Monitor
- Are we maintaining coverage? AI tends to add code faster than tests if you're not careful.
- Is code quality stable? Watch for new categories of warnings.
- Are changes staying manageable? Large PRs are harder to review and more likely to have issues.
Agents tend to accrete code without automatic refactoring. Files grow to thousands of lines without intervention.
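A quick way to surface oversized files (assuming a TypeScript project with code under src/; adjust the path and extension for your stack):
# List source files over 500 lines, largest first
find src -type f -name "*.ts" | xargs wc -l | awk '$1 > 500 && $2 != "total"' | sort -rn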
Code Health Reviews
Steve Yegge recommends spending 30-40% of your time on code health. Schedule regular reviews:
Regular code health check:
Review this codebase for code smells: files over 500 lines, areas with low test coverage, duplicated logic, dead code, debug cruft. File issues for anything that needs followup.
The 40% Rule
Without active maintenance, AI-assisted codebases accumulate invisible technical debt that slows your agents down. Budget time for:
- Finding and filing issues for code smells
- Breaking up large files (>500 lines)
- Consolidating redundant systems
- Cleaning up debug cruft, old docs, build artifacts
- Refactoring before major expansions
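To make this routine, the code health prompt above can become a shared slash command (a sketch reusing the .claude/commands/ mechanism from earlier):
# .claude/commands/code-health.md
Review this codebase for code smells: files over 500 lines, areas with low test coverage, duplicated logic, dead code, debug cruft. File issues for anything that needs followup.
Then anyone on the team can run /project:code-health on a regular cadence.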
In AI development environments, it can often be cheaper to rebuild from scratch than to try to fix what's broken. Don't be shy about reverting to the last stable state.