Proficient, Want New Tips
Power-user patterns and advanced techniques for developers already comfortable with AI-assisted coding. Each tip is battle-tested by developers shipping production code daily.
Parallel Claude Instances
Run 5-10 Claude instances simultaneously for different purposes. Boris Cherny (creator of Claude Code) runs 5 instances locally in numbered terminal tabs, plus 5-10 additional sessions on claude.ai/code. He uses system notifications to know when Claude needs input and treats AI as schedulable capacity, optimizing for attention allocation rather than generation speed.
Open each instance in its own numbered terminal tab and run claude. Enable system notifications so you know when any instance needs attention. Start mobile sessions for async tasks.
Git Worktrees for Isolation
Git worktrees let you have multiple branches checked out simultaneously in different directories. Each worktree gets its own Claude instance with completely isolated changes. Instance 1 refactors the auth system while Instance 2 builds a dashboard component. No merge conflicts, no cross-contamination.
# Create worktrees for parallel work
git worktree add ../project-feature-a feature-a
git worktree add ../project-feature-b feature-b
git worktree add ../project-bugfix hotfix/login-issue
# Open each in a new terminal, run Claude in each
cd ../project-feature-a && claude
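The full worktree lifecycle is easy to script. A minimal sketch, using a throwaway repo so it is safe to run anywhere (paths and branch names are illustrative):

```shell
# Sketch: exercise the worktree lifecycle in a throwaway repo
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"

# One directory per branch, each ready for its own Claude instance
git -C "$repo" worktree add "$repo-feature-a" -b feature-a
git -C "$repo" worktree list

# After the branch merges, remove the worktree and prune stale metadata
git -C "$repo" worktree remove "$repo-feature-a"
git -C "$repo" worktree prune
```

Worktrees share one object database, so they are cheap; only the checked-out files are duplicated.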
Subagents for Exploration
When you need to research something, spawn a subagent instead of polluting your main context. The subagent explores, takes notes, and returns a summary, keeping your main session clean and focused. Boris Cherny uses specialized subagents like code-simplifier and verify-app to automate common workflows.
Ask Claude to spawn a subagent for research:
Use a subagent to find all places in this codebase where we handle authentication errors. Summarize the patterns you find.
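Named subagents like code-simplifier live as markdown files with YAML frontmatter in .claude/agents/. A sketch of what such a definition might look like (the name, tool list, and body here are illustrative, not Boris Cherny's actual configuration):

```markdown
---
name: code-simplifier
description: Simplifies recently changed code without altering behavior
tools: Read, Grep, Edit
---
You are a refactoring specialist. Examine the changes you are pointed at,
remove needless abstraction and duplication, and report what you simplified.
Never change observable behavior.
```

Once defined, Claude can delegate to the agent by name, and the exploration happens in the subagent's context rather than yours.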
Slash Commands as Infrastructure
Create custom slash commands for repetitive workflows. Put a markdown file in .claude/commands/ and it becomes available as /project:filename. Boris Cherny uses /commit-push-pr dozens of times every day. Commands are checked into git so your whole team shares them.
# .claude/commands/fix-issue.md
# Fix Issue $ARGUMENTS
- Run gh issue view $ARGUMENTS to understand the issue
- Search the codebase for relevant files
- Implement the fix
- Write tests if appropriate
- Commit with a message referencing issue #$ARGUMENTS
/project:fix-issue 1234 handles the entire workflow. Commands can include inline bash to precompute information. Personal commands go in ~/.claude/commands/.
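The inline-bash feature works via !`command` interpolation: the command's output is injected into the prompt before Claude sees it. A hedged sketch of a command file using it (the allowed-tools list and wording are illustrative):

```markdown
---
allowed-tools: Bash(git status:*), Bash(git diff:*)
description: Summarize the current working-tree changes
---
## Context
- Status: !`git status --short`
- Diff: !`git diff HEAD`

Summarize these changes and suggest a commit message.
```

Because the status and diff are precomputed, Claude starts with the facts in context instead of spending turns gathering them.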
MCP Servers for Tool Extension
MCP (Model Context Protocol) connects Claude to external tools and data sources. Without it, Claude is limited to filesystem and shell. With it, Claude can query databases, control browsers, post to Slack, pull Sentry errors, and more. Configurations are shared in .mcp.json.
Each connected server's tool definitions consume context window space; check usage with /context.
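Servers can also be registered from the CLI instead of hand-editing JSON. A sketch, assuming the claude CLI is installed (the server name and package are illustrative):

```shell
# Register an MCP server for this project (--scope project writes to .mcp.json,
# so the config is checked in and shared with the team)
claude mcp add playwright --scope project -- npx @playwright/mcp@latest

# Inspect what's registered
claude mcp list
```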
Visual Verification with Playwright
Claude can't see UI output by default. A Playwright or Puppeteer MCP server lets Claude view its own frontend work, take screenshots, and iterate until visual output matches your expectations. This closes the feedback loop for UI development.
// .mcp.json - Playwright MCP setup
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
Extended Thinking Triggers
Claude Code allocates more reasoning compute when your prompt includes the word "think". Different phrases trigger different thinking budgets. This is a Claude Code-specific feature that doesn't apply to the chat interface or the API.
# Quick fix - no trigger needed
"Fix the null pointer exception in auth.js"
# Architecture decision - maximum thinking
"Ultrathink about how to structure this API. What are
the tradeoffs between REST and GraphQL for our use case?"
# Debugging - moderate thinking
"Think hard about why this race condition occurs."
The Rule of Five
Jeffrey Emanuel discovered that agents produce their best work after 4-5 review iterations. First outputs are drafts. Have Claude review its own proposals and implementations repeatedly. Each review should be slightly broader than the last. After 4-5 passes, the agent will say "this is as good as we can make it." That's when you can moderately trust the output.
After Claude implements something:
Review this implementation for bugs, edge cases, and security issues. Then do a second review looking at architecture and design. What could be simpler?
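The passes can even be mechanized with headless mode. A sketch, assuming the claude CLI is on PATH and an implementation session already exists to resume (-p prints a single response; -c continues the most recent conversation):

```shell
# Sketch: five review passes, each slightly broader than the last
scopes="bugs and edge cases
error handling and security
architecture and design
naming and readability
anything at all that could be simpler"

echo "$scopes" | while IFS= read -r scope; do
  claude -c -p "Review the implementation again, focusing on $scope. Fix what you find."
done
```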
Writer-Reviewer Split
For critical code, use a two-phase approach with separate context. Claude reviewing its own work in the same session tends to defend its choices. A fresh instance with no investment in the code reviews more critically. This catches issues the writer might rationalize away.
Step 1: Claude A writes the code with full creative freedom.
Step 2: /clear or start a fresh instance; Claude B reviews with strict criteria.
Give the reviewer instance a critical persona: 'You are a senior engineer reviewing a junior developer's code. Find problems. Be skeptical of claims that something works.'
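A hedged sketch of the split using headless mode, where the second claude -p process starts with zero context and therefore no investment in the code (the task and file paths are illustrative):

```shell
# Phase 1: the writer (run inside the project repo)
claude -p "Implement the retry logic described in TODO.md"

# Phase 2: a fresh process means a fresh context; review only the diff
git diff > /tmp/changes.diff
cat /tmp/changes.diff | claude -p "You are a senior engineer reviewing a \
junior developer's code. Find problems in this diff. Be skeptical of \
claims that something works."
```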
Checklist-Driven Migrations
For large tasks with many steps (migrations, multi-file refactors, launch checklists), have Claude write a checklist to a markdown file. The file serves as progress tracking and a recovery point if context resets. If Claude hits the 20-turn cliff, the checklist shows what's done and what's left.
## Migration Checklist
- [x] Update imports in src/components/*
- [x] Migrate state management in src/stores/*
- [ ] Update tests in src/__tests__/*
- [ ] Update documentation
- [ ] Run full test suite
- [ ] Manual QA pass
Start a large migration:
Create CHECKLIST.md with every file that needs updating for this migration. Work through items one by one, checking them off as you go.
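Because the checklist is plain markdown, progress is trivially scriptable. A minimal sketch, assuming the checkbox format shown above (the sample file here stands in for a real CHECKLIST.md):

```shell
# A sample checklist in the format above (illustrative)
printf -- '- [x] Update imports in src/components/*\n- [ ] Update tests\n' > CHECKLIST.md

# Count completed vs. remaining items
completed=$(grep -c '^- \[x\]' CHECKLIST.md)
remaining=$(grep -c '^- \[ \]' CHECKLIST.md)
echo "$completed done, $remaining to go"   # → 1 done, 1 to go
```

The same one-liner works as a sanity check after a context reset: if the counts match your expectations, Claude resumed in the right place.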
Plan Mode First
Boris Cherny starts most sessions in Plan mode (Shift+Tab twice). He goes back and forth with Claude until satisfied with its plan, then switches into auto-accept edits mode. Claude can usually one-shot the implementation from there. Planning prevents unwanted changes by establishing intent and constraints before execution.
Ask Claude to write its plan to a file (e.g., plan.md). Plans in files persist across context resets. Start each new session by reading the plan file to restore context.
Verification Loops
"Probably the most important thing: give Claude a way to verify its work." (Boris Cherny) This feedback loop will 2-3x the quality of results. Tests, screenshots, visual comparisons: anything that lets Claude iterate until it meets standards. You don't trust; you instrument.
For visual work:
Take a screenshot of the current state. Compare to this target [paste image]. What's different? Fix it and screenshot again.
40% Time on Code Health
Steve Yegge recommends spending 30-40% of your time on code health. Have agents conduct regular code inspections: large files needing refactoring, low test coverage, duplicated systems, legacy code, dead code. Without this, you rapidly accumulate invisible technical debt that slows your agents down.
Schedule regular code health reviews:
Review this codebase for code smells: large files, low coverage, duplicated logic, dead code. File issues for anything that needs followup.
PostToolUse Hooks
Use hooks to handle final formatting details automatically. Boris Cherny uses PostToolUse hooks to prevent CI errors by running lint, format, and typecheck after edits. This shifts craft from writing perfect code to building reliable systems around AI output.
// .claude/settings.json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run lint:fix && npm run format"
          }
        ]
      }
    ]
  }
}
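A hook command receives a JSON payload on stdin describing the tool call, so it can act on just the edited file. A hedged sketch of such a helper (assumes jq is installed; a sample payload stands in for the real stdin input, and the prettier invocation is illustrative):

```shell
# Hypothetical hook helper: format only the file that was just edited.
# In a real hook this payload arrives on stdin from Claude Code.
payload='{"tool_name":"Edit","tool_input":{"file_path":"src/App.tsx"}}'

file=$(printf '%s' "$payload" | jq -r '.tool_input.file_path // empty')
case "$file" in
  *.ts|*.tsx) echo "would run: npx prettier --write $file" ;;
  *)          echo "skip: $file" ;;
esac
```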
Control which tools Claude may use without asking via /permissions. Share permission configurations with your team.
The Common Thread
All these techniques share one philosophy: treat Claude as infrastructure, not magic. Optimize for systems that reliably produce needed outputs, not just better individual responses. Context orchestration, not prompt engineering, is the game.