Using It But Struggling
Core practices for developers who've tried AI coding tools but aren't getting consistent results. Most problems trace back to a few structural issues, not to the model or your prompts.
The governing rule:
If you don't control context, you don't control outcomes.
The Core Problem
Most struggles with AI coding tools come from treating Claude like a chatbot that happens to edit files. It's not. Claude is an autonomous agent operating in your codebase with a limited context window, a constrained set of tools, and permission gates you control.
When something goes wrong, the instinct is to blame the model or try better prompts. Usually, the real issue is structural: what information Claude has, how much context is accumulated, and whether you've set up verification loops.
When Claude produces bad output, ask: What context did I give it? How many turns have we been going? Can Claude verify its own work? The answer is almost always in one of those three areas.
If you're struggling, you might be overthinking it. The best way to build intuition is to try things. Start a session, give Claude a small task, see what happens. Experiment without fear (you have version control). Real learning happens through doing, not reading about doing.
Context Management
The most common cause of poor AI output is poor context. Claude doesn't know what you know. You have to tell it. But there's a crucial balance: too little context and Claude guesses wrong; too much and it gets overwhelmed.
The CLAUDE.md File
Create a CLAUDE.md file in your project root. This is the highest-leverage configuration point you have. Instructions here get followed more reliably than anything you type in the conversation.
What to include
- Build and test commands (npm run build, pytest -xvs)
- Branch naming conventions and merge preferences
- Key file locations Claude can't infer
- Project-specific gotchas and constraints
- Environment setup quirks
What to leave out
- Style guides (let linters handle that)
- Obvious folder descriptions
- Generic instructions like "write clean code"
- Entire API documentation
- Anything over 50-100 lines total
Models naturally try to use all information they're given, even when irrelevant. A bloated CLAUDE.md doesn't just waste tokens; it actively degrades performance by introducing distractors. Keep it lean.
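Putting those rules together, a lean CLAUDE.md might look like this (the commands, paths, and conventions below are illustrative, not prescriptive):

```markdown
# Project notes for Claude

## Commands
- Build: npm run build
- Test: npm test
- Lint: npm run lint

## Conventions
- Branches: feature/<ticket-id>-short-description
- Squash-merge to main; never push directly

## Gotchas
- src/legacy/ is frozen; do not modify it
- Integration tests require the dev database: docker compose up -d db
```

Everything here is something Claude can't infer and will actually need; nothing duplicates what a linter or the folder structure already says.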
Context Commands
Three commands you need to internalize:
/context
Shows what's consuming your context window. Run this when Claude seems slow or unfocused. You'll often find old tool results, irrelevant file contents, or stale history eating your token budget.
/compact
Summarizes conversation history to free up tokens while preserving key information. Auto-compaction triggers around 80% capacity. Use /compact sparingly and only when your next steps are very clear, as compaction loses nuance. Starting a fresh session is often better.
/clear
Wipes conversation history completely. Use between unrelated tasks. A debugging session and a feature build shouldn't share context. Clear constantly, every 20-30 minutes of active work.
Check your current context usage:
/context
The Workflow That Works
Explore → Plan → Code → Commit
The default workflow for any non-trivial task. Skipping steps is the main source of wasted work.
Explore
Have Claude read and understand relevant files first. The explicit "don't write code" matters. Without it, Claude jumps to implementation.
Read src/auth/ and understand how authentication currently works. Don't write code yet.
Plan
Ask Claude to propose an approach before coding. Review it, push back, ask questions, refine. The words "think hard" actually allocate more reasoning compute.
Now propose how you'd implement OAuth support. Think hard about edge cases.
Code
Only after the plan looks good: execute. Generate code in small, verifiable chunks. Don't try to implement everything at once.
Implement step 1 from your plan: the OAuth callback handler.
Commit
Review, test, and commit before moving to the next piece. Small commits are your undo mechanism. If something goes wrong later, you can always revert.
Run the tests for the files you modified. If they pass, commit with a descriptive message.
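The commit-as-checkpoint idea can be sketched with plain git. This is a self-contained toy repository (the file name and commit messages are illustrative); the point is that small commits make recovery a single command:

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# Checkpoint: commit a known-good state
echo "ok" > handler.txt
git add handler.txt
git commit -qm "feat(auth): add OAuth callback handler"

# A later change turns out to be wrong
echo "broken" > handler.txt
git add handler.txt
git commit -qm "refactor: simplify handler"

# Because the commits are small, rolling back is one command
git revert --no-edit HEAD
cat handler.txt
```

After the revert, handler.txt is back to its known-good content. If you only commit at the end of a large task, there is no equivalent checkpoint to return to.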
Use Shift+Tab twice to enter Plan mode. Go back and forth with Claude until satisfied with its plan, then switch to auto-accept mode. Claude can often 1-shot the implementation from there.
The Context Cliff
This is one of the most important things to understand: agent performance degrades as context fills up. Use /statusline to monitor your context buffer and plan to start fresh sessions around 60%.
Symptoms of an overloaded context:
- Claude repeats itself
- Instructions get ignored or "forgotten"
- Claims to have done things it didn't
- Quality of code decreases
- Context feels "muddy"
If you're hitting these symptoms after 15-20 turns on a single task, the task itself is probably too large. This isn't a soft guideline. Effective users treat context as a constraint and design around it.
Reset proactively
If you've been going back and forth for 15+ turns on a single task, something is wrong. Either the task needs decomposition, or you need to start fresh with a clear summary of where you are.
Externalize state
Use commits, checklists, or markdown files to track progress. If context resets, you have a record of what's done and what's left. The checklist becomes documentation and recovery point.
Decompose before you start
If a task will obviously consume significant context, break it into independent subtasks before starting. Use /statusline to monitor usage. Each subtask should complete well before 60% capacity.
For long tasks, start with a checklist:
Create CHECKLIST.md with every file that needs updating for this migration. We'll work through items one by one.
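The resulting checklist file might look like this (the file names are hypothetical):

```markdown
# DB client migration

- [x] src/db/connection.ts (switched to new client)
- [ ] src/db/queries/users.ts
- [ ] src/db/queries/orders.ts
- [ ] Update integration tests
- [ ] Remove old client from package.json
```

Because the file lives on disk, a context reset or a fresh session loses nothing: Claude re-reads the checklist and picks up at the first unchecked item.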
Verification Loops
"Probably the most important thing: give Claude a way to verify its work."
This feedback loop will 2-3x the quality of results. Without it, you're flying blind and hoping Claude got it right.
Types of Verification
Automated Tests
The gold standard. Claude excels when it has a clear target to iterate against. Each red-to-green cycle gives immediate feedback.
Visual Feedback
For UI work, screenshots are essential. Claude's visual accuracy is uneven: sometimes perfect, sometimes surprisingly off. After 2-3 iterations with visual feedback, results converge to what you need.
Separate Reviewer
Use a second Claude instance (or a fresh context) to review the first one's work. Separate context matters. Claude reviewing its own work in the same session tends to defend its choices.
For backend logic: Write tests first → confirm they fail → implement until green → commit tests and implementation separately. This protects you from Claude solving the wrong problem.
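The red-to-green cycle can be sketched end to end in a terminal. This is a toy example (the module, function, and file names are illustrative): the test is written first, confirmed failing, and only then is the implementation written until it passes.

```shell
set -e
work=$(mktemp -d) && cd "$work"

# 1. Write the test first
cat > test_slugify.py <<'PY'
from slugify import slugify
assert slugify("Hello World") == "hello-world"
assert slugify("Hi, there!") == "hi-there"
print("green")
PY

# 2. Confirm it fails (red): the module under test doesn't exist yet
if python3 test_slugify.py 2>/dev/null; then echo "unexpected pass"; else echo "red"; fi

# 3. Implement until the test passes (green)
cat > slugify.py <<'PY'
import re

def slugify(text):
    """Lowercase, drop punctuation, join words with hyphens."""
    return "-".join(re.findall(r"[a-z0-9]+", text.lower()))
PY
python3 test_slugify.py
```

From here, commit the test and the implementation separately. The failing run in step 2 is the part people skip, and it's the part that proves the test actually constrains the implementation.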
Common Failure Patterns
These are the structural issues that cause most problems. Each has a mechanical fix. It's not about better prompts.
Premature Execution
Claude jumps straight to writing code before understanding the codebase. Make exploration an explicit first step and say "don't write code yet."
Context Contamination
/clear between unrelated tasks. Clear every 20-30 minutes on heavy usage. The cost of re-establishing context is lower than degraded performance.
CLAUDE.md Bloat
An overgrown CLAUDE.md introduces distractors into every session. Prune it back to essentials; keep it under 50-100 lines.
One-Shot Syndrome
Asking for an entire feature in one prompt. Decompose into small, verifiable steps and commit after each one.
Over-Specification
Dictating every implementation detail up front crowds the context with constraints. State the goal and the hard requirements, then review the plan.
Ignoring Terminal Output
Build and test output is a verification loop. Have Claude run the commands and read the results before moving on.
Not Using Subagents
One session doing everything accumulates contaminated context. Delegate review or research to a second instance with a fresh context.
Quick Recovery Checklist
When things aren't working, run through this checklist:
Run /context to see what's consuming your window. If you're over 70%, use /compact.
Check /statusline for your context buffer. If you're past 60%, consider starting a fresh session with /clear.
Use /plan or Shift+Tab twice to enter planning mode. Have Claude propose an approach before executing. This often reveals the root issue.
Commit good results before moving to the next step. If something goes wrong, you can revert to a known-good state. Small commits create checkpoints that make complex work recoverable.
Sometimes the fastest path forward is: revert all changes, /clear, and start the task fresh with lessons learned. Rebuilding from scratch is often cheaper than trying to fix a tangled mess.
Practice Makes Proficient
If you've read this far but haven't been experimenting along the way, now is the time. Open a terminal, start Claude, and try something. The gap between struggling and proficient is usually just practice.
Low-stakes experiments
- Ask Claude to explain code you already understand
- Have it add comments to a function
- Request a small refactor you can easily verify
- Write a test for existing functionality
Safe to fail
- Work on a branch you can delete
- Commit before experimenting
- Keep tasks small and reversible
- Use /clear liberally between attempts
The "Try This" boxes throughout this guide aren't decorative. Each one is a practical starting point for building the intuition that makes everything else click.