Using It But Struggling

Core practices for developers who've tried AI coding tools but aren't getting consistent results. Most problems trace back to a few structural issues, not the model, not your prompts.

The governing rule:

If you don't control context, you don't control outcomes.


The Core Problem

Most struggles with AI coding tools come from treating Claude like a chatbot that happens to edit files. It's not. Claude is an autonomous agent operating in your codebase with a limited context window, a constrained set of tools, and permission gates you control.

When something goes wrong, the instinct is to blame the model or try better prompts. Usually, the real issue is structural: what information Claude has, how much context has accumulated, and whether you've set up verification loops.

💡 Diagnostic First

When Claude produces bad output, ask: What context did I give it? How many turns have we been going? Can Claude verify its own work? The answer is almost always in one of those three areas.

💡 The Value of Experimentation

If you're struggling, you might be overthinking it. The best way to build intuition is to try things. Start a session, give Claude a small task, see what happens. Experiment without fear (you have version control). Real learning happens through doing, not reading about doing.


Context Management

The most common cause of poor AI output is poor context. Claude doesn't know what you know. You have to tell it. But there's a crucial balance: too little context and Claude guesses wrong; too much and it gets overwhelmed.

The CLAUDE.md File

Create a CLAUDE.md file in your project root. This is the highest-leverage configuration point you have. Instructions here get followed more reliably than anything you type in the conversation.

✓ What to include

  • Build and test commands (npm run build, pytest -xvs)
  • Branch naming conventions and merge preferences
  • Key file locations Claude can't infer
  • Project-specific gotchas and constraints
  • Environment setup quirks

✗ What to leave out

  • Style guides (let linters handle that)
  • Obvious folder descriptions
  • Generic instructions like "write clean code"
  • Entire API documentation
  • Anything over 50-100 lines total

⚠️ Chekhov's Gun Effect

Models naturally try to use all information they're given, even when irrelevant. A bloated CLAUDE.md doesn't just waste tokens; it actively degrades performance by introducing distractors. Keep it lean.
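
For a sense of scale, here is a sketch of a lean CLAUDE.md for a hypothetical Node/TypeScript project. Every command, convention, and gotcha below is a placeholder; the point is the shape, not the contents.

  # Commands
  - Build: npm run build
  - Tests: npm test (single file: npx vitest run path/to/file.test.ts)
  - Lint: npm run lint

  # Conventions
  - Branches: feature/<ticket>-<short-slug>; never commit directly to main
  - Prefer squash merges

  # Gotchas
  - The dev server reads .env.local, not .env
  - Integration tests require Docker to be running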

Context Commands

Three commands you need to internalize:

/context

Shows what's consuming your context window. Run this when Claude seems slow or unfocused. You'll often find old tool results, irrelevant file contents, or stale history eating your token budget.

/compact

Summarizes conversation history to free up tokens while preserving key information. Auto-compaction triggers around 80% capacity. Use /compact sparingly and only when your next steps are very clear, as compaction loses nuance. Starting a fresh session is often better.

/clear

Wipes conversation history completely. Use between unrelated tasks. A debugging session and a feature build shouldn't share context. Clear constantly, every 20-30 minutes of active work.

Try This

Check your current context usage:

โฏ /context

The Workflow That Works

Explore → Plan → Code → Commit

The default workflow for any non-trivial task. Skipping steps is the main source of wasted work.

1. Explore

Have Claude read and understand the relevant files first. The explicit "don't write code yet" matters; without it, Claude jumps straight to implementation.

Read src/auth/ and understand how authentication currently works. Don't write code yet.

2. Plan

Ask Claude to propose an approach before coding. Review it, push back, ask questions, refine. The words "think hard" actually allocate more reasoning compute.

Now propose how you'd implement OAuth support. Think hard about edge cases.

3. Code

Only after the plan looks good: execute. Generate code in small, verifiable chunks. Don't try to implement everything at once.

Implement step 1 from your plan: the OAuth callback handler.

4. Commit

Review, test, and commit before moving to the next piece. Small commits are your undo mechanism. If something goes wrong later, you can always revert.

Run the tests for the files you modified. If they pass, commit with a descriptive message.

💡 Plan Mode Shortcut

Use Shift+Tab twice to enter Plan mode. Go back and forth with Claude until you're satisfied with its plan, then switch to auto-accept mode. Claude can often one-shot the implementation from there.


The Context Cliff

This is one of the most important things to understand: agent performance degrades as context fills up. Use /statusline to monitor your context buffer and plan to start fresh sessions around 60%.

Symptoms of an overloaded context: responses get slower, lose focus, and start dropping or contradicting earlier instructions.

If you're hitting these symptoms after 15-20 turns on a single task, the task itself is probably too large. This isn't a soft guideline. Effective users treat context as a constraint and design around it.

Reset proactively

If you've been going back and forth for 15+ turns on a single task, something is wrong. Either the task needs decomposition, or you need to start fresh with a clear summary of where you are.

Externalize state

Use commits, checklists, or markdown files to track progress. If context resets, you have a record of what's done and what's left. The checklist becomes documentation and recovery point.

Decompose before you start

If a task will obviously consume significant context, break it into independent subtasks before starting. Use /statusline to monitor usage. Each subtask should complete well before 60% capacity.

Try This

For long tasks, start with a checklist:

โฏ Create CHECKLIST.md with every file that needs updating for this migration. We'll work through items one by one.

Verification Loops

"Probably the most important thing: give Claude a way to verify its work."

This feedback loop will 2-3x the quality of results. Without it, you're flying blind and hoping Claude got it right.

Types of Verification

🧪 Automated Tests

The gold standard. Claude excels when it has a clear target to iterate against. Each red-to-green cycle gives immediate feedback.

Try: "Write tests first for this feature. Cover the happy path and these edge cases: [list]. Then implement until tests pass."

👁️ Visual Feedback

For UI work, screenshots are essential. Claude's visual accuracy is uneven: sometimes perfect, sometimes surprisingly off. After 2-3 iterations with visual feedback, results converge to what you need.

Try: "Take a screenshot of the result. Compare to the target. What's different? Fix it."

🔍 Separate Reviewer

Use a second Claude instance (or a fresh context) to review the first one's work. Separate context matters. Claude reviewing its own work in the same session tends to defend its choices.

Try: Clear context, then: "Review this code for bugs, edge cases, and security issues. Be critical."

💡 TDD Loop

For backend logic: Write tests first → confirm they fail → implement until green → commit tests and implementation separately. This protects you from Claude solving the wrong problem.
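
One way to drive that loop is a short sequence of prompts like these; the feature and edge cases are only an example.

❯ Write tests for the rate limiter: happy path, a burst of six requests, and window reset. Don't implement anything yet.
❯ Run the tests and confirm they fail.
❯ Implement until all tests pass. Don't modify the tests.
❯ Commit the tests and the implementation as separate commits.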


Common Failure Patterns

These are the structural issues that cause most problems. Each has a mechanical fix. It's not about better prompts.

๐Ÿƒ

Premature Execution

Symptom: Claude starts coding before understanding the problem. You end up three iterations deep solving the wrong thing.
Fix: Explicit staging. "Read the relevant files and understand the current implementation. Don't write code yet." Then: "Propose a plan." Only then: "Implement."

🗑️ Context Contamination

Symptom: You debug for an hour, then switch to a new feature. Claude drags forward assumptions, references, and attention anchors from the debugging session.
Fix: /clear between unrelated tasks. Clear every 20-30 minutes during heavy use. The cost of re-establishing context is lower than the cost of degraded performance.

📋 CLAUDE.md Bloat

Symptom: Every session starts with 1500 tokens of mostly-irrelevant instructions. Claude tries to follow all of it, even when it doesn't apply.
Fix: Keep CLAUDE.md lean: essential info only, no redundancy, high signal-to-noise ratio. Point to docs during specific tasks instead of loading them into every session.

🎯 One-Shot Syndrome

Symptom: Expecting complex tasks to work on the first try. They won't: requirements are ambiguous, and edge cases emerge during implementation.
Fix: Treat first outputs as drafts. Build in verification: tests, screenshots, behavior checks. Iterate toward correctness; don't expect it immediately.

📏 Over-Specification

Symptom: 500-word prompts specifying exactly how to implement something. You've wasted effort on details Claude would figure out, and anchored it to your approach.
Fix: Specify the what, be loose on the how. "Add rate limiting, max 5 attempts per 15 minutes" beats a detailed implementation spec.

🚨 Ignoring Terminal Output

Symptom: Claude runs commands, they fail, Claude struggles to recover. You watch without reading the errors.
Fix: Read the output. You often know things Claude doesn't: wrong paths, missing env vars, version mismatches. If an issue persists for more than an hour, Claude likely misunderstood something fundamental. Revert and re-explain.

🔀 Not Using Subagents

Symptom: Complex tasks fill your context with research and dead ends. You hit the 20-turn cliff exploring instead of implementing.
Fix: "Use a subagent to research how X currently works." The subagent explores in isolation and returns only conclusions. Main context stays clean.

Quick Recovery Checklist

When things aren't working, run through this checklist:

Check context usage: Run /context (or glance at your status line) to see how full the window is. Past 60-70%, /compact if your next steps are clear, or start a fresh session with /clear.
Check for verification: Can Claude test its own work? If not, add tests or a visual feedback loop.
Reduce scope: Is the task too big? Break it into smaller, independently verifiable pieces.
Read the errors: What's actually failing? Sometimes you know the fix and Claude doesn't.
Try planning mode: Use /plan or Shift+Tab twice to enter planning mode. Have Claude propose an approach before executing. Often reveals the root issue.
Revert and restart: If stuck for more than an hour, revert to the last known-good state and try a different approach.

💡 Git as Your Safety Net

Commit good results before moving to the next step. If something goes wrong, you can revert to a known-good state. Small commits create checkpoints that make complex work recoverable.
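
In practice that means a small commit after every verified step, and a quick way back when one goes wrong. The messages and the checkpoint reference below are placeholders; run the commands yourself or ask Claude to.

  git add -A && git commit -m "oauth: callback handler, tests passing"
  git log --oneline                    # later, find the last good checkpoint
  git reset --hard <checkpoint-sha>    # and return to it if needed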

💡 The Nuclear Option

Sometimes the fastest path forward is: revert all changes, /clear, and start the task fresh with lessons learned. Rebuilding from scratch is often cheaper than trying to fix a tangled mess.
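
Concretely, the reset can be as simple as this; the restart prompt is the part that matters, and its wording here is only an example.

  git reset --hard HEAD   # drop all uncommitted changes
  git clean -fd           # delete untracked files left behind

❯ /clear
❯ Retry the OAuth work. The last attempt failed because [what you learned]. Read src/auth/ first and propose a new plan. Don't write code yet.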


Practice Makes Proficient

If you've read this far but haven't been experimenting along the way, now is the time. Open a terminal, start Claude, and try something. The gap between struggling and proficient is usually just practice.

🎯 Low-stakes experiments

  • Ask Claude to explain code you already understand
  • Have it add comments to a function
  • Request a small refactor you can easily verify
  • Write a test for existing functionality
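
A few concrete openers in that spirit; the file and function names are placeholders.

❯ Explain what src/utils/retry.ts does, function by function.
❯ Add a doc comment to parseConfig. Don't change any behavior.
❯ Write a unit test for formatDate that covers an invalid-input case.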

🧪 Safe to fail

  • Work on a branch you can delete
  • Commit before experimenting
  • Keep tasks small and reversible
  • Use /clear liberally between attempts
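
A disposable branch makes all of this genuinely reversible; the branch name is arbitrary.

  git checkout -b scratch/claude-experiment   # work happens on a throwaway branch
  # ...let Claude experiment, committing as you go...
  git checkout main                           # didn't pan out? walk away
  git branch -D scratch/claude-experiment     # and delete the branch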

The "Try This" boxes throughout this guide aren't decorative. Each one is a practical starting point for building the intuition that makes everything else click.


Where to Go Next