Often, it either generates convoluted implementations when simpler ones clearly exist, or it produces code that's riddled with bugs — despite confidently claiming it's correct. I'm wondering if I'm just not using it properly.
Here's my current workflow:
- I first talk to Gemini to gradually clarify and refine my requirements and design.
- I ask Gemini to summarize everything. Then I review and revise that summary.
- I paste the final version into Claude Code, use plan mode, and ask it to generate an implementation plan.
- I review the plan, make adjustments, and then let Claude Code execute it.
- Long wait…
- Review Claude’s output and clean up the mess.
For refactors and bugfixes, I usually write some tests in advance. But for new features, I often don’t.
It often feels like opening a loot box — 50% of the time it does a decent job, the other 50% is pretty bad. I really want to understand how to use it properly to achieve the kind of magical experience people describe.
Also, I’m on the Pro plan, and I rarely hit the rate limit — mainly because there’s a lot of prep work and post-processing I need to do manually. I’m curious about those who do hit rate limits quickly: are you running lots of tasks in parallel? Machines can easily parallelize, sure — but I don’t know how to make myself work in parallel like that.
1. Be uncomfortably explicit in prompts: Claude Code in particular is very sensitive to ambiguity. When I write a prompt, I’ll often:
- Specify coding style, performance constraints, and even "avoid X library" if needed.
- Give sample input/output (even hand-written).
- Explicitly state: "Prefer simplicity and readability over cleverness."
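As an illustration, a prompt that follows all three rules might look like this (the task, path, and library name are invented for the example):

```
Write a function that parses ISO-8601 timestamps into UTC datetimes.
- Follow the existing code style in src/utils/.
- Do not add new dependencies; avoid the arrow library.
- Prefer simplicity and readability over cleverness.
Sample input:  "2024-03-01T12:00:00+02:00"
Sample output: datetime(2024, 3, 1, 10, 0, 0, tzinfo=timezone.utc)
```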
2. Break down problems more than feels necessary: If I give Claude a 5-step plan and ask for code for the whole thing, it often stumbles. But if I ask for one function at a time, or have it generate stub functions first, then fill in each one, the output is much more solid.
3. Always get it to generate unit tests (and run them immediately): I now habitually ask: "Write code that does X. Then, write at least 3 edge-case unit tests." Even if the code needs cleanup, the tests usually expose the gaps.
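To make "code plus three edge-case tests, run immediately" concrete, here's a self-contained sketch. The `slugify` function and its edge cases are invented for illustration; the point is the shape of the ask and the immediate run:

```shell
# Hypothetical example: the kind of output I ask for, then execute right away.
cat > slug.py <<'EOF'
import re

def slugify(s):
    # lowercase, replace runs of non-alphanumerics with "-", trim dashes
    return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
EOF

cat > test_slug.py <<'EOF'
from slug import slugify

assert slugify("") == ""                          # empty input
assert slugify("Hello, World!") == "hello-world"  # punctuation
assert slugify("a   b") == "a-b"                  # repeated whitespace
print("all edge-case tests passed")
EOF

python3 test_slug.py   # prints "all edge-case tests passed"
```

Even when the implementation needs cleanup, a failing assertion here is a much faster signal than reading the code line by line.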
4. Plan mode can work, but tighten the plan by hand first: I’ve found Claude’s “plan” sometimes overestimates its own reasoning ability. After it makes a plan, I’ll review and adjust before asking for code generation. Shorter, concrete steps help.
5. Use “summarize” and “explain” after code generation: If I get a weird/hard-to-read output, I’ll paste it back and ask “Explain this block, step by step.” That helps catch misunderstandings early.
Re: Parallelization and rate limits: I suspect most rate-limit hitters are power-users running multiple agents/tools at once, or scripting API calls. I’m in the same boat as you — the limiting factor is usually review/rework time, not the API.
Last tip: I keep a running doc of prompts that work well and bad habits to avoid. When I start to see spurious/overly complex output, it’s nearly always because I gave unclear requirements or tried to do too much in one message.
I run tasks in parallel and definitely hit the rate limits.
Do I need to explicitly tell Claude to read CLAUDE.md at the start of every session for it to consistently follow those preferences?
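For what it's worth, my understanding is that Claude Code loads CLAUDE.md from the project root (and a global ~/.claude/CLAUDE.md) automatically at the start of each session, so you shouldn't need to point it there every time. A minimal sketch, where the specific preferences are just examples:

```markdown
# CLAUDE.md

## Code style
- Prefer simplicity and readability over cleverness.
- Do not add new dependencies without asking first.

## Workflow
- Run the test suite after every change.
- Propose a plan before touching more than one file.
```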
Claude successfully makes code edits for me 90% of the time. Without knowing your setup, my two biggest pieces of advice are:
1. Break your task into smaller chunks: 30 minutes' worth of human coding, max.
2. On larger code bases, give it hints from git about which files to edit.
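To make the second point concrete, here's the kind of git spelunking I mean, sketched in a throwaway repo (the paths and the `parse` symbol are invented for illustration):

```shell
# Build a disposable repo so the example is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
mkdir -p src
printf 'def parse(line):\n    return line.split(",")\n' > src/parser.py
git add -A && git commit -qm "add parser"

# Hints worth pasting into the prompt so Claude knows where to look:
git log --oneline -n 5 -- src/   # recently touched files under src/
git grep -l parse -- src/        # files that mention the symbol in question
```

Pasting the file list from either command into the prompt keeps Claude from wandering across the whole tree.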
Besides this: I get great results in combination with https://github.com/BeehiveInnovations/zen-mcp-server. YMMV, of course, and it also requires o3 and Gemini API keys, but the token usage is really low and the workflow works great when used properly.
Are you using git? As in "git checkout ." to discard everything, or "git checkout -b claude-trial-run" to experiment on a branch?
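The throwaway-branch version of that can be sketched end-to-end in a temp repo (the file name and branch name are invented for illustration):

```shell
# Disposable repo so the sketch is self-contained.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "v1" > app.txt
git add -A && git commit -qm "baseline"
base=$(git symbolic-ref --short HEAD)   # main or master, depending on git config

git checkout -q -b claude-trial-run     # let Claude Code work on this branch
echo "v2 (claude's edit)" > app.txt
git commit -qam "claude changes"

# Unhappy with the result? Drop the whole experiment:
git checkout -q "$base"
git branch -D claude-trial-run >/dev/null
cat app.txt                             # prints "v1"
```

The happy path is symmetric: `git checkout "$base" && git merge claude-trial-run` if the run turned out well.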