Stop prompting Claude Code - let it interview you

Help me help YOU
1.3+ million views. 10,000+ bookmarks. In three days.
That's what happened when Thariq, an Anthropic engineer working on Claude Code, shared a single technique. The replies were full of people calling it a "game changer" and a "superpower".
The technique is deceptively simple: instead of telling Claude what to build, you let Claude interview YOU first.
The problem
You give AI a prompt, it builds something, and it's not quite right. Maybe it missed an edge case. Maybe it made assumptions you didn't want. Or maybe it completely misunderstood and went in the wrong direction.
So you go back and forth, hoping each iteration will be the one that finally gets it right.
The issue is ambiguity. When requirements are vague, the model fills in the blanks - and it'll fill them in differently than you would.
The solution: flip it
Thariq's technique flips this completely. Instead of you prompting the AI, the AI prompts you.
Here's the prompt:
Read @SPEC file and interview me in detail using the AskUserQuestionTool about literally anything. Be very in-depth and continue interviewing me until it's complete. Then write the spec to the file.
And here's the workflow:
- Start with a minimal spec - even one sentence works
- Claude interviews you with probing questions
- After 10, 20, sometimes 40+ questions, you have a detailed spec YOU control
- Start a fresh session and tell Claude to implement the spec
That last step matters. Starting fresh resets context so Claude doesn't carry over any biases from the interview.
In practice
I started with the most minimal spec possible. One sentence in a file called SPEC.md:
Build a CLI tool to save and search bookmarks.
That's it. Then I pasted the prompt and let Claude interview me.
Claude didn't just start building. It started asking questions. What happens to stale links? What's the search scope? How should duplicates be handled? And so on.
32 questions later, I had a 157-line spec file.
It covered the tech stack, data model, commands and how to use them, output format, storage format, what happens on first run, and even a list of non-goals - things deliberately left out of scope.
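To give a flavor of the shape (these snippets are illustrative, not copied verbatim from my file):

## Data model
A bookmark is a URL, an optional title, a set of tags, and a created-at timestamp.

## First run
If the storage file doesn't exist, create it silently. Never prompt.

## Non-goals
No sync. No browser extension. No import from other tools.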
Answering the questions took me a few minutes. Writing the spec myself would have taken a couple of hours - first thinking through all the edge cases, then documenting them. And I still would have missed things.
Then I started a fresh Claude Code session and typed: Implement @SPEC.md
One shot. About five minutes later, I had a working bm CLI tool. bm add, bm list - everything worked exactly as specified.
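If you're curious, a session looks like any small CLI. The flags and output below are illustrative - your spec dictates the exact format:

$ bm add https://example.com --tags reading
Saved #1: https://example.com [reading]

$ bm search example
#1 https://example.com [reading]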
Why it works
The back and forth forces you to clarify your thinking before any code gets written.
Claude asked me 32 questions for the bookmark manager. Some I expected - it was my idea, after all. But maybe a third of them either went deeper than I'd been thinking or surprised me outright, and they were genuinely good questions.
For complex features, you might get 40 to 60 questions; I usually get 20 to 30. That sounds like a lot, but you're front-loading all the thinking. No more "oh wait, what about..." after you've already built half the feature and have to re-architect everything.
Making it better
Once you've used this a few times, there are ways to improve it.
Add anti-goals. Tell Claude what NOT to do so it doesn't go down rabbit holes. "Don't add authentication. Don't over-engineer the database. I'm the only user."
Sometimes skip it entirely. If you already have clear requirements, it's a small change, or you know exactly what you want - just build it. This technique shines when there's ambiguity. When there's none, it's overkill.
Turn it into a slash command
To make this even easier, turn it into a slash command you can use anytime in any project.
Create a file at ~/.claude/commands/interview.md:
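Read the spec file at $ARGUMENTS and interview me in detail using the AskUserQuestionTool about literally anything. Be very in-depth and continue interviewing me until it's complete. Then write the spec to the file.

That's the interview prompt from earlier, with Claude Code's $ARGUMENTS placeholder in place of the hardcoded filename - whatever you type after the command gets substituted in.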
Now you can just type /interview SPEC.md and it works. One command. Interview mode. Available in all your projects.
Try it
Try this on your next feature. Even a small one. I've been using it on 90% of my tasks since first trying it.
