How I use Claude to prep for system design interviews
April 4, 2026 · Claude · 8 min read


I used Claude to prepare for senior engineering interviews at three companies over six weeks. My system design scores went from "average" on my first attempt to "strong hire" on the last two. The difference was not cramming more theory — it was practicing structured conversations about tradeoffs with an infinitely patient interviewer who never got bored of re-running the same scenario a different way.

Why standard study resources fall short

Books like "System Design Interview" are excellent references. But reading about designing a URL shortener is not the same as actually explaining it out loud under pressure. The gap is practice — specifically, practice that feels like the real thing: open-ended questions, follow-ups that probe your assumptions, and someone pushing back when you gloss over a tricky tradeoff.

Claude can play the role of an interviewer convincingly. And unlike a mock interview with a friend or colleague, you can do it at 11pm on a Wednesday when the company has a hiring freeze but you still want to stay sharp.

The mock interview prompt

I use this prompt to start a session:

prompt
You are a senior staff engineer at a large tech company conducting a 45-minute system design interview. I am a candidate with 5 years of backend experience.

Interview question: Design a distributed rate limiter that can handle 10 million requests per second across 50 microservices.

Rules:
- Ask clarifying questions naturally, as an interviewer would
- Wait for my answers before proceeding
- Probe my assumptions — if I say "use Redis", ask why and what the tradeoffs are
- Don't give me hints unless I am clearly stuck for more than 2 minutes
- At the end, give me honest feedback: what I did well, what was weak, what a strong candidate would have said that I did not

Start now by asking your first clarifying question.

The result is a genuine back-and-forth. When I said "I would use Redis for storing counters," Claude asked: "Redis is in-memory — what happens to your rate limit state if the Redis node restarts? How does that affect your users?" That is exactly the kind of follow-up a real interviewer would ask. I had not prepared for it. I fumbled the answer. I now have a crisp answer ready.
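The failure mode behind that follow-up is easy to make concrete. Here is a minimal sketch of my own (a dict-backed fixed-window counter standing in for Redis): restart the process and every counter resets to zero, so every client's quota silently refills.

```python
import time

class FixedWindowCounter:
    """Naive in-memory rate limiter. All state lives in a dict, so a
    process restart wipes every counter -- the exact weakness the
    interviewer's Redis question was probing."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = {}  # (client_id, window_index) -> request count

    def allow(self, client_id, now=None):
        now = time.time() if now is None else now
        window_index = int(now // self.window)
        key = (client_id, window_index)
        count = self.counts.get(key, 0)
        if count >= self.limit:
            return False  # quota exhausted for this window
        self.counts[key] = count + 1
        return True

limiter = FixedWindowCounter(limit=3, window_seconds=60)
results = [limiter.allow("user-1", now=100.0) for _ in range(5)]
# First three requests allowed, then denied within the same window
```

The crisp answer I prepared afterwards: Redis with `INCR` plus `EXPIRE` centralises this state, and then you decide whether to enable persistence (slower, survives restarts) or accept that a restart briefly resets quotas (faster, usually acceptable for rate limiting).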

Drilling specific tradeoffs

After a mock interview, I use Claude to go deep on the areas where I struggled:

prompt
In my mock interview I said I would use a token bucket algorithm for rate limiting. I got pushed on consistency vs availability tradeoffs across distributed nodes.

Explain:
1. Why token bucket is good and where it falls short at scale
2. How sliding window log and sliding window counter compare to token bucket
3. The specific tradeoff between strict consistency (all nodes agree on remaining quota) vs eventual consistency (tolerate brief over-quota requests)
4. A concrete example: if I have 100 nodes and 1000 req/sec per user limit, what goes wrong with each approach?
5. What large companies (Stripe, Twitter, Cloudflare) actually do in production — cite any public engineering blog posts you know about

This gives me a deep-dive briefing I can re-read before the next mock. The fifth point is important — Claude sometimes gets specific details wrong, but it reliably points me towards the right companies to research, and I can verify the specifics on those companies' engineering blogs.
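It also helps to have the algorithm itself at your fingertips. Here is a minimal single-node token bucket I sketched while drilling this (my own naming, not from any of the sources above):

```python
import time

class TokenBucket:
    """Single-node token bucket: `capacity` bounds burst size,
    `refill_rate` bounds sustained throughput. Distributing it is the
    hard part -- with one independent bucket per node, N nodes can
    collectively admit up to N times the intended quota."""

    def __init__(self, capacity, refill_rate, now=None):
        self.capacity = capacity
        self.refill_rate = refill_rate  # tokens added per second
        self.tokens = capacity
        self.last_refill = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last_refill
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last_refill = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Working through point 4 with this in hand: give each of 100 nodes the full 1000 req/sec bucket and a user can be admitted at up to 100,000 req/sec in the worst case; split the quota to 10 req/sec per node and users whose traffic is unevenly routed get throttled well below their limit. That gap between over-admitting and under-serving is the consistency vs availability tradeoff in miniature.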

Diagram critique

I draw architecture diagrams using ASCII art or describe them in text, then ask Claude to critique them:

prompt
Here is my proposed architecture for a notification system (10M users, real-time push + email + SMS):

Client -> API Gateway -> Notification Service -> [Kafka topic: notifications]
                                                       |
                              +-----------------------+-----------------------+
                              |                       |                       |
                         Push Worker            Email Worker             SMS Worker
                              |                       |                       |
                         Firebase FCM           SendGrid API            Twilio API
                              |                       |                       |
                         User Devices           User Inbox             User Phone

User preferences stored in Postgres.
Delivery receipts written back to Postgres via the workers.

Critique this design. What will break at scale? What am I missing?

Claude's critique typically covers: the Postgres write bottleneck on delivery receipts at 10M users, missing dead-letter queue handling, no mechanism for notification deduplication, preference lookup on every message (should be cached), and no rate limiting on outbound channels. Each of these is a real interviewer follow-up waiting to happen.
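One of those fixes is cheap to sketch, which makes it a good thing to volunteer in the interview. A read-through cache with a TTL in front of the preference store (illustrative names; in production the loader would be the Postgres query and the cache would likely be Redis or per-worker memory):

```python
import time

class PreferenceCache:
    """Read-through TTL cache in front of the user-preference store, so
    workers don't hit Postgres once per message across 10M users."""

    def __init__(self, load_fn, ttl_seconds=60.0):
        self.load_fn = load_fn   # e.g. a Postgres query, stubbed in tests
        self.ttl = ttl_seconds
        self.entries = {}        # user_id -> (expires_at, prefs)

    def get(self, user_id, now=None):
        now = time.monotonic() if now is None else now
        hit = self.entries.get(user_id)
        if hit and hit[0] > now:
            return hit[1]                    # fresh cache hit
        prefs = self.load_fn(user_id)        # miss or expired: go to the DB
        self.entries[user_id] = (now + self.ttl, prefs)
        return prefs
```

The TTL is the tradeoff to narrate out loud: a 60-second window means a user who mutes notifications might receive one more message before the change takes effect, in exchange for cutting preference reads by orders of magnitude.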

The "what would you do differently" follow-up

After any design session I use this prompt:

prompt
Given the design above, what would a staff engineer with experience at a company that actually runs this at scale change? Be specific about:
1. What they would simplify (over-engineering I have introduced)
2. What they would add (gaps I have left)
3. What they would phase — what gets built in v1 vs v2 vs v3

The phasing question is one of the most common things interviewers probe. "You have designed a perfect system — how would you build it in practice?" Candidates who can articulate a phased rollout with clear v1 scope stand out.

Building a tradeoff vocabulary

System design interviews are fundamentally about communicating tradeoffs clearly. I used Claude to build a vocabulary for common ones:

prompt
For each of the following tradeoffs, give me:
- A one-sentence summary of the tradeoff
- A concrete example of when you would choose each side
- The exact words a senior engineer would use to explain the tradeoff in an interview

Tradeoffs:
1. Consistency vs availability (CAP theorem)
2. Normalisation vs denormalisation
3. Push vs pull for event delivery
4. Synchronous vs asynchronous processing
5. In-process cache vs distributed cache

I kept this response as a cheat sheet and reviewed it before each interview. Having crisp, precise language for tradeoffs impresses interviewers as much as knowing the right answer. Most candidates know that Redis is faster than Postgres. Fewer can articulate precisely why and under what circumstances that performance gap matters.
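Tradeoff 5 is a good example of where numbers sharpen the articulation. A back-of-envelope sketch (the latency figures are rough, commonly cited ballparks, not measurements):

```python
# Why an in-process cache beats a distributed one per lookup, and why
# the distributed one can still win overall.
IN_PROCESS_LOOKUP_S = 100e-9   # ~100 ns: hash lookup in local memory
SAME_DC_ROUNDTRIP_S = 500e-6   # ~0.5 ms: a network round trip to Redis

lookups_per_request = 5
in_process_cost = lookups_per_request * IN_PROCESS_LOOKUP_S
distributed_cost = lookups_per_request * SAME_DC_ROUNDTRIP_S

# Thousands of times more expensive per lookup...
ratio = distributed_cost / in_process_cost
# ...but the in-process cache is duplicated on every node and can serve
# stale data after a write -- the other side of the tradeoff.
```

Being able to say "a local lookup is nanoseconds, a cache round trip is a fraction of a millisecond, so I'd keep hot read-only data in process and put shared mutable state behind the network" is exactly the crisp language the prompt above is designed to produce.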

The six-week schedule

What actually worked for me:

  • Weeks 1-2: Mock interview every day on a new topic. Record what I got wrong.
  • Weeks 3-4: Deep dives on my weak areas using the tradeoff drill prompt.
  • Week 5: Mixed sessions — mock interviews with follow-up deep dives in the same session.
  • Week 6: Full mock interviews with no help, rated out of 10 at the end. Stop when consistently hitting 8+.

One session per day, 45-60 minutes each. That is all. The compounding effect of doing this daily — building vocabulary, filling gaps, practicing articulation — is what moves the needle. Reading alone does not.
