
System Design Interviews: A Simple Framework That Works

System design interviews are open-ended and terrifying. This framework gives you a repeatable process that works for almost any system.

January 27, 2026 · TechTwitter.io · interviews, system-design, architecture, career

Why System Design Is Different

Algorithm questions have a correct answer. System design questions don't. You're being evaluated on how you think, not whether you reach a specific solution.

The trap is jumping to technical solutions before understanding the problem. This framework prevents that.


The Framework: 6 Steps

Step 1: Clarify Requirements (5 min)

Never start designing before asking these:

Functional requirements:

  • What are the core features? (What must it do?)
  • What's explicitly out of scope?

Non-functional requirements:

  • Scale: How many users? What's the peak QPS?
  • Latency: P99 response time requirements?
  • Availability: 99.9% vs 99.99%? (3 nines vs 4 nines is 10x more expensive)
  • Consistency: Does every read need to see the latest write?
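
To make the availability numbers concrete, here's a quick sketch (plain Python, no dependencies; the "10x more expensive" figure is a rule of thumb, separate from this math) of the downtime each SLA actually permits per year:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760

def downtime_per_year(availability: float) -> str:
    """Return the downtime per year a given availability target allows."""
    hours = (1 - availability) * HOURS_PER_YEAR
    return f"{hours:.2f} h" if hours >= 1 else f"{hours * 60:.1f} min"

print(downtime_per_year(0.999))   # 3 nines -> 8.76 h per year
print(downtime_per_year(0.9999))  # 4 nines -> 52.6 min per year
```

Going from three nines to four shrinks your yearly error budget from almost nine hours to under an hour, which is why the extra nine costs so much.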

Constraints:

  • Budget? (affects technology choices)
  • Team size? (affects complexity tolerance)
  • Timeline?

Example for "Design a URL shortener":

  • 100M URLs created per day
  • 10B reads per day (redirects)
  • Short URLs should be 7 characters
  • Analytics needed? (out of scope for now)
  • Links expire? (no)

Write these on the whiteboard. Refer back to them. This shows structured thinking.


Step 2: Back-of-Envelope Estimates (3 min)

Calculate the scale. This determines your architecture:

URL Shortener example:
- Writes: 100M/day = ~1,200 QPS writes
- Reads: 10B/day = ~115,000 QPS reads
- Read:write ratio = ~100:1 (read-heavy)

Storage:
- 1 URL record ≈ 500 bytes
- 100M records/day × 365 days × 500 bytes = ~18 TB/year

This tells you: heavy caching is essential (read-heavy), database writes aren't the challenge.
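
The arithmetic above is worth sanity-checking in a few lines (a quick sketch using the numbers we assumed in Step 1):

```python
SECONDS_PER_DAY = 86_400

writes_per_day = 100_000_000     # 100M new URLs/day
reads_per_day = 10_000_000_000   # 10B redirects/day
record_bytes = 500               # rough size of one URL record

write_qps = writes_per_day / SECONDS_PER_DAY  # ≈ 1,157 QPS
read_qps = reads_per_day / SECONDS_PER_DAY    # ≈ 115,741 QPS
ratio = read_qps / write_qps                  # 100:1, read-heavy

storage_tb_per_year = writes_per_day * 365 * record_bytes / 1e12  # ≈ 18.25 TB

print(f"writes: {write_qps:,.0f} QPS, reads: {read_qps:,.0f} QPS")
print(f"read:write = {ratio:.0f}:1, storage ≈ {storage_tb_per_year:.1f} TB/year")
```

You won't have a laptop at the whiteboard, so practice doing this same math by rounding: 100M/day over ~100K seconds is ~1K QPS.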


Step 3: High-Level Design (10 min)

Draw the basic components. No details yet, just boxes and arrows:

Client → Load Balancer → App Servers → Cache (Redis)
                                     ↓ (cache miss)
                                     Database

Walk through a happy-path request end-to-end:

"A user pastes a URL, it hits the load balancer, routes to an app server, which generates a 7-char hash, stores it in the DB and cache, returns the short URL."


Step 4: Deep Dive (20 min)

Pick 2-3 components to go deep on. Let the interviewer guide you, but have opinions:

Key design decisions to discuss:

Hash generation:

Option A: MD5(long_url), take the first 7 chars
Problem: truncating to 7 chars invites collisions between different URLs; determinism alone doesn't save you
Better: counter-based IDs or UUIDs + base62 encoding
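
Here's what counter-based IDs with base62 encoding look like (a minimal sketch; a real deployment would hand out counter ranges from a coordination service so app servers never allocate the same ID):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def base62(n: int) -> str:
    """Encode a non-negative integer in base62: distinct IDs always yield distinct codes."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 62)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

# 62**7 ≈ 3.5 trillion codes fit in 7 characters -- decades of headroom at 100M/day
print(base62(125))             # "21"
print(len(base62(62**7 - 1)))  # 7
```

Because IDs come from a counter, collisions are impossible by construction; the trade-off is that sequential codes are guessable, which you can mitigate by mixing in a secret offset.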

Database:

SQL (PostgreSQL): Simple, ACID, good for writes
NoSQL (DynamoDB): Better horizontal scaling for reads
Given our read-heavy profile: SQL + aggressive caching

Caching:

Cache the redirect mappings in Redis
TTL = none (short URLs don't expire); rely on an LRU eviction policy to cap memory
Cache hit rate should be 80-90%+ (hot URLs)
Cache miss: query DB, populate cache, redirect

Step 5: Handle Scale and Failures (10 min)

Ask yourself: "What breaks at 10x traffic?"

  • Single DB: Add read replicas. Shard by hash prefix for writes.
  • Hot URLs: No problem โ€” they're cached. Cold start: first request hits DB.
  • Cache failure: Fall through to DB. Latency spikes but system stays up.
  • App server failure: Load balancer detects via health check, stops routing.
  • Hash collisions: Retry with a salted rehash, or append a counter to the input and hash again.
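
Shard-by-prefix routing is simple to express (a sketch with an assumed shard count; real deployments often use consistent hashing so adding a shard doesn't reshuffle every key):

```python
NUM_SHARDS = 4  # assumed shard count, for illustration only

def shard_for(short_code: str) -> int:
    """Route a short code to a shard by its first character (the 'hash prefix')."""
    return ord(short_code[0]) % NUM_SHARDS

print(shard_for("abc1234"))  # every code starting with 'a' lands on the same shard
```

Because base62 codes are roughly uniform in their first character, this spreads writes evenly across shards.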

Step 6: Summarize and Discuss Trade-offs

Briefly recap what you built and what you'd do differently with more time:

"We built a URL shortener that handles 115K QPS reads using Redis caching in front of a PostgreSQL database with read replicas. The main trade-off is that we're prioritizing availability over strong consistency: a user might hit a stale cache entry for a few seconds after updating a URL. For a URL shortener, that's acceptable. If we needed strong consistency, we'd use a different caching strategy."


Common Systems to Practice

Practice designing these until they're familiar:

  • URL shortener: hash generation, caching, redirect speed
  • Rate limiter: token bucket vs leaky bucket, distributed counters
  • News feed: fan-out on write vs fan-out on read, cache invalidation
  • Notification system: push delivery, reliability, at-least-once vs exactly-once
  • Typeahead/autocomplete: trie vs inverted index, latency requirements
  • File storage (Dropbox): chunking, dedup, sync protocol
  • Web crawler: BFS, politeness, dedup, storage

What Interviewers Are Looking For

  • Structured thinking: do you follow a process or ramble?
  • Trade-off awareness: do you acknowledge the pros and cons of your choices?
  • Scale intuition: do you understand what changes at 100x traffic?
  • Communication: can you explain your design clearly?
  • Receptivity: do you update your design when given new constraints?

Key Takeaways

  1. Clarify requirements first, always
  2. Estimate scale before designing
  3. Draw the high-level design before going deep
  4. Deep dive on 2-3 critical components
  5. Discuss failures and scaling
  6. Acknowledge trade-offs explicitly: "we chose X because of Y, which means we lose Z"