Product Sense Interviews
Product sense interviews evaluate structured thinking, user empathy, and decision-making ability. The interviewer assesses the reasoning process, not the specific solution proposed.
Answer Structure
Product sense answers follow a consistent structure:
| Step | Duration | Purpose |
|---|---|---|
| Clarifying questions | 2-3 min | Avoid assumptions, scope the problem |
| User selection | 2-3 min | Define the target user segment |
| Problem identification | 3-4 min | Identify the underlying pain point |
| Solution brainstorming | 5-7 min | Generate 3-5 distinct solutions |
| Solution selection | 3-4 min | Choose one solution with clear rationale |
| Success metrics | 2-3 min | Define how to measure success |
Total time: roughly 20-25 minutes, including transitions between steps.
Step 1: Clarifying Questions
Ask 2-3 questions to establish scope and context. Avoid excessive questioning.
Example clarifying questions:
- Platform focus (mobile, web, or both)
- Geographic scope
- Target user demographic
- Business goals (growth, retention, revenue)
Step 2: User Selection
Select a specific user segment. Avoid designing for "everyone."
| Weak User Definition | Strong User Definition |
|---|---|
| "Busy professionals" | "Working parents with children under 5 who lack reliable childcare" |
| "Young people" | "College students living off-campus without personal transportation" |
| "Users who want to save money" | "First-time homebuyers with $50K-$100K household income" |
Selection criteria for user segment:
- Segment with the most acute problem
- Segment the interviewer's company likely prioritizes
- Segment where you have domain knowledge
Step 3: Problem Identification
Identify the underlying pain point, not a feature request.
| Surface Request | Underlying Problem |
|---|---|
| "Users want faster load times" | Users abandon tasks when perceived wait exceeds expectations |
| "Users want more filters" | Users cannot find relevant content efficiently |
| "Users want notifications" | Users miss time-sensitive information |
Step 4: Solution Brainstorming
Generate 3-5 solutions that address the identified problem. Solutions should be distinct, not variations of the same approach.
Step 5: Solution Selection
Select one solution and provide clear rationale. State the decision explicitly.
| Weak Conclusion | Strong Conclusion |
|---|---|
| "It depends on company priorities" | "I recommend Solution A because it directly addresses the core problem with lowest implementation complexity" |
| "All of these could work" | "Solution B is my recommendation. It differentiates from competitors and aligns with the company's mobile-first strategy" |
Step 6: Success Metrics
Define 1-2 metrics that capture whether the solution achieved its goal, optionally with a guardrail metric to catch unintended side effects; the sketch after the table shows how such metrics might be computed.
| Solution Type | Primary Metric | Secondary Metric |
|---|---|---|
| Engagement feature | Weekly active users | Sessions per user |
| Conversion improvement | Conversion rate | Time to conversion |
| Retention feature | Day 7/30 retention | Churn rate |
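The metric names in the table above hide concrete definitional choices (retention window, what counts as "active"). Below is a minimal sketch of one way to compute Day-N retention and weekly active users, assuming a hypothetical in-memory `signups`/`events` log; real definitions vary by company and would run against a data warehouse.

```python
# Illustrative only: metric definitions differ across companies.
# Assumes a hypothetical in-memory event log rather than a real pipeline.
from collections import defaultdict
from datetime import date, timedelta

signups = {"u1": date(2024, 1, 1), "u2": date(2024, 1, 1), "u3": date(2024, 1, 2)}
events = [("u1", date(2024, 1, 8)), ("u2", date(2024, 1, 31)), ("u3", date(2024, 1, 3))]

activity = defaultdict(set)          # user_id -> set of active dates
for user, day in events:
    activity[user].add(day)

def day_n_retention(n: int) -> float:
    """Share of signed-up users active exactly N days after signup
    (a strict definition; many teams use a window such as days N to N+2)."""
    retained = sum(1 for u, signup in signups.items()
                   if signup + timedelta(days=n) in activity[u])
    return retained / len(signups)

def weekly_active_users(week_start: date) -> int:
    """Distinct users with at least one event in the 7-day window."""
    window = {week_start + timedelta(days=i) for i in range(7)}
    return sum(1 for days in activity.values() if days & window)

print(f"Day 7 retention:  {day_n_retention(7):.0%}")    # 33%
print(f"Day 30 retention: {day_n_retention(30):.0%}")   # 33%
print(f"WAU, week of Jan 1: {weekly_active_users(date(2024, 1, 1))}")  # 1
# Churn over a period is the complement of retention: churn = 1 - retention.
```

In the interview itself, naming the definitional choice (calendar week vs. rolling window, strict day-N vs. windowed retention) signals rigor without requiring any code.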
Common Mistakes
| Mistake | Description |
|---|---|
| Jumping to solutions | Proposing features before establishing user and problem |
| Generic users | Designing for "users" instead of a specific segment |
| No decision | Listing options without selecting one |
| Feature list | Proposing many features instead of 3-5 thoughtful solutions |
| Scope creep | Attempting to design the complete product instead of v0.1 |
Question Types
"Design a product for X"
Build from scratch. Prioritize user and problem definition before features.
"Improve Y"
Improve an existing product. Analyze current problems before adding features. Consider removing features or fixing flows.
"Favorite product"
Select a product with specific, defensible opinions. Avoid generic selections (Spotify, Notion) with generic praise.
"Should company X do Y?"
Strategy question format. Frame around company goals, user needs, and competitive position.
Worked Example
Question: "Design a product to help people learn a new language."
Clarifying questions:
- Platform focus? (Mobile app)
- Any specific language or learner context? (Interviewer's choice)
User selection: "Working professionals learning a language for career advancement. Motivated but time-constrained, with approximately 15 minutes available during commute. High commitment but competing priorities."
Problem identification: "The core problem is maintaining consistency. Existing apps feel like obligations: missing a day creates guilt, and that guilt leads to abandonment. The failure mode is not learning difficulty, but life interfering without a recovery mechanism."
Solutions:
- Context-relevant lessons - Learning tied to real work situations rather than abstract exercises
- Flexible streaks - User-defined cadence (3x per week) instead of mandatory daily use
- Passive audio content - 5-minute morning briefings for low-effort progress
- AI conversation practice - Real conversation practice on work-relevant topics
Recommendation: "Context-relevant lessons. This approach differentiates from competitors, directly addresses the motivation problem, and is technically feasible. Flexible streaks are low-effort and should be included as well. Audio briefings are a v2 consideration."
Metrics:
- Primary: Weekly active learners (matches target cadence; see the sketch below)
- Secondary: Lessons completed per user per week
- Guardrail: Week 4 churn rate
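A hedged sketch of how the primary metric above might be operationalized: counting learners who hit the target cadence of three active days per week, matching the flexible-streak proposal. The `sessions` data and the three-day threshold are assumptions for illustration, not part of the original answer.

```python
# Hypothetical instrumentation: learners with >= 3 distinct active days in a week,
# matching the 3x-per-week flexible-streak cadence proposed in the answer.
from datetime import date, timedelta

sessions = {                                    # user_id -> distinct active dates
    "anna": {date(2024, 3, 4), date(2024, 3, 6), date(2024, 3, 8)},
    "ben": {date(2024, 3, 5)},
}

def active_learners_at_cadence(week_start: date, min_days: int = 3) -> int:
    """Learners with at least `min_days` active days in the 7-day window."""
    window = {week_start + timedelta(days=i) for i in range(7)}
    return sum(1 for days in sessions.values() if len(days & window) >= min_days)

print(active_learners_at_cadence(date(2024, 3, 4)))  # 1 -- only anna hits 3x/week
```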
Company-Specific Product Approaches
Stripe
Focus on developer experience. Product decisions prioritize minimal friction over feature breadth. API simplicity is the product.
Airbnb
Uses "11-star experience" brainstorming. Escalating experience quality (5-star, 7-star, 11-star) reveals underlying user needs rather than stated feature requests.
Amazon
Works backward from the press release. Product proposals begin with the customer announcement, not the implementation plan. A boring press release indicates a weak product concept.
Spotify
Jobs-to-be-done framework. Users hire Spotify for specific tasks: focus during work, energy during exercise, mood regulation. Discover Weekly succeeded by addressing "find new music I like without effort."
Instagram (historically)
Single core action focus. Every decision evaluated against "does this make posting a photo easier or harder?" Features conflicting with the core loop were eliminated.