Product Management Concept Questions
This section covers common conceptual questions in PM interviews with structured response frameworks.
Product Sense
Q1: How do you prioritize features?
Frameworks:
| Framework | Formula | Use Case |
|---|---|---|
| RICE | (Reach x Impact x Confidence) / Effort | Quantitative comparison |
| ICE | Impact x Confidence x Ease | Simpler scoring |
| Value vs Effort | 2x2 matrix | Visual prioritization |
Value vs Effort quadrants:
| Quadrant | Characteristics | Action |
|---|---|---|
| Quick wins | High value, low effort | Do first |
| Big bets | High value, high effort | Plan carefully |
| Fill-ins | Low value, low effort | If time permits |
| Money pits | Low value, high effort | Do not do |
Anchor prioritization to business goals, user needs, and strategic fit.
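A minimal sketch of the RICE formula above, to show how the scoring ranks a backlog. The feature names and estimates are hypothetical placeholders, not a real dataset:

```python
# RICE scoring sketch: (Reach x Impact x Confidence) / Effort.
# Feature names and estimates below are hypothetical.
features = [
    # (name, reach/quarter, impact 0.25-3, confidence 0-1, effort in person-months)
    ("saved-searches", 5000, 2.0, 0.8, 3),
    ("dark-mode",      9000, 0.5, 1.0, 2),
    ("bulk-export",    1200, 3.0, 0.5, 5),
]

def rice(reach, impact, confidence, effort):
    return (reach * impact * confidence) / effort

# Highest score first; ICE works the same way minus the reach/effort split.
for name, *scores in sorted(features, key=lambda f: rice(*f[1:]), reverse=True):
    print(f"{name}: RICE = {rice(*scores):.0f}")
```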
Q2: How do you decide what to build?
Process:
| Step | Action |
|---|---|
| 1 | Understand the goal: What business outcome are we seeking? |
| 2 | Identify user problems: What pain points exist? |
| 3 | Generate solutions: Multiple approaches to solve the problem |
| 4 | Evaluate: Impact, effort, risk, strategic fit |
| 5 | Validate: Test assumptions before building |
Starting point: Always begin with problem identification, not a solution.
Q3: What makes a good product?
Criteria:
| Criterion | Definition |
|---|---|
| Solves a real problem | Addresses genuine user need |
| Provides value | Users would miss it if removed |
| Is usable | Users can accomplish goals |
| Is delightful | Creates a positive experience beyond the purely functional |
| Is sustainable | Business model works |
Q4: How do you know if a feature is successful?
Success criteria categories (defined before launch):
| Category | Metrics |
|---|---|
| Adoption | % of users who try feature |
| Engagement | Usage frequency and depth |
| Retention | Continued usage over time |
| Business | Revenue, conversion impact |
| Quality | Sentiment, bug rate |
Example (new search feature):
- Adoption: 70% of users try it within the first week
- Engagement: 3+ searches per session
- Retention: Return rate increases by 5%
- Quality: No increase in support tickets
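One way to make pre-launch criteria like these operational is a simple pass/miss check against observed launch metrics. A sketch mirroring the example above; the observed values are made up:

```python
# Pre-registered success criteria for the hypothetical search feature,
# compared against (made-up) observed launch metrics.
criteria = {
    "adoption_week1_pct":   (lambda v: v >= 70, "% of users trying search in week 1"),
    "searches_per_session": (lambda v: v >= 3,  "searches per session"),
    "retention_lift_pct":   (lambda v: v >= 5,  "return-rate lift (%)"),
    "support_ticket_delta": (lambda v: v <= 0,  "change in support tickets"),
}
observed = {
    "adoption_week1_pct": 64.0,
    "searches_per_session": 3.4,
    "retention_lift_pct": 5.2,
    "support_ticket_delta": -1.0,
}

for key, (passes, label) in criteria.items():
    status = "PASS" if passes(observed[key]) else "MISS"
    print(f"{status}  {label}: {observed[key]}")
```

Writing the thresholds down before launch prevents post-hoc goalpost moving.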
Q5: How do you handle conflicting stakeholder priorities?
Resolution process:
| Step | Action |
|---|---|
| 1 | Understand underlying needs (why do they want this?) |
| 2 | Find common ground (what goals do they share?) |
| 3 | Use data to inform decision |
| 4 | Align on criteria before deciding |
| 5 | Escalate when consensus not possible |
Key principle: Separate positions from interests. Different feature requests may serve the same underlying goal.
Metrics & Analytics
Q6: How would you measure success for [YouTube/Netflix/Spotify]?
Framework: North Star + Supporting Metrics + Guardrails
YouTube example:
| Category | Metrics |
|---|---|
| North Star | Total watch time |
| Supporting | DAU/MAU, watch time per user, videos watched, creator uploads |
| Guardrails | User satisfaction, content quality, creator monetization |
Rationale: Watch time captures value better than view counts (duration matters) and is a leading indicator of ad revenue.
Q7: A key metric dropped 10%. How do you investigate?
Structured debugging:
| Step | Action |
|---|---|
| 1. Verify | Is the data correct? Instrumentation issues? |
| 2. When | When did it start? Correlate with releases/events |
| 3. Who | Which segments affected? |
| 4. What | What related metrics also changed? |
| 5. Why | Form and test hypotheses |
Common causes:
- Bug or deployment
- External event (holiday, competitor, news)
- Seasonality
- Actual user behavior change
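Steps 3 and 4 often reduce to slicing the metric by segment to see where the drop concentrates. A sketch with hypothetical weekly values per platform:

```python
# Slice a metric drop by segment to localize it (data is hypothetical).
# Each row: (segment, last_week_value, this_week_value)
rows = [
    ("ios",     120_000, 118_500),
    ("android", 150_000, 121_000),
    ("web",      80_000,  79_200),
]

# Sort worst-hit first.
for segment, before, after in sorted(rows, key=lambda r: (r[2] - r[1]) / r[1]):
    change = (after - before) / before * 100
    print(f"{segment:8s} {change:+.1f}%")
# A drop concentrated in one segment (android here) points at a
# platform-specific release or bug rather than a market-wide shift.
```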
Q8: What is the difference between leading and lagging indicators?
| Type | Definition | Examples | Characteristics |
|---|---|---|---|
| Lagging | Outcome metrics | Revenue, churn, NPS | Definitive but slow |
| Leading | Predictive metrics | Engagement, activation, feature usage | Predictive but noisier; enable faster iteration |
Example (subscription service):
- Lagging: Monthly revenue, annual retention
- Leading: Feature adoption, session frequency, content consumption
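To check whether a candidate leading indicator actually leads, compare its lagged correlation with the outcome metric. A sketch using the standard library (Python 3.10+); both series are hypothetical monthly values:

```python
# Does this month's feature adoption predict next month's revenue?
# Both series are hypothetical monthly values.
from statistics import correlation  # Python 3.10+

adoption = [0.21, 0.24, 0.26, 0.31, 0.33, 0.38, 0.40]
revenue  = [100,  104,  109,  112,  121,  125,  133]

same_month = correlation(adoption, revenue)
one_month_lead = correlation(adoption[:-1], revenue[1:])  # adoption shifted earlier
print(f"same month: {same_month:.2f}, leading by 1 month: {one_month_lead:.2f}")
# If the lagged relationship holds up over time, the leading metric is
# a reasonable early proxy to iterate against.
```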
Q9: How do you set OKRs?
Components:
| Component | Characteristics |
|---|---|
| Objectives | Qualitative, inspiring, time-bound |
| Key Results | Quantitative, measurable, stretching (70% achievement = success) |
OKR quality comparison:
| Quality | Objective | Key Result |
|---|---|---|
| Poor | Launch new feature | Ship by Q2 |
| Good | Become go-to tool for team collaboration | Increase weekly active teams from 10K to 25K |
Good OKRs connect to company strategy and focus on outcomes, not outputs.
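A common way to grade key results at quarter end is linear progress from start toward target, averaged per objective, with ~0.7 treated as success for stretch goals. A sketch; the KRs and numbers are hypothetical:

```python
# Grade each key result as linear progress from start toward target,
# capped at 1.0. Numbers are hypothetical.
key_results = [
    # (name, start, target, current)
    ("weekly active teams",   10_000, 25_000, 21_000),
    ("teams with 3+ members",  4_000, 10_000,  7_600),
]

def grade(start, target, current):
    return min(1.0, max(0.0, (current - start) / (target - start)))

scores = [grade(s, t, c) for _, s, t, c in key_results]
for (name, *_), score in zip(key_results, scores):
    print(f"{name}: {score:.2f}")
print(f"objective grade: {sum(scores) / len(scores):.2f}")
```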
Q10: When would you not run an A/B test?
| Scenario | Rationale |
|---|---|
| Obvious improvement | Bug fix, legal requirement |
| Insufficient traffic | Cannot reach significance in reasonable time |
| Irreversible | Different experiences cannot be shown to different users |
| Strategic decisions | Long-term bets not measurable short-term |
| Qualitative changes | Brand, aesthetic (requires different methods) |
A/B tests are for optimization. Innovation often requires other research methods.
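"Insufficient traffic" can be checked up front with the standard two-proportion sample size formula. A sketch using only the standard library; the baseline rate and target lift are hypothetical:

```python
# Per-variant sample size for a two-proportion A/B test
# (alpha = 0.05 two-sided, power = 0.8). Inputs are hypothetical.
from statistics import NormalDist
from math import ceil

def sample_size_per_variant(p_base, p_treat, alpha=0.05, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    variance = p_base * (1 - p_base) + p_treat * (1 - p_treat)
    return ceil((z_a + z_b) ** 2 * variance / (p_base - p_treat) ** 2)

n = sample_size_per_variant(0.10, 0.11)  # detect a 10% relative lift
print(n)  # ~14,700+ users per variant at these inputs
```

If daily eligible traffic is far below the required n, the test cannot reach significance in a reasonable time.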
Strategy
Q11: How do you approach entering a new market?
Framework:
| Step | Questions |
|---|---|
| Why this market? | Size, growth, fit with core business |
| Who are we serving? | Target segment, their needs |
| What is the landscape? | Competition, substitutes |
| How do we win? | Differentiation, unfair advantages |
| What is the path? | Go-to-market, milestones |
Key questions: What is our right to win? What do we uniquely bring?
Q12: How do you think about competitive moats?
| Moat Type | Description | Example |
|---|---|---|
| Network effects | More users = more value | Social networks |
| Switching costs | Painful to leave | Enterprise software |
| Scale economies | Bigger = cheaper | AWS, manufacturing |
| Brand | Trust and recognition | Apple |
| Data | Proprietary data improves product | Google Search |
| Regulation | Licenses, compliance | Banking |
Evaluation criterion: How defensible is the moat? How easily can competitors copy it?
Q13: How do you identify product-market fit?
Positive signals:
| Signal | Description |
|---|---|
| Retention | Users keep coming back |
| Word of mouth | Users tell others (NPS > 40) |
| Organic growth | Growth without paid marketing |
| Clear value prop | Users can articulate why they use it |
| Sean Ellis test | >40% would be "very disappointed" if product went away |
Warning signs:
- Relying entirely on paid acquisition
- High churn
- Users cannot explain value
- No passionate users
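The Sean Ellis test reduces to a single proportion over survey responses. A sketch with hypothetical answers:

```python
# Sean Ellis PMF score: share of respondents who would be "very
# disappointed" if the product went away; >40% is the usual bar.
# Responses below are hypothetical.
responses = (
    ["very disappointed"] * 46
    + ["somewhat disappointed"] * 38
    + ["not disappointed"] * 16
)

score = responses.count("very disappointed") / len(responses)
verdict = "signal" if score > 0.40 else "keep iterating"
print(f"PMF score: {score:.0%} ({verdict})")
```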
Q14: Build vs buy vs partner?
| Option | When to Use |
|---|---|
| Build | Core competency, competitive advantage, have expertise, long-term investment |
| Buy | Not core, exists in market, faster time-to-market, lower ongoing cost |
| Partner | Strategic opportunity, access to users/data, reduce risk, test before commit |
Evaluation questions:
- Is this core to our differentiation?
- Do we have the expertise?
- What is the opportunity cost?
- How does it evolve over time?
Q15: How do you think about pricing?
Pricing lenses:
| Lens | Question |
|---|---|
| Value-based | What is it worth to customers? |
| Cost-based | What does it cost us? |
| Competition-based | What do alternatives cost? |
Pricing models:
| Model | Characteristics |
|---|---|
| Per user/seat | Predictable, scales with value |
| Usage-based | Aligns with value, harder to predict |
| Flat rate | Simple, may leave money on table |
| Freemium | Acquisition tool, conversion challenge |
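The trade-offs between models become concrete when you project revenue for the same customer base under each. A sketch; all price points and usage numbers are made up:

```python
# Compare monthly revenue from one hypothetical customer base under
# three pricing models. All prices and usage figures are made up.
customers = [
    # (seats, monthly API calls)
    (5,  20_000),
    (50, 900_000),
    (12, 150_000),
]

per_seat = sum(seats * 15 for seats, _ in customers)      # $15 per seat
usage    = sum(calls * 0.002 for _, calls in customers)   # $0.002 per call
flat     = len(customers) * 299                           # $299 per account

print(f"per-seat: ${per_seat:,.0f}  usage: ${usage:,.0f}  flat: ${flat:,.0f}")
# The heavy-usage account dominates usage-based revenue, while flat rate
# leaves that value uncaptured: "money left on the table."
```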
Execution
Q16: How do you write a good PRD?
PRD sections:
| Section | Content |
|---|---|
| Problem statement | What are we solving and for whom? |
| Goals | Success metrics, definition of done |
| Non-goals | What we are explicitly NOT doing |
| User stories | As a [user], I want [goal], so that [benefit] |
| Requirements | Functional and non-functional |
| Open questions | What we still need to figure out |
Characteristics of good PRDs: Focus on why (not just what), living documents, clear scope, readable by all stakeholders.
Q17: How do you manage technical debt?
Definition: Shortcuts taken to ship faster, at the cost of future rework.
Management strategies:
| Strategy | Description |
|---|---|
| Track | Maintain backlog of tech debt items |
| Quantify | Measure impact on velocity, bugs, outages |
| Allocate | Reserve ~20% of sprint for debt |
| Prioritize | Address debt blocking key features |
| Prevent | Quality standards, code review |
PM role: Balance feature velocity with platform health. Make trade-offs explicit.
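The ~20% allocation can be applied mechanically each sprint: fill the reserved capacity with the highest-leverage debt items first. A sketch; the sprint capacity and backlog are hypothetical:

```python
# Fill a ~20% debt budget with the highest impact-per-point items.
# Sprint capacity and the debt backlog below are hypothetical.
capacity_points = 50
debt_budget = capacity_points * 0.20  # 10 points reserved for debt

backlog = [
    # (item, story points, impact score)
    ("flaky CI suite",     5, 8),
    ("legacy auth module", 8, 9),
    ("unindexed query",    2, 6),
    ("dead feature flags", 3, 3),
]

selected, spent = [], 0
# Greedy by impact-to-effort ratio until the budget is exhausted.
for item, points, impact in sorted(backlog, key=lambda d: d[2] / d[1], reverse=True):
    if spent + points <= debt_budget:
        selected.append(item)
        spent += points
print(f"debt work this sprint ({spent}/{debt_budget:.0f} pts): {selected}")
```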
Q18: How do you work with engineering?
Effective partnership characteristics:
| Characteristic | Description |
|---|---|
| Shared context | Engineers understand the why |
| Early involvement | Engineers shape solutions, not just implement |
| Clear prioritization | No ambiguity about what matters |
| Flexibility | The "how" is engineering's domain |
| Trust | No micromanagement |
Anti-patterns:
- Throwing specs over the wall
- Changing requirements mid-sprint
- Ignoring engineering estimates
- Not involving eng in discovery
Q19: How do you handle a feature request from the CEO?
Process:
| Step | Action |
|---|---|
| 1. Listen | Understand the underlying need, not just the request |
| 2. Contextualize | How does this fit with current priorities? |
| 3. Evaluate | Impact, effort, strategic fit |
| 4. Communicate trade-offs | If we do this, what don't we do? |
| 5. Recommend | Have a point of view, defer appropriately |
Approach: Neither automatic agreement nor automatic pushback. Be curious about the why and transparent about implications.
Q20: Describe a product you would kill. Why?
Criteria for killing features/products:
| Criterion | Description |
|---|---|
| Low usage | Few users, declining trend |
| High cost | Maintenance burden, opportunity cost |
| Strategic misfit | Does not align with direction |
| Better alternatives | Users can switch easily |
Process:
- Validate with data
- Understand remaining users
- Provide migration path
- Communicate clearly
- Learn from it