Metrics Diagnosis Questions
This section covers the structured approach to diagnosing metric changes in PM interviews. The framework emphasizes systematic investigation over speculation.
Diagnosis Framework
| Step | Action | Time Allocation |
|---|---|---|
| 1. Clarify | Define the metric precisely | 1 minute |
| 2. Validate | Verify data accuracy | 1 minute |
| 3. Segment | Identify affected populations | 3 minutes |
| 4. Hypothesize | Generate potential causes | 3 minutes |
| 5. Recommend | Prioritize investigation | 2 minutes |
Step 1: Clarify the Metric
Questions to ask before investigating:
| Question | Why It Matters |
|---|---|
| How is the metric defined? | "Likes" could mean total likes, daily likes, likes per post, or a like rate |
| What time frame is being compared? | Day-over-day vs. week-over-week vs. month-over-month |
| Is the change statistically significant? | Small samples have high variance |
| What is the baseline trend? | Sudden change vs. ongoing decline |
| Are related metrics also moving? | A DAU drop would explain drops in downstream metrics |
Example clarification: "Is this a 10% drop in daily likes compared to last week? What is the normal weekly variance?"
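A minimal significance check, assuming a series of historical weekly totals is available; the data, function name, and 2-sigma threshold are all illustrative:

```python
# Sketch: flag a week-over-week change that falls outside normal
# weekly variance. Data and threshold are illustrative.
import statistics

def is_significant(weekly_totals: list[float], z_threshold: float = 2.0) -> bool:
    """True if the latest week-over-week change exceeds z_threshold
    standard deviations of the historical weekly changes."""
    changes = [
        (curr - prev) / prev
        for prev, curr in zip(weekly_totals, weekly_totals[1:])
    ]
    latest, baseline = changes[-1], changes[:-1]
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(latest - mu) > z_threshold * sigma

# A ~10% drop against ~2-3% normal weekly variance is clearly significant.
weekly_likes = [100, 102, 99, 101, 103, 100, 102, 92]
print(is_significant(weekly_likes))  # True
```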
Step 2: Validate the Data
Data integrity checks before investigation:
| Potential Issue | Validation Method |
|---|---|
| Logging bug | Check recent code deployments affecting tracking |
| Data pipeline failure | Verify other metrics from same source |
| Definition change | Confirm metric calculation unchanged |
| Bot filtering | Check if bot activity filtering changed |
| Timezone issues | Verify equivalent time period comparison |
Example validation request: "Before assuming this is a real product change, were there any logging changes or pipeline issues this week?"
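One quick cross-check is to compare the dropped metric against sibling metrics that flow through the same event pipeline. A sketch with hypothetical metric names and counts:

```python
# Sketch: if sibling metrics sharing the pipeline are flat while the
# target metric dropped, a whole-pipeline failure is unlikely.
# All names and counts are illustrative.

def pct_change(prev: float, curr: float) -> float:
    return (curr - prev) / prev

siblings = {
    "likes":    (1_000_000, 900_000),    # -10%: the metric in question
    "comments": (200_000, 201_000),      # roughly flat
    "sessions": (5_000_000, 4_990_000),  # roughly flat
}

for name, (prev, curr) in siblings.items():
    print(f"{name}: {pct_change(prev, curr):+.1%}")

# If only `likes` moved, look for deployments that touched like-event
# logging rather than a shared pipeline outage.
```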
Step 3: Segment the Problem
Break down the metric to isolate the issue.
Segmentation Dimensions
| Dimension | Segments | Example Finding |
|---|---|---|
| User type | New, casual, power users | Power users drove 60% of decline |
| Platform | iOS, Android, Web | iOS shows -15%, Android shows -5% |
| Geography | By region or country | US down 12%, Asia up 2% |
| Product area | Feed, Stories, Groups | News Feed down 15%, Stories down 5% |
| Acquisition source | Organic, paid, referral | Paid traffic segment declining |
Example Segment Analysis
Applied to the likes example, a platform breakdown might look like this:
| Segment | Change | Contribution to Total Decline |
|---|---|---|
| iOS | -15% | 60% |
| Android | -5% | 25% |
| Web | -3% | 15% |
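The contribution column falls out directly from segment counts. A sketch with illustrative numbers chosen to match the table above:

```python
# Sketch: decompose a total drop into per-segment contributions.
# contribution_i = segment i's absolute change / total absolute change.
# Counts are illustrative.

segments = {  # segment -> (last week, this week)
    "iOS":     (400_000, 340_000),
    "Android": (500_000, 475_000),
    "Web":     (500_000, 485_000),
}

total_delta = sum(curr - prev for prev, curr in segments.values())

for name, (prev, curr) in segments.items():
    change = (curr - prev) / prev
    contribution = (curr - prev) / total_delta
    print(f"{name}: {change:+.0%} change, {contribution:.0%} of total drop")

# iOS: -15% change, 60% of total drop
# Android: -5% change, 25% of total drop
# Web: -3% change, 15% of total drop
```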
Step 4: Hypothesize Causes
Generate hypotheses after segmentation narrows the scope.
Internal Factors
| Category | Examples | Validation Method |
|---|---|---|
| Product changes | UI update, feature removal, algorithm change | Check release calendar |
| Technical issues | Slow load times, bugs, crashes | Check error logs, performance metrics |
| A/B tests | Experiment with negative impact | Check experiment dashboard |
| Policy changes | Content moderation, spam filtering | Check policy team changelog |
External Factors
| Category | Examples | Validation Method |
|---|---|---|
| Seasonality | Holiday, back-to-school, summer | Compare to same period prior year (see the sketch after this table) |
| Competition | New product launch, competitor feature | Monitor competitor activity |
| External events | Outage, news cycle, economy | Check industry-wide impact |
| Platform changes | iOS update, Android permissions | Check device/OS segmentation |
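A minimal sketch of the seasonality check from the table above: compare this period's week-over-week change with the same period a year earlier. The numbers and the 5-point tolerance are illustrative:

```python
# Sketch: if the same week last year showed a similar dip, the drop
# is likely seasonal rather than product-driven. Data is illustrative.

def wow_change(this_week: float, last_week: float) -> float:
    return (this_week - last_week) / last_week

current = wow_change(this_week=850_000, last_week=1_000_000)    # -15.0%
year_ago = wow_change(this_week=880_000, last_week=1_010_000)   # -12.9%

if abs(current - year_ago) < 0.05:  # within 5 points of last year's dip
    print("Likely seasonal: same-period drop last year was similar.")
else:
    print("Not explained by seasonality alone; keep investigating.")
```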
Hypothesis Tree Structure
Organize hypotheses as a tree that branches from the metric change into internal and external causes, using the categories above:
- Metric dropped
  - Internal: product change, technical issue, A/B test, policy change
  - External: seasonality, competition, external event, platform change
Step 5: Recommend Investigation
Prioritize investigation based on segment findings.
| Investigation | Priority | Rationale |
|---|---|---|
| Check iOS release calendar | High | iOS had largest segment drop |
| Review algorithm changes | High | Primary product area affected |
| Check app performance on iOS | Medium | Could explain platform-specific decline |
| Compare to same week last year | Medium | Rule out seasonality |
| Check competitor activity | Low | External factor, less actionable |
Example recommendation: "Based on the iOS and News Feed concentration, I would first check if any iOS changes deployed last week, then review News Feed algorithm modifications. If those are clear, I would compare to this time last year for seasonality."
Worked Example: Instagram Story Views Dropped 15%
Clarification Phase
| Question | Answer |
|---|---|
| Metric definition | Total daily views |
| Comparison period | Week-over-week |
| Normal variance | 3-5% |
| Significance | 15% is outside normal range |
Validation Phase
Data confirmed clean; no known tracking issues.
Segmentation Phase
Segmentation reveals:
- New users on Android in India show largest decline
- Other segments relatively stable
Hypothesis Phase
Given segment concentration (new Android users in India):
- Product change: Onboarding flow modification
- Technical: Android-specific bugs or performance issues
- External: Competing app launch in India, data pricing changes
- Seasonality: Indian holidays affecting usage
Recommendation Phase
Primary investigation: Android release history for onboarding changes.
Finding: Android onboarding flow updated last week.
Resolution:
- A/B test reverting onboarding change
- Check completion rates for new vs. old flow (a significance check is sketched below)
- Fix or rollback based on results
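For the completion-rate comparison, a two-proportion z-test is one common choice. A sketch with hypothetical counts (in practice, pull these from the experiment dashboard):

```python
# Sketch: compare onboarding completion rates between the old and new
# flows with a two-proportion z-test. Counts are illustrative.
import math

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Old flow: 7,800 of 10,000 completed; new flow: 7,200 of 10,000.
z = two_proportion_z(7_800, 10_000, 7_200, 10_000)
print(f"z = {z:.1f}")  # |z| > 1.96 -> significant at the 5% level

# A significantly lower completion rate on the new flow supports rollback.
```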
Quick Reference: Common Scenarios
| Scenario | Key Investigation Angles |
|---|---|
| DAU dropped 5% | User acquisition vs. retention, platform, geography |
| Conversion rate decreased | Funnel stage breakdown (see the sketch after this table), traffic source, device type |
| Revenue per user down | Pricing changes, product mix, user segment |
| Session length increased | Whether it reflects deeper engagement (positive) or user confusion (negative) |
| App uninstalls spiked | Recent update, new permission request, storage issues |
| Search queries dropped | Whether users find content faster (positive) or engage less (negative) |
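For the conversion-rate scenario, a stage-by-stage funnel comparison isolates the leaking step. A sketch with hypothetical stage names and counts:

```python
# Sketch: compare stage-to-stage conversion across two periods to find
# where a funnel leaks. Stage names and counts are illustrative.

funnel_last = {"visit": 100_000, "cart": 20_000, "checkout": 10_000, "purchase": 8_000}
funnel_this = {"visit": 100_000, "cart": 19_800, "checkout": 9_900, "purchase": 6_900}

stages = list(funnel_last)
for prev_stage, next_stage in zip(stages, stages[1:]):
    rate_last = funnel_last[next_stage] / funnel_last[prev_stage]
    rate_this = funnel_this[next_stage] / funnel_this[prev_stage]
    print(f"{prev_stage} -> {next_stage}: {rate_last:.0%} -> {rate_this:.0%}")

# visit -> cart: 20% -> 20%
# cart -> checkout: 50% -> 50%
# checkout -> purchase: 80% -> 70%   <- the leak is at payment
```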
Metric Trees for Diagnosis
Revenue Decomposition
Revenue = DAU × % of users who pay × average revenue per paying user (ARPPU). A revenue drop traces to fewer active users, a lower payer rate, or lower spend per payer.
DAU Decomposition
DAU = new users + retained users + resurrected users. A DAU drop traces to weaker acquisition, worse retention, or fewer lapsed users returning.
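To show how a metric tree guides diagnosis, the following sketch decomposes a revenue change into factor contributions using log differences, which makes the contributions additive. All values are illustrative:

```python
# Sketch: walk the revenue tree (Revenue = DAU x payer rate x ARPPU)
# to see which factor drove the change. Values are illustrative.
import math

last = {"dau": 10_000_000, "payer_rate": 0.05, "arppu": 20.0}
this = {"dau": 10_100_000, "payer_rate": 0.04, "arppu": 20.5}

rev_last = math.prod(last.values())
rev_this = math.prod(this.values())
total_log_change = math.log(rev_this / rev_last)

for factor in last:
    share = math.log(this[factor] / last[factor]) / total_log_change
    print(f"{factor}: {share:.0%} of the revenue change")

# Shares sum to 100%; a negative share means the factor moved against
# the total. Here payer_rate dominates: investigate the paywall,
# pricing, or payment failures before touching acquisition levers.
```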
Counter-Intuitive Cases
When a Drop May Be Positive
| Scenario | Positive Interpretation |
|---|---|
| Support tickets dropped | Product became easier to use |
| Search queries dropped | Users found content faster |
| Time in checkout dropped | Checkout flow improved |
| Feature usage dropped | Users found more efficient path |
False Positives
| Scenario | Apparent Signal | Reality |
|---|---|---|
| Traffic dropped | Fewer users | Bot traffic removed |
| Conversions dropped | Worse product | Stricter fraud filtering |
| Engagement dropped | Less interest | Healthier usage patterns |
Interview Response Criteria
| Criterion | Demonstration |
|---|---|
| Systematic approach | Follow framework steps in order |
| Quantify impact | "If this segment is 20% of users and dropped 30%, that explains 6% of our 10% total drop" (worked through after this table) |
| Correlation vs. causation | "The drop coincides with our redesign, but we need an A/B test to confirm causation" |
| Clear conclusion | "Based on the iOS concentration and release timing, the most likely cause is our iOS update" |
| Knowledge of scale | Familiarity with order-of-magnitude metrics for major products |
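The "Quantify impact" arithmetic from the table above, as a short sketch with the same illustrative numbers:

```python
# Sketch: how much of a total drop one segment explains.
segment_share = 0.20  # segment is 20% of users
segment_drop = 0.30   # segment's metric dropped 30%
total_drop = 0.10     # overall metric dropped 10%

explained = segment_share * segment_drop      # 6 percentage points
print(f"Explains {explained:.0%} of the {total_drop:.0%} total drop,")
print(f"i.e. {explained / total_drop:.0%} of the decline.")
```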