Execution Questions
Execution interviews assess prioritization ability, roadmap construction, and trade-off decision making. The core question is not "what should we build?" but "what should we build first, and what are we not building?"
Prioritization Frameworks
RICE
RICE provides a numerical score for comparison.
| Factor | Description | Scale |
|---|---|---|
| Reach | Users affected | Count |
| Impact | Effect per user | 0.25 (minimal) to 3 (massive) |
| Confidence | Certainty level | Percentage |
| Effort | Engineering time | Person-months |
Formula: Score = (Reach × Impact × Confidence) / Effort
Example: A feature affecting 10,000 users, high impact (2), 80% confidence, 3 person-months of effort.
Score = (10,000 × 2 × 0.8) / 3 ≈ 5,333
The score informs discussion but does not make decisions automatically. Similar scores require additional judgment on strategic fit and dependencies.
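A minimal sketch of the arithmetic, reusing the example above alongside two hypothetical backlog items for comparison:

```python
def rice_score(reach: int, impact: float, confidence: float, effort: float) -> float:
    """RICE score: (Reach x Impact x Confidence) / Effort.

    reach      - users affected per period
    impact     - 0.25 (minimal) to 3 (massive)
    confidence - 0.0 to 1.0
    effort     - person-months
    """
    return (reach * impact * confidence) / effort

# Hypothetical backlog items for comparison
backlog = {
    "Onboarding checklist": rice_score(10_000, 2, 0.8, 3),   # ~5,333
    "Dark mode":            rice_score(4_000, 1, 0.9, 2),    # 1,800
    "SSO integration":      rice_score(500, 3, 0.5, 6),      # 125
}

# Sort highest score first as a starting point for discussion
for name, score in sorted(backlog.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:24s} {score:,.0f}")
```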
Value vs. Effort Matrix
| Quadrant | Action |
|---|---|
| High value, low effort | Execute immediately |
| High value, high effort | Plan carefully, break into phases |
| Low value, low effort | Consider if spare capacity exists |
| Low value, high effort | Do not pursue |
MoSCoW
| Category | Definition |
|---|---|
| Must have | Launch requirement |
| Should have | Important but not blocking |
| Could have | Desired if resources permit |
| Won't have | Explicitly excluded from scope |
The "Won't have" list prevents scope creep by documenting what is excluded.
Prioritization Process
Prioritization fundamentally requires saying no. Any PM can say yes; the job is selecting 3 items from 15 reasonable requests.
Prioritization Answer Structure
- Clarify company goals (growth, revenue, retention)
- State evaluation criteria (impact, effort, strategic alignment)
- Rank items with rationale
- Explicitly state what is not being done and why
The final component distinguishes adequate answers from strong answers.
Roadmaps
Definition
A roadmap is a communication tool showing direction and path, not a feature list with dates.
Roadmap Characteristics
| Element | Description |
|---|---|
| Themes | "Q2: Improve onboarding" not "Q2: Add tutorial, fix signup form" |
| Variable certainty | Near-term concrete, far-term directional |
| Goal connection | Every theme links to company objectives |
Now-Next-Later Format
| Now | Next | Later |
|---|---|---|
| Specific, committed features | Planned themes, high confidence | Directional, subject to change |
This format acknowledges uncertainty. Promising specific features 6 months out misrepresents certainty.
Theme-Based Organization
| Theme | Initiatives |
|---|---|
| Growth | Referral program, onboarding improvements |
| Engagement | Push notifications, recommendations |
| Monetization | Premium tier, upsells |
Theme-based organization makes priorities easy to communicate: stakeholders see where investment is going without debating individual features.
Trade-Off Questions
"Should we ship now with bugs or wait?"
Evaluation Framework
- Cost of shipping now (user experience, trust, support load)
- Cost of waiting (market timing, opportunity cost, user expectations)
- Middle options (partial rollout, feature flag, limited beta)
Example Answer
"It depends on severity. Cosmetic bugs (misaligned button): ship. Functional bugs (checkout failing for some users): wait.
Competitive pressure matters. If a competitor launches the same feature next week, cosmetic bugs may be acceptable for immediate deployment with fast follow-up fixes.
A middle option: ship to 10% of users behind a feature flag, monitor, and fix bugs before wider rollout."
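That percentage rollout is typically a deterministic bucket on the user ID, so the same users stay in the treatment as the rollout widens. A minimal sketch with a hypothetical flag name and threshold; real products would normally lean on an existing feature-flag service:

```python
import hashlib

def in_rollout(user_id: str, flag: str, percent: int) -> bool:
    """Deterministically bucket user_id into [0, 100) for the given flag.

    The same user always lands in the same bucket, so widening the rollout
    from 10% to 50% to 100% never flips anyone back out of the feature.
    """
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Hypothetical: serve the new checkout to 10% of users, monitor, fix, then widen
if in_rollout(user_id="u_12345", flag="new_checkout", percent=10):
    ...  # serve the new experience
else:
    ...  # serve the current experience
```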
Trade-off questions require weighing factors, not selecting sides immediately.
Interview Questions
"Engagement dropped 10% this week. What do you do?"
This is a debugging question, not a solution question.
Process:
- Verify data - Is the measurement correct? Is the drop statistically significant?
- Identify obvious causes - Releases, competitor launches, holidays
- Segment - All users or specific segments? Platform-specific?
- Form hypotheses - Based on segmentation findings
- Investigate and confirm
If a release caused the issue, roll back. If seasonal, it may be expected. If segment-specific, investigate what changed for that segment.
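For the data-verification step, a minimal sketch of one way to check whether a week-over-week drop in the active-user rate is statistically significant. The counts are hypothetical, and a two-proportion z-test is one reasonable choice, not the only one:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two proportions (e.g. weekly active rate)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 42,000 of 100,000 users active last week vs. 37,800 of 100,000 this week
z, p = two_proportion_z_test(42_000, 100_000, 37_800, 100_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a tiny p-value means the drop is unlikely to be noise
```

A significant result only confirms the drop is real; the segmentation and hypothesis steps still drive the actual diagnosis.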
"How would you prioritize these 5 features?"
Answer structure:
- State framework: "I'll evaluate based on impact on [goal], effort, and strategic alignment."
- Brief assessment of each feature
- Clear stack rank with reasoning
- State what is deprioritized: "I'd defer feature 5 this quarter because it's high effort and doesn't directly support retention. Worth revisiting when priorities shift to growth."
"Build a 6-month roadmap for Instagram."
Think in themes, not features.
"I'd structure around Instagram's priorities: creator retention, user engagement, monetization.
Q1: Creator tools focus. Better analytics, scheduling, monetization options. Creators drive content supply, which drives everything else.
Q2: Discovery focus. Algorithm improvements, explore features. Help users find content they engage with.
I'm deprioritizing shopping integrations this half. They're high effort, and Meta's commerce push hasn't demonstrated strong product-market fit."
The interviewer does not expect knowledge of Instagram's actual roadmap. They evaluate strategic thinking and prioritization clarity.
Common Mistakes
| Mistake | Description |
|---|---|
| "All high priority" | If everything is high priority, nothing is |
| No framework | Use some structure, even if simple |
| Ignoring effort | A feature with 2× the value but 10× the effort may not be worth building |
| Far-future specificity | Acknowledge uncertainty in distant plans |
| Missing trade-offs | State what is not being done |
Company Approaches
Amazon PR/FAQ
Before building, teams write the press release they would publish at launch, plus an FAQ answering anticipated questions. If the press release cannot articulate clear customer value, the product may not be worth building.
Spotify Squad Model
Autonomous teams with roadmap ownership. Benefits: teams closest to problems make decisions. Challenges: cross-cutting initiatives require coordination. Pure bottom-up prioritization does not scale; some top-down alignment is required.
Apple
Fewer things, better quality. The original iPhone launched without copy/paste, multitasking, or an App Store: a minimal but polished version, then iteration. Willingness to say "not yet" to obvious features.
Meta (Facebook)
Ship fast, measure everything, iterate. Features launched at 50% completion and improved based on data. Effective during the growth stage; less appropriate at a scale where billions of users depend on the product.
Intercom
Public roadmap with transparent prioritization process. RICE-like framework used for structured discussion, not automated decision-making. Transparency builds trust even when customers don't receive requested features.