RICE scoring gives product teams a defensible, data-driven method to prioritize features across competing stakeholder demands. It's used by teams at Intercom, Atlassian, and hundreds of B2B SaaS companies. The formula is simple; the discipline to use it consistently is not.
"RICE doesn't make decisions for you — it makes your assumptions explicit so stakeholders can argue about the right things. Before RICE, we were arguing about gut feelings. After, we were arguing about confidence scores. That's progress."
— Valentina M., Director of Product at a Series A B2B SaaS company
The RICE formula
RICE Score = (Reach × Impact × Confidence) ÷ Effort
- Reach: How many users will this impact per month? Use real data — MAU count, not aspirational projections.
- Impact: How much does this affect each user? Scale: 0.25 (minimal), 0.5 (low), 1 (medium), 2 (high), 3 (massive).
- Confidence: How confident are you in your estimates? 100% = high, 80% = medium, 50% = low.
- Effort: Total person-months (PM + engineering + design). 1 person-month = one person working full-time for one month.
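The formula can be sketched as a small helper; this is a minimal illustration (the function name and docstring conventions are my own, not from any standard library):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach:      users affected per month (real data, not projections)
    impact:     0.25, 0.5, 1, 2, or 3
    confidence: 1.0 (high), 0.8 (medium), or 0.5 (low)
    effort:     total person-months across PM, engineering, and design
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return (reach * impact * confidence) / effort

# 2,000 users x 1.0 impact x 80% confidence, 0.5 person-months of effort
print(rice_score(2000, 1.0, 0.8, 0.5))  # 3200.0
```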
RICE scoring example: three features compared
| Feature | Reach | Impact | Confidence | Effort | RICE Score |
|---|---|---|---|---|---|
| CSV export | 2,000 | 1.0 | 80% | 0.5 | 3,200 |
| Team permissions | 800 | 2.0 | 80% | 2.0 | 640 |
| API webhooks | 500 | 3.0 | 50% | 3.0 | 250 |
CSV export wins because high reach and low effort compensate for moderate impact. Webhooks score low despite high impact because reach is limited and effort is high.
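The comparison in the table can be reproduced and ranked in a few lines. This is a sketch assuming each feature is stored as a simple tuple of inputs (the data layout is illustrative):

```python
def rice(reach, impact, confidence, effort):
    # RICE = (Reach x Impact x Confidence) / Effort
    return reach * impact * confidence / effort

# (name, reach, impact, confidence, effort) from the table above
features = [
    ("CSV export", 2000, 1.0, 0.8, 0.5),
    ("Team permissions", 800, 2.0, 0.8, 2.0),
    ("API webhooks", 500, 3.0, 0.5, 3.0),
]

# Highest RICE score first
ranked = sorted(features, key=lambda f: rice(*f[1:]), reverse=True)
for name, *inputs in ranked:
    print(f"{name}: {rice(*inputs):,.0f}")
# CSV export: 3,200
# Team permissions: 640
# API webhooks: 250
```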
The 4 most common RICE scoring mistakes
Inflating Reach. Using total registered users instead of users who would realistically encounter this feature. If a feature only affects power users, your Reach should reflect power user counts, not total MAU.
Using aspirational Impact scores. Every PM wants their feature to score 3 (massive impact). Reserve 3 for features that are demonstrably critical to the core workflow. Use 1 (medium) as your default for features with reasonable but unproven impact.
Setting Confidence at 100% without data. Confidence should reflect evidence quality. 100% = validated by user research + behavioral data. 80% = validated by user interviews. 50% = assumption-based.
Underestimating Effort. PM hours and design time are real effort. Effort = PM months + design months + engineering months. A feature that takes 1 engineering month plus 0.5 PM months and 0.5 design months costs 2 person-months total.
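Two of these mistakes (free-form Confidence values and engineering-only Effort) can be caught mechanically. A sketch of a guarded scorer, assuming the scales defined earlier (the validation sets and function names are illustrative):

```python
# The discrete scales from the component definitions above
ALLOWED_IMPACT = {0.25, 0.5, 1.0, 2.0, 3.0}
ALLOWED_CONFIDENCE = {1.0, 0.8, 0.5}

def total_effort(pm_months, design_months, eng_months):
    # Effort counts every discipline, not just engineering
    return pm_months + design_months + eng_months

def checked_rice(reach, impact, confidence, pm_months, design_months, eng_months):
    if impact not in ALLOWED_IMPACT:
        raise ValueError(f"impact must be one of {sorted(ALLOWED_IMPACT)}")
    if confidence not in ALLOWED_CONFIDENCE:
        raise ValueError("confidence must be 1.0, 0.8, or 0.5, tied to evidence quality")
    effort = total_effort(pm_months, design_months, eng_months)
    return reach * impact * confidence / effort

# 0.5 PM + 0.5 design + 1 engineering month = 2 person-months total
print(checked_rice(800, 2.0, 0.8, 0.5, 0.5, 1.0))  # 640.0
```

Rejecting a confidence of, say, 0.9 forces the estimator back to the evidence tiers rather than letting optimism creep in as an in-between number.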
When RICE breaks down
RICE works poorly for: (1) strategic bets where Reach is uncertain (new market entry), (2) technical foundation work where Impact is invisible to users, (3) regulatory/compliance requirements that can't be deprioritized regardless of score. Use RICE as a tiebreaker and discussion tool, not an absolute decision engine.