RAT (Riskiest Assumption Test) is a sharper version of MVP thinking. Instead of building the smallest product, you isolate the one assumption that would kill your product if it's wrong — and test only that.
"Anything not validated by data or interviews is an assumption." — from MVP notes
That's the starting point. Everything your product is built on that hasn't been validated is an assumption. List them all. Then ask: which one, if wrong, would make everything else irrelevant?
That's your riskiest assumption. Test it first, cheaply, before committing resources to building.
The RAT process:
- List every assumption the product depends on
- Rate each by impact: how much damage if this assumption turns out to be wrong?
- Design the cheapest possible test — a customer interview, a fake landing page, a concierge MVP, a shadow button
- Set a success threshold before you start (not after)
- Run it, measure it, decide
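The prioritization step above can be sketched as a toy script. The assumption list, the 1–5 scores, and the scoring heuristic are all hypothetical illustrations, not part of any formal RAT method:

```python
# Toy RAT prioritization: rank assumptions by expected damage if wrong.
# All assumptions and scores below are made-up examples.

assumptions = [
    # (assumption, damage_if_wrong 1-5, evidence_so_far 1-5)
    ("Users will pay $10/month for this", 5, 1),
    ("Users discover the app via search", 3, 2),
    ("Onboarding can be done in under 2 minutes", 2, 4),
]

def risk(damage, evidence):
    # Heuristic: high damage combined with little evidence = riskiest.
    return damage * (6 - evidence)

ranked = sorted(assumptions, key=lambda a: risk(a[1], a[2]), reverse=True)

for name, damage, evidence in ranked:
    print(f"risk={risk(damage, evidence):2d}  {name}")
```

The top of the output is your riskiest assumption — the one to test first, cheaply, before building anything.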
The difference between RAT and MVP: an MVP can still be a product. A RAT experiment often doesn't need to be a product at all — it just needs to generate the right signal.
I've used this most when a stakeholder was certain about a user behavior that I was skeptical of. Instead of building a feature to "find out," a short round of user interviews or a fake door test answered the question in a week.
Exam tip: PSPO II scenarios often test whether the PO can identify the most valuable experiment before committing to a full sprint of building. RAT is the concept behind those questions.
