In my last post, I talked about how we are all making these MVPs too damn big.
As I was writing that post, I was flipping through some real-world examples I’ve come across.
As I was doing that, I realized there was another thing I needed to talk about to complete the thought: crappy assumptions.
Anything you are worried about or unsure of is definitely something you want to identify on your assumption map. It’s good to document any and all of these raw thoughts.
If you don’t take this next step with your assumptions, you run a high risk of making crappy assumptions.
After you complete your assumption map and move all your Post-it notes around, you are going to identify your riskiest assumption, also known as your leap-of-faith assumption (LOFA).
This is the exact point where gold turns into a turd.
Often these assumptions, as they were formed on the map, sit at a high level. That’s fine for the purposes of the assumption map, but you need to get much more specific before feeding an assumption into an experiment.
Let me give an example.
I was talking to someone the other day who wants to sell a B2C product online using a lot of automation in the marketing, selling, and distribution of the product.
The actual product itself isn’t all that novel, but the market is plenty big for multiple competitors to make a nice living. I think they’re onto something.
They’re ready to put together this MVP, and of course, it’s too damn big. I got into a discussion about the build-out and could sense this was going to be a circular conversation.
Every suggestion I had to cut back on the MVP was met with resistance on how it affected the learning needed.
I held fast that building everything out was a red flag, so I gave it some thought.
After a bit of quiet time, I realized that I had skipped over one of the most important parts of creating a good experiment: a thorough vetting of the assumption.
Break it down
The problem with this team’s experiment – and by extension the MVP – was the assumption was still in the form it had been on the original Post-it note.
The assumption was “people will pay for it”. If you knew the product – which I am withholding to protect the innocent – I’m guessing you wouldn’t have made this your LOFA, but it was their call.
The problem is when your LOFA is that broad, you have no choice other than to build out the whole damn thing.
It wasn’t that they did their assumption map incorrectly. What they failed to do (and I failed to recognize in the beginning) was to insist on breaking that assumption into smaller parts, each on its own little assumption map.
Let me walk you through a possible way to break this down.
If your LOFA is “people will buy it” then you are really asking several questions.
Can I find these customers?
Can I present this product in a way they understand?
Can they afford it?
Can they (logistically) pay for it?
Can I get the product to them in a way/time they accept?
The list can go on and on, but you can see that the big “people will buy it” is really an assumption with maybe a dozen variables, each an assumption in its own right.
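One way to picture this decomposition is as a simple tree: a broad assumption at the root, with specific, testable sub-assumptions as the leaves. This is just a sketch for illustration — the class and the exact sub-assumption wording are mine, not from any real assumption-mapping tool:

```python
# A minimal sketch of breaking a broad LOFA into testable
# sub-assumptions. Only the leaves of the tree are narrow
# enough to feed into a small experiment.

class Assumption:
    def __init__(self, statement, children=None):
        self.statement = statement
        self.children = children or []

    def leaves(self):
        """Yield only the most specific assumptions -- the ones
        narrow enough to test with one small experiment."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

lofa = Assumption("people will buy it", [
    Assumption("I can find these customers"),
    Assumption("I can present the product in a way they understand"),
    Assumption("they can afford it"),
    Assumption("they can (logistically) pay for it"),
    Assumption("I can get the product to them in a way/time they accept"),
])

# Each leaf becomes a candidate for its own experiment,
# rather than one mega-experiment for the whole root.
for a in lofa.leaves():
    print(a.statement)
```

The point of the tree shape: you only ever test leaves, and the broad root assumption is answered by the accumulation of leaf results, never tested directly.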
Ok. So what?
So what the hell am I saying here?
Be as specific as possible when testing assumptions. Limit your variables. Strive for repeatability. You want to understand the exact conditions under which your customer will engage you.
I know this means you have a bunch of experiments to run, instead of your big mega-experiment, which is just the building of your thing – not an experiment at all.
Having a backlog of experiments is a good pressure to keep your experiments minimal and your timelines tight.
Narrow-scope, well-formed experiments are far superior. By knowing concretely what has been tested, you know exactly what you have learned. And by limiting your variables, you create firmer ground on which to run further experiments.
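To make the backlog idea concrete, here is a sketch of keeping one narrow assumption per experiment and always pulling the riskiest one next. The assumption names and risk scores are hypothetical, purely for illustration:

```python
# A sketch of an experiment backlog: one narrow assumption per
# experiment, pulled in order of risk (riskiest first). Names
# and scores are made up for this example.

backlog = [
    {"assumption": "I can find these customers", "risk": 0.9},
    {"assumption": "they can afford it", "risk": 0.7},
    {"assumption": "I can present it in a way they understand", "risk": 0.5},
]

def next_experiment(experiments):
    """Pull the riskiest untested assumption first."""
    return max(experiments, key=lambda e: e["risk"])

print(next_experiment(backlog)["assumption"])
```

Because each entry tests exactly one variable, finishing (or killing) an experiment tells you precisely what you learned before you commit to the next one.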
So, next time you pull an assumption off your map, spend a few minutes breaking it down even further to find the riskiest underlying assumption or variable, then build your MVP from there.