On Oct 21st, I'll start working at Patch. This move from HuffPost to a smaller company is driven in part by my wish to have greater influence over the product process. So what does that look like?
Let's look at this from two perspectives. First, the forward flow:
Now, with feedback:
the forward loop can be run really fast, over minutes, or really slow, over weeks (Boyd's thinking suggests that the faster a tempo you can sustain, the better)
make sure you don't get stuck in a whirlpool of feedback for so long that you never get to testing
because of the ebb and flow between each step and back, the boundaries in the following exploration of each "phase" will be blurred
What = solution and why = problem
Sometimes we think "wouldn't it be cool if this existed" (what) and sometimes we think "this is annoying, could it be better" (why).
The what is the solution/idea/positive space.
The why is the problem/issue/negative space.
(For now, I'm not clear if there is an advantage to thinking about the what or the why first. But it seems important to always think about both.)
This step can spiral within itself for a while. You might have a moment of inspiration — wouldn't it be nice if Spotify showed play counts on songs? Then be explicit about the problem you are trying to solve — I want a playlist of my most-listened-to songs over the last year. If that's the problem, perhaps we could propose a better solution — wouldn't it be nice if Spotify created a playlist of your most-played songs? (They do that.) You can see that there can be loops inside the what + why phase!
Before moving on, it is also important that you understand the problem domain/market. Your what + why will improve with the quality of understanding you have about the space you are in. Running this product engine repeatedly will improve your understanding (see the arrow looping back from test to the what + why phase). And reading and listening to others' experiences in this space will help too — call this prior art.
Not all problems are worth solving. Not all solutions are good enough.
You read an article about how successful Company X uses A/B testing to improve their product marketing conversion by 10x! Meanwhile, your onboarding form has 7 fields when you only need the user's email to get started, and you see a 0.01% conversion rate for that form. Don't chase after the perfect market language when your onboarding is already turning members away.
The level of resolution at which you evaluate value v cost will vary greatly. Many of us fall into the bad habit of using a finer resolution than our error rates justify. Two ideas to combat this (there's a rough code sketch of both after the examples below):
1. Do all calculations in orders of magnitude (powers of 10).
Example: You have a software idea for small catering services companies. Let's say you can charge $10 a month for it. So each company pays $100 a year for the software (cause in orders of magnitude land there are only 10 months 🤓). Let's also say it'll cost $100,000 to produce. So you'll need to sell the software to 1,000 companies for a year to break even. This is the ballpark you are playing in. Remember you haven't done much research yet so this is as accurate as you can hope for.
(I forget exactly where I encountered orders of magnitude as a trick to do back of the envelope calculations first — likely Fred Fotsch in high school physics. Thanks, Fotsch.)
2. Use three magnitudes: 1, 10, 100
You can determine the magnitude of effect by dividing value by cost.
Example: You wanna create an online shop that sells shoes. Solution A - set up a huge warehouse with every type and size of shoe, plus an online shop. Solution B - set up only the online shop and go to the store yourself to buy the shoes for each order.
Solution A - your guess at the value provided to the user = 100, cost to set it all up = 100; that gives a magnitude of effect of 100/100 ≈ 1
Solution B - same user value = 100, cost to set up a website and NO warehouse = 1; that gives a magnitude of effect of 100/1 ≈ 100
So solution B is clearly the easier one to move forward with. And that's how Zappos got started.
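To make these two ideas concrete, here's a minimal Python sketch of the back-of-the-envelope math above. All the numbers and labels are the illustrative ones from the catering and shoe-shop examples, not real data, and the helper function is just one possible way to round to orders of magnitude:

```python
import math

def order_of_magnitude(x):
    """Round a positive number to its nearest power of ten (orders-of-magnitude land)."""
    return 10 ** round(math.log10(x))

# Idea 1: break-even in orders of magnitude (the catering-software example).
price_per_month = 10                                                  # charge ~$10/month
revenue_per_company_year = order_of_magnitude(price_per_month * 12)   # rounds to ~$100/year
build_cost = 100_000                                                  # ~$100,000 to produce
print(f"Break even at roughly {build_cost / revenue_per_company_year:.0f} companies")

# Idea 2: magnitude of effect = value / cost, scored only with 1, 10, 100.
solutions = {
    "A (warehouse + online shop)": {"value": 100, "cost": 100},
    "B (online shop only, buy shoes per order)": {"value": 100, "cost": 1},
}
for name, s in solutions.items():
    print(f"Solution {name}: magnitude of effect ≈ {s['value'] / s['cost']:.0f}")
```

Running it prints roughly 1,000 companies to break even, a magnitude of effect of 1 for solution A, and 100 for solution B — the same ballpark answers as above, which is all this level of resolution is meant to give you.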
Transform your words into something
Build the smallest prototype that will most validate your idea. (Can you feel the pull between value v cost and test tugging on each side of you?)
There are endless ways to create a product prototype: pen and paper, a short story, a Sketch document, a spreadsheet example, a Framer prototype, etc.
Remember to keep your eyes on the goal. Does it matter if you are using Material Design or the perfect icon set if your idea is not even validated yet?
Put your idea in the boxing ring and see how it does ~ Wait But Why
Example: On Hansel, we were trying to determine what things people would want on their profile. We had some ideas but wanted to know what people (in our perceived market) would choose for themselves. Giselle Abinader took our ideas, wrote them onto little pieces of paper ripped into squares (prototype), and asked each of her roommates to create their Hansel profile page with them (test). Amazingly, they all asked for something we hadn't even thought of! (If you're curious, they wanted a "sell crypto" button.)
There are many levels of testing and many pitfalls to watch out for once you start refining your testing — say, codifying questions, considering the influence of the testing environment, trying for statistical significance, etc. The only thing more dangerous than running a biased test is running no tests!
no plan survives first contact ~ Helmuth von Moltke the Elder
We're not here to see our ideas win. We're here to learn and grow and provide value to others.
I'm really new to all this, so take it all with a grain of salt.
Help me improve the product engine or just say hi on this Twitter thread — a poor person's comment section.
Will this product engine only find local maxima? Will it miss novel solutions? Will it miss products that have no market yet or that create their own market (e.g. the iPhone)?
Who is Cedric to write about this? He hasn't even used this yet.
At each phase there are many biases to watch out for: in what + why you must make sure you don't favor your own idea over others, in value v cost there are a whole host of estimation issues, and so on. Perhaps it would be worth enumerating the biases to watch out for in each phase.
You're going to have more than one solution to a problem and will want to run multiple ideas at a time. These product engine instances will overlap. How best to compare and incorporate tests from other running loops?
The original images of the product engine are idealized; if you include the feedback flow across and within each step, it looks like this: