Product Management as a discipline has changed a lot lately. There has been a big shift in the focus of product teams from outputs to outcomes. Some companies are starting to care a little less about the fact that a feature got shipped and a little more about whether that feature had a positive impact on user behavior and metrics. This is a significant development. Shipping 10 extra features, none of which improve anything, is a fairly enormous waste of everybody's time and money.
Unfortunately, a lot of teams find it hard to understand whether what they’ve built improved anything important. This happens for a lot of reasons, like not having the right metrics, not being given time to measure things, or not knowing what the goal was. Even if they know what improved, a lot of teams can’t tell you if the improvement was worth the effort.
These are all big impediments to measuring outcomes and making better choices. By taking a slightly more disciplined approach to planning and review, teams can not only evaluate their work better, they can also improve their future product decisions by identifying places where they've consistently made mistakes.
If everybody just did this first step, their products would improve significantly. It is without question the most important thing you can do to make better decisions: write down your goals and expectations before you build.
How you do this is really up to you. I know of at least half a dozen different styles for stating the expected outcomes of your feature or product. However you do it, you need to capture a few key pieces of information:

- What you're building
- What benefit you expect it to deliver, for both users and the company
- How you'll measure that benefit, and what counts as significant
- What side effects you'll monitor
- When you expect to see the benefit
- What investment the change will require
- Why you believe all of the above
The first item on the list should be trivial, especially since you shouldn't write these until you're fairly close to ready to start work on the feature or project. These aren't lists you make for every single feature you might build. These are well-informed estimates for a project that's ready to go. If the project requires significant research and/or design work, you will probably want to do a quick version of this before that starts and then update the key parts once you have a better idea of what you will be building.
The second and third are trickier, because this is where you lay things out as outcomes and benefits rather than just restating the feature. For example, you shouldn't say something like "Adding the ability to pay by mobile phone will let users pay by mobile phone." Explain why that's a good thing for both the user and the company. Something more like "Adding the ability to pay by mobile phone will allow a significant number of people who currently can't use our service to start using it."
The third one is even harder, since that's where you have to explain what "significant" means and how you'll measure it. Just counting how many users pay with a mobile phone doesn't do the trick here. You probably need to see how many new users pay that way and whether current users who switch end up spending more or less.
And don't forget the fourth item! In this example, you'd also need to monitor how many new users still paid the old way, as well as overall sales, to make sure you're not simply cannibalizing an existing payment method. You also need a method that lets you isolate your changes, to make sure that sales didn't go up for some unrelated reason, like a big promotion running on the day you released your new mobile phone payment feature.
Don't forget the second to last item - what sort of investment it will take to make the change. This doesn't have to be stated in money. In fact, that's hard to do in most companies. But once you're at the point where you're ready to build something, you should have a decent idea of how long it will take and how many people or teams it will involve.
Make sure you're not just talking about the time to ship something. This should be how long it will take until the feature is being actively used by people and you're seeing value from it. Those two things can be very different, especially in B2B environments. If sales tells you you'll get a big new client if you build a new feature, make sure the investment includes educating clients about the new feature, training sales how to sell it, and so on. Don't forget to include any time research and design spent on this before you had enough information to write everything down, and be sure to keep track of further research and design work as you build.
The last item - why you believe what you believe - should be the easiest. What’s driving the decision to build this feature? Was there research that showed there was a huge potential market that couldn’t pay with a credit card? Did a specific person in the company insist that this was a top priority? Did a salesperson say you couldn’t win a big account without it? Write it down! Be honest. “The CEO insisted,” is an acceptable thing to write here, but I encourage you to understand why the CEO fell in love with the feature.
If you do this correctly, over time you'll get an impressive view of which sorts of evidence are the most trustworthy and which sources provide the best feature or product ideas. I sometimes record an extra piece of information: "Who disagreed with this feature?" Not everybody is always on board with every decision. Keep track. Sometimes you'll see patterns of people who will waste everybody's time with their "brilliant ideas," and other times you'll learn who's needlessly pessimistic about every change.
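If it helps to make this concrete, the information above can be captured in a simple record per project. Here's a minimal sketch in Python; every field name and every value is illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureHypothesis:
    """One record per project, written down before work begins.
    Field names are illustrative, not a prescribed schema."""
    what: str                  # what we're building
    expected_benefit: str      # the outcome for users and the company
    how_measured: str          # the metric that defines "significant"
    side_effects: list[str]    # counter-metrics to watch
    benefit_by: str            # when we expect to see the benefit
    investment: str            # people, teams, and time required
    evidence: str              # why we believe this will work
    dissenters: list[str] = field(default_factory=list)  # who disagreed

# Example record for the mobile payment feature discussed above
record = FeatureHypothesis(
    what="Pay by mobile phone",
    expected_benefit="Opens the service to people without credit cards",
    how_measured="New users paying by phone; spend of users who switch",
    side_effects=["Cannibalization of existing payment methods"],
    benefit_by="6 months after release",
    investment="2 engineers, 1 designer, ~8 weeks incl. sales training",
    evidence="Market research showing a large segment without credit cards",
)
```

A spreadsheet row or a page in a shared doc works just as well; the point is that every field is filled in before the first line of code is written.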
Your post-release review happens as soon as the project is over. Please note that this does not replace regular product or engineering team retrospectives. If you do those, please carry on!
For those of you who loathe all meetings on principle, please remain calm. I'm not adding a huge number of them - just two per project, where a project is defined as a fairly large feature, a new version of a product, or something of similar scope. You don't need to do these for every button you add or piece of text you change.
During this meeting, you will review parts of your list and ask a few important questions:

- Did we build what we expected to build?
- Did it cost what we expected it to cost?
You will not yet assess whether the new feature meets expectations, because there's almost never a realistic way to know that this early. All you're doing is comparing what you expected to build with what you ended up building, and how much you thought it would cost (in time/money/opportunity/whatever) with what it ended up costing.
These are important things to assess. If you find, as so many teams do, that everything took twice as long as you expected, that will affect your future decisions. For example, would you have gone after that big new client if you'd known how much it would cost to build the feature they needed? Maybe! But you'll never know unless you get a fairly accurate view of how long the project took, and that is easiest to do immediately after you think you're finished.
And now we wait. Very few companies can immediately judge whether a new feature has the impact they expected. All of those companies are big consumer properties with millions of transactions per day (or per second). Even then, there are many features that can take time to measure - internal tools, features built for a small subset of the customer base, etc.
That's why, in the original list, you need to specify when you think you'll see the benefit. Do you think it will take a few months to land the big new customer, even after the feature they wanted is released? Fine, set that date ahead of time. Be generous with yourself, even. But be honest.
If you think you’ll see a benefit in 6 months, check back in 6 months, but don’t keep extending the deadline if that customer still isn’t landed. It’s important for you to understand how long it can take to get the benefits you’re predicting. Hold the meeting, record the truth, and then set up a future date for an optional later retrospective if you think there’s still a chance you’ll get some benefit.
On the appointed day, hold your next retrospective for the project. In this one, you will go through the entire list, including the part you reviewed before. The questions you are trying to answer are:

- Did we get the benefit we expected?
- Did it cost what we predicted, all the way to release and adoption?
- Did we see the side effects we anticipated, or ones we missed?
If you were off on anything - investment, benefits, side effects, etc. - then you have to ask the most important question: What can we do differently next time to avoid these same mistakes?
This is the question I don’t hear people asking often enough. They just shrug their shoulders and move to the next thing. Inevitably, they end up underestimating the costs and overestimating the benefits again and again. It’s infuriating.
There is a tendency when we ask questions like, “what went wrong,” to turn the conversation into a blame fest. You can’t do that here, or nobody will be honest, and if nobody’s honest, nobody will learn.
These have to be free of blame. It’s not “who made this terrible decision?” The question we’re asking is, “how can we make better decisions?” If you want more info on this, check out the concept of blameless post-mortems in engineering. That’s where I stole it from, anyway.
Another important thing to note is that, while I've been describing this as "building a product or feature," this technique works great for any large project or goal. Maybe you're switching to a new HR system that you think will reduce a specific routine task your team has to do. Or maybe you're adding a CRM and a new process for your sales team. Great! Write it down and do two retros. Make sure you're making good decisions.
One of the nice things about this method is that the second retrospective turns out to be a great time to ask yourself what you should do next with the project. Did it live up to expectations, or even exceed them? Great! Maybe you should double down. Did it go wildly over budget and return nothing? Now's a good time to figure out a way to fix it or kill it.
It's tough to convince execs to let you iterate on features that are “done.” It can also be easy to let non-performing features linger forever as zombies in your product. This is a fantastic breakpoint that encourages everybody to assess the feature objectively and take the right next step.
These are not meetings that you hold in secret or with only executives. They’re not about judging other people or punishing poor performers. They need to be run by the teams who are doing the work, and ideally, they include any stakeholders or decision makers. If you can’t get everybody actively involved, make sure that they at least see the results, especially if the right next step involves changing some important process.
We should give anybody who makes product decisions the information they need to determine whether those decisions were good. It's the only way we learn to make better decisions.
You will need to make some changes. The hardest part of this process is not adding extra meetings or writing goals. The hardest part is learning from your mistakes and changing the environment that allowed them to happen.
Every so often, go back over the notes from previous features. Are there patterns? Are there mistakes you’re making repeatedly? Are there “reasons” for building features or products that consistently underperform? Are you always overestimating the return on features and underestimating the cost?
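One simple way to spot these patterns is to average the estimate-to-actual ratios across past projects and see which way they consistently skew. A minimal sketch, with made-up numbers purely for illustration:

```python
# Each entry is one past project's hypothesis record vs. reality.
# All numbers here are invented for illustration; "benefit" could be
# new signups, hours saved, revenue - whatever metric you predicted.
projects = [
    {"name": "mobile pay", "est_weeks": 8, "actual_weeks": 16,
     "est_benefit": 1000, "actual_benefit": 400},
    {"name": "new CRM", "est_weeks": 6, "actual_weeks": 11,
     "est_benefit": 500, "actual_benefit": 450},
]

# Average ratio of actual to estimated cost and benefit.
cost_ratio = sum(p["actual_weeks"] / p["est_weeks"] for p in projects) / len(projects)
benefit_ratio = sum(p["actual_benefit"] / p["est_benefit"] for p in projects) / len(projects)

print(f"On average, work took {cost_ratio:.1f}x as long as estimated")
print(f"and delivered {benefit_ratio:.1f}x the predicted benefit")
```

If the cost ratio sits stubbornly above 1 and the benefit ratio below 1 across many projects, that's a systemic estimation bias, not a one-off mistake - which is exactly the kind of thing the next paragraph is about.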
This is where you need to come up with systemic changes, and you can’t just write, “BE SMARTER” because that never works. Trust me. You need to identify where the system went wrong and change it when possible.
This part is hard and probably deserves its own blog post, but there's plenty of good material on it in the literature on software post-mortems, which you can adapt to product decisions, among other things.
And, as with all advice, adapt or change this to suit your team's needs. No advice is one size fits all, and no set of questions will be perfect for all projects. But all teams can benefit from stating their expectations before starting a project and reviewing the specific metrics once the project is finished.
Interested in learning more about product decisions? Check out a version of this in the Hypothesis Tracker section of my book, Build Better Products.