The Science of Lean Product Development: Ash Maurya on Process
Some people say entrepreneurship is more art than science, but that’s likely because applying rigorous scientific methods to building a business is hard, not because those methods are ineffective. In a world where more and more data is readily available to founders, it’s becoming easier to test and disprove bad hypotheses quickly and effectively — and also easier to get lost in a haystack of product analytics.
Ash Maurya helps startups apply scientific principles based on the Lean Startup methodology. He is the CEO of Spark59 and the author of Running Lean: Iterate from Plan A to a Plan That Works and Scaling Lean: Mastering the Key Metrics for Startup Growth.
As a product manager at Notion, a tool to help teams communicate around data, I spend a lot of time thinking about how organizations can improve their process and culture by approaching decisions armed with better information. In this interview, Ash gives bang-on advice on how to use Lean Startup principles in a practical, methodical way, so teams can take action and improve iteratively.
More Science, Less Fiction
I first asked Ash about how he thinks about business with a scientific approach, and what that means for entrepreneurs who want to improve their process.
Whenever we’re doing anything new, whether it’s in a startup or in a larger company, there’s just a high level of uncertainty. Getting into “execution mode” is often the wrong mindset because we often don’t know what we don’t know.
The Lean approach really starts off by saying that everything we know about what the business is, or what it’s going to become, is really a guess. So we have to take a more empirical approach and lay out our critical assumptions, prioritize them from high risk to low risk, then systematically tackle them. That’s Lean in a nutshell.
Ash explained that “the scientific method” is really an extension of most people’s basic thought process.
Everyone, to some extent, takes a scientific approach by making a guess about what results their actions will produce. The problem is that we often don’t run very good experiments, because we don’t hold ourselves accountable.
Most people don’t declare outcomes. In many ways, that’s the contrast with what Lean advocates, which is really just leveling up that same thinking. Before doing anything, outline some objectives. Outline some outcomes. Identify the critical assumptions and then be more rigorous about measuring, which is a key thing that most people don’t do. In the end, there should be learning that comes out of it, out of every experiment. That fuels the next thing that we do.
For many teams, getting started with metrics can be a challenge, and sometimes, teams start out with great intentions but end up abandoning a data-informed process along the way. Ash proposed some strategies for tracking KPIs.
I would start off with saying that most people don’t give enough thought to what they’re trying to learn. That often leads them astray.
We often measure things that are easy to measure. The world has really changed; there is no shortage of tools. So you’ll start with something like Google Analytics, and then you’re quickly looking at thousands of numbers but don’t quite know what to do with all of them.
It’s not so much about the numbers, or the quantity of numbers, but rather the insights or the thing we need to learn to take the next action. A lot of people will just go in and install a bunch of tools, and pretty soon you’re drowning in a sea of data. You might have Google Analytics and four or five other analytics products or data inputs, and then you just start trying to make sense of it all. That’s obviously not a good approach, because it amounts to saying, “the data will give me the answers.” We think it’s just collecting information, but it’s not that simple.
It’s not so much about the numbers, or the quantity of numbers, but rather the insights or the thing we need to learn to take the next action.
Another challenge is that knowing what you want to measure does require a level of rigor. It does require some thinking up front to say, “This is how we’ll measure success for this particular campaign, or this particular feature, or this particular product.”
Many teams, even in very established companies, have challenges knowing what metrics to track and how to pare down their KPIs into actionable data. There might be very specific KPIs for one team or one company versus another in a different vertical or market. But Ash proposes thinking about things a bit differently.
In my most recent book, I introduce a model called the Customer Factory Model. I argue that every business, irrespective of vertical, whether you’re B2C or B2B, is in the business of making customers. I do it in a bit of a tongue-in-cheek manner. When I talk, I ask people, “Is anyone not interested in making more customers?” No hands ever go up. If we agree that we are all in the business of making customers, either adding more customers or increasing the value of existing customers, then we can almost use some common metrics.
The first insight is that we want to measure customer behavioral metrics above everything else. Things like company KPIs, velocity, and sales closes are side effects that should add up to a customer being retained and becoming profitable. We should start with those behavioral metrics.
Which Metrics Matter?
There are plenty of models for setting KPIs, and the one Ash mentioned is a favourite with many startups and tech companies, Dave McClure’s Pirate Metrics. This was exciting to me, since I wrote a guide riffing on Pirate Metrics earlier this year, the AARRRT guide, where we take the original Pirate Metrics (Acquisition, Activation, Referral, Retention, and Revenue) and add Team.
Everyone should start with Dave McClure’s Pirate Metrics. I draw them a bit differently, instead of as a funnel, because in that model, we often don’t see the interrelations. Some of them don’t actually occur in a predictable manner, like referrals.
So, given your vertical and given your stage, what user actions map to those five stages? Where will you get the data from? Just start measuring them, even if you measure them manually in the beginning. Even if the data is coming from five different analytics products, that’s okay. You want to put it all together and get in the habit of measuring those metrics, ideally on a weekly basis.
Once you start to do that, you can begin to benchmark how the business is doing. At that point, you may say, “Well, we are dropping people at this step. Maybe we’re losing people in Acquisition and we don’t know why.” You might have to look at additional data, so you go deeper into the funnel. Or you might have to run what I call a “learning experiment.” In this case, you might go and talk to customers, observe them, or do usability tests.
Identifying where the bottleneck is in those five metrics should be the starting point for everyone. That’s universal, irrespective of domain. Everyone should be able to map it to their particular vertical.
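To make this concrete, here is a minimal Python sketch of the habit Ash describes: record counts for the five Pirate Metrics stages each week and flag the step where you lose the most people. The stage ordering, user counts, and the strictly linear funnel are illustrative assumptions (as Ash notes, stages like referral don’t actually occur in a predictable sequence); you would map your own user actions to each stage.

```python
# Illustrative weekly Pirate Metrics (AARRR) tracker.
# Stage order is simplified into a linear funnel for the sketch.
PIRATE_STAGES = ["acquisition", "activation", "retention", "referral", "revenue"]

def conversion_rates(weekly_counts):
    """Compute step-to-step conversion between consecutive stages."""
    rates = {}
    for prev, curr in zip(PIRATE_STAGES, PIRATE_STAGES[1:]):
        prev_n = weekly_counts[prev]
        rates[f"{prev}->{curr}"] = weekly_counts[curr] / prev_n if prev_n else 0.0
    return rates

def bottleneck(weekly_counts):
    """Return the transition with the lowest conversion rate."""
    rates = conversion_rates(weekly_counts)
    return min(rates, key=rates.get)

# Made-up counts for one week, pulled together from wherever the data lives.
week = {"acquisition": 1000, "activation": 200, "retention": 120,
        "referral": 30, "revenue": 18}
print(bottleneck(week))  # → acquisition->activation
```

Here only 20% of acquired visitors activate, so that transition is flagged as the place to dig deeper or run a learning experiment.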
The Law of Small Numbers
For many startups, a huge challenge may be just collecting enough data to draw conclusions. I asked Ash how he advises companies who are new, or are launching a new product or feature.
The good news is that lots of studies have shown that it doesn’t take a lot of data points to get to some pretty conclusive insights, especially in the beginning when you don’t know a lot about the business because you don’t have that many customers. Going and talking to ten customers and all ten customers saying, “I don’t like your product,” or “I don’t want to buy it and here’s why,” is pretty statistically significant. If all of them say no and give you a reason why, that’s very actionable.
It’s a two-phased approach. We start off qualitative, and only when we get enough customers to rely on statistically significant measurements do we shift to more quantitative methods. Even then, I will often advise people to keep that element of qualitative research, because at the end of the day, the data only tells you what’s working or not working, but not why.
Even simply doing some qualitative research when you’ve got lots of data can lead to quicker conclusions about why people are behaving the way your measurements show.
Outcomes are a team sport
In Ash’s new book, he talks about “declaring outcomes as a team sport.” In other words, it’s not enough for leaders to issue top-down directives — this approach loses the collective intelligence of the team, is more subject to confirmation bias, and doesn’t encourage risk-taking.
Coming from a kind of Agile or Scrum mindset, one of the things I describe in the book is the idea of a “Lean Sprint,” comparable to the Design Sprint, or Agile Sprint, or Scrum Sprint, whichever background you come from. In the sprint, teams meet before a defined iteration, which could be anywhere between two and four weeks, the cycle time where we do the Build-Measure-Learn loops.
We start off with some ideas and go run some experiments. We build some things, we measure, and then we get back together and discuss the results.
In the beginning, when the experiments are being defined, the number one goal for the teams is to declare some outcomes up front. For many people, being put on the spot makes them reluctant to commit to an outcome, or leads them to make very safe declarations. By making it a team sport, we allow everyone on the team to really participate.
A Practical Guide to Less Uncertainty
At Notion, we’ve seen that without clearly defined outcomes, there can be a reluctance for many teams to use data to drive better decisions. One of our primary goals is to help teams focus on the metrics that matter so they can actually change their approach to better serve customers and be more successful. I wondered how the teams Ash works with implement these principles in practice.
The way that plays out is that someone presents an idea for an experiment, and if it sounds promising, everyone takes a sheet of paper or a Post-It Note and writes down their expected outcome.
If the goal is, for example, to increase activation, one person might write, “This experiment will be successful and will result in a 5% increase in activation.” Or they could say, “between a 5 and 8% increase.” Everyone writes down their expected outcomes, and then in the meeting before the experiment starts, we’ll look at everyone’s outcomes and have a discussion.
In the beginning, there’s going to be a wide range of opinions and we just want to understand where people are coming from. Why do some people think this is going to be the best experiment ever, and why are others more negative? We’ll talk through that, but we still run the experiment that iteration, if it’s selected.
Once the results are in, we do the exact same thing. Now we’ve got actual data, and we go around and see who was closest. We award a simple token prize every week, so the person who gets closest gets a little recognition for being the best at predicting the outcome.
The real goal of all of this is to improve the judgement of the team. It diffuses the seriousness of having to be right. At the same time, what we have found is that, over time, people’s judgements get better. Those who were way off begin to calibrate, almost unconsciously, and they don’t swing as widely as before.
Over time, the ranges become smaller and the spread between team members’ guesses gets closer and closer, and that’s a win for everyone.
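As a rough illustration of this prediction ritual, here is a hypothetical Python sketch: everyone records a predicted outcome before the experiment, and afterwards the team sees who came closest and how wide the spread was. The names, numbers, and single-number predictions are all made up for the example.

```python
# Hypothetical "outcomes as a team sport" helper: predictions are
# each person's expected % lift for the experiment's target metric.

def closest_predictor(predictions, actual):
    """Return the team member whose prediction was nearest the actual result."""
    return min(predictions, key=lambda name: abs(predictions[name] - actual))

def spread(predictions):
    """Range between the most optimistic and most pessimistic prediction."""
    values = predictions.values()
    return max(values) - min(values)

# Predicted % lift in activation for this iteration's experiment.
predictions = {"Ana": 5.0, "Ben": 8.0, "Cal": 2.0}
actual_lift = 4.2  # what the data showed once results were in

print(closest_predictor(predictions, actual_lift))  # → Ana
print(spread(predictions))                          # → 6.0
```

The spread is the number Ash says shrinks over time: as judgement improves, the gap between the most optimistic and most pessimistic guesses narrows.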
Getting Buy-In from Management
We have often seen data-driven culture emerging within teams, or driven by team leaders, even before the entire company sets clear KPIs. Innovators within a department can start to influence the culture of the whole company by tracking metrics within their team or among different teams in the same product or department. However, some teams run into issues when they try to communicate this approach to management. I wondered if Ash had thoughts around talking to leadership.
Management typically doesn’t care as much about the experiment-level learning. For example, “is this the right price? Is this the right feature? Is there another one? Are we attacking the right problems?”
Oftentimes management is more interested in the traction metrics. “Are we moving in the right direction? Is the business model starting to work? Are we seeing engagement or monetization?” They want to see that bigger story.
I recommend inviting management to the Lean Sprints if they want to come and participate, but much like the Agile or Scrum sprint, it’s more of an activity done by the core team.
Outside of that, on a monthly to quarterly basis, teams get together to create a progress report for management. There the story is a bit different. It’s not all the experiments we ran; rather it’s, “Here’s what the business model was a month ago; here’s what it is today. We’ve learned a bunch of things and we have made certain decisions.” It’s more about showing, through the metrics, how the business model has improved, if it has. If it hasn’t improved, what did we learn that we believe will cause future improvements?
There is a different dialogue that happens in that meeting and it’s more at the business model level, more at the strategy level. Are we staying the course? Are we pivoting? The metrics are used to back that up. I often will tell the teams that you can do all the learning in the world, but management or the stakeholders don’t really care about learning, it’s the results that matter. In that meeting, we’re aiming to communicate those results. That same approach works in getting buy-in.
More and more companies are beginning to pay attention to Lean and Lean Startup because they’ve seen it spread. There is a level of curiosity. When I get called in by corporate, I usually advocate picking a project or two that have high visibility and create the right team structure. You have to get that buy-in up front, that we’re going to operate this team in a different manner than usual, simply because innovation will require different sets of metrics because there’s lots of uncertainty.
Turning Goals into Metrics
Before we wrapped up, I wanted to return to the problem of turning goals into metrics. How can teams measure their progress effectively and use the data to create better outcomes?
Falsifiability comes from the scientific method. It’s just a fancy way of saying, “Declare an outcome that is specific and measurable.” Oftentimes we’ll come up with a statement like, “Because I’m an expert, I can drive more customers to the website,” or “I can drive more signups.” That’s an assumption, and the problem is that it’s a very vague statement. In science, if something is vague, if we can never disprove it, then we can never prove it to be right either.
If I take that statement about being an expert and driving traffic, I could do a number of things. I could tweet, I could buy ads, I could do a whole bunch of things. As long as I get one more signup than I did the previous week, I could convince myself that I was right and that the assumption is validated. In the science world, we call this the induction trap. If we go out studying swans and observe a hundred white swans, or a thousand, we could declare all swans to be white, but it only takes one black swan to disprove that.
Instead of having this open-ended, “I’m going to see good things happen, then I’m going to declare success,” we actually want to look for something testable and specific.
Instead of just saying, “I’m an expert and will drive signups,” you’ll want to get specific on what actions you’ll take, the time frame, and what measurable outcome you will get. You might instead say, “I’m going to write a blog post, in that blog post I’m going to invite people to sign up, and we’re going to measure that for two weeks. In that two week period, I’m going to get over a hundred signups.”
When you break it down that way, with a time box, a specific, measurable outcome, and a specific repeatable action, it becomes a lot more testable. You take something that starts off as a leap-of-faith or an assumption, and you break it down into something that’s more testable and specific and as a result will be falsifiable as well.
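A claim broken down this way can even be expressed as a tiny data structure with a pass/fail check. This is an illustrative Python sketch based on the blog-post example above, not anything from Ash’s books; the fields and the 100-signup threshold are assumptions.

```python
# Illustrative falsifiable experiment: a repeatable action, a time box,
# and a declared threshold that the observed result either meets or not.
from dataclasses import dataclass

@dataclass
class Experiment:
    action: str        # the specific, repeatable action
    days: int          # the time box
    metric: str        # what we measure
    threshold: float   # the declared, falsifiable outcome

    def evaluate(self, observed: float) -> bool:
        """True if the declared outcome was met; False disproves it as stated."""
        return observed >= self.threshold

exp = Experiment(
    action="publish a blog post inviting readers to sign up",
    days=14,
    metric="signups",
    threshold=100,
)
print(exp.evaluate(87))   # → False: the assumption is disproved as stated
print(exp.evaluate(120))  # → True
```

Either result is useful: a miss cleanly falsifies the assumption, and a hit validates it for this specific action and time box rather than for “being an expert” in general.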
Thanks to Ash for taking the time to talk with me. Read his blog at Leanstack.com for lots of great free resources that can help you define your goals, and check out Notion to help you start communicating and collaborating on your team’s metrics. Check out other interviews on Lean Product and data-driven product management at Mind the Product blog and on Product Coalition.
Would love to hear how you’re using scientific principles in your product or why you don’t… Comment or tweet at me @laurex.