Asking the right questions: Ben Yoskovitz on Lean Analytics

Laure X Cast · Published in Product Coalition · 10 min read · Apr 5, 2017

How do you know when you’ve made a good decision? Sometimes, it’s obvious, but more often, decisions are made based on gut instincts and the results of each decision are unclear. The Lean Startup encourages a “measure and learn” step that can help decisions be more justified and repeatable, and allow your company or product to get traction with customers more quickly.

Lean Analytics helps you nail the “Measure” step in Lean Startup.

Ben Yoskovitz has been an entrepreneur in the “web space” for over 20 years, and is the co-author, along with Alistair Croll, of Lean Analytics, perhaps the most influential book about analytics focused on startups. Most recently, Ben co-founded a new company, Highline BETA, a startup co-creation company that launches new ventures with leading corporations and founders. Previously, he was VP Product at VarageSale, VP Product at GoInstant (acquired by Salesforce in 2012), and CEO/Co-Founder of Standout Jobs, which he sold in 2010. He was a Founding Partner at Year One Labs, an early-stage accelerator, and remains an active angel investor.

At Notion, we are big fans of the Lean Analytics framework, and have been influenced strongly by the concepts developed by Ben and Alistair. Ben was kind enough to answer some of the questions we thought would be most interesting to anyone on a team looking to become more data-driven.

I first asked Ben how he became interested in metrics and analytics.

There were a couple “aha!” moments for me around metrics and analytics. The first was when I read Four Steps to the Epiphany by Steve Blank. The book focuses on customer development for B2B enterprise startups/companies. I remember thinking to myself, “Steve wrote this book just for me, because every mistake he describes, I had made.” It was a very eye-opening experience for me.

The second was when I co-founded and launched Year One Labs, an early stage accelerator. We invested in 5 companies, and put them through a Lean Startup methodology. It was during this time that I saw just how difficult it is to implement Lean principles effectively; specifically around the iterative cycle of Build->Measure->Learn.

Teams struggle with what to measure, why and when — and the cycles became slow and frustrating for everyone.

I realized at that moment that we needed to dig much deeper into Lean Startup around analytics and help people figure out what they should track, why and when, so that they could apply the Lean principles effectively and build better products/startups.

Tracking KPIs in Notion

In our work with companies trying to implement data-driven practices at the team level, we find again and again that companies struggle to identify the “right” things to track that will meet the model set out for “good metrics” in Lean Analytics, specifically that they be comparable, understandable, and actionable.

As teams have access to more and more data just by using software tools to monitor and manage process and customer relationships, it can be difficult to focus on what’s important. Ben told me that in his experience, the biggest challenge is focus, and figuring out what data matters.

This comes down to understanding what drives your business, and what problems you’ve identified that you need to solve. When you identify a meaningful enough problem to solve, then you can figure out what metric (or metrics) you need to track to measure the problem and any solutions you implement to solve the problem.

Data is only useful if you know what questions to ask of the data, and/or if you know what data is indicative of meaningful behaviours/actions your users are taking. You have to tie the data that matters to value creation for users/customers — then you know what to focus on.

Data is only useful if you know what questions to ask of the data.

Comparing metrics approaches: image sourced from growhack.com

In Lean Analytics, Ben and Alistair outline key metrics for different company stages, organized around a concept they call the “One Metric That Matters”: the single KPI that should drive the activity and goals of the business at a given stage and in a given vertical. I wondered how Ben sees the “One Metric” in the context of guiding different teams in a company, like marketing, engineering, and customer success.

I’m not a fan of thinking about companies as groups of functional silos — marketing, engineering, customer success, etc. Truth is, those teams should be sharing metrics of success because the metrics should be focused on value creation to the end user/customer.

There may be some metrics that are focused by function (e.g. customer success may care about retention/up-sells; engineering may care about CPU optimization or performance) but those metrics have to “bubble up” into higher level metrics that are indicative of value creation.

If improving performance, as an example, doesn’t lead to more customer value, then is it really worth focusing on? I think of it like a layer cake — where you have key metrics on individual projects that ideally correlate to higher level metrics (maybe at a departmental or major initiative level) that correlate up to an overall business health indicator.
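The “layer cake” Ben describes can be sketched as a simple rollup: project-level metrics feed department-level ones, which feed a single business health indicator. This is an illustrative sketch only; all metric names, scores, and the averaging scheme are invented for the example, not taken from the book.

```python
# Hypothetical "layer cake" of metrics: project scores roll up into
# department scores, which roll up into one business health indicator.

def rollup(children, weights=None):
    """Combine child metric scores (each normalized 0-1) into a parent score."""
    weights = weights or [1] * len(children)
    total = sum(w * c for w, c in zip(weights, children))
    return total / sum(weights)

# Project-level metrics (normalized 0-1; invented numbers)
page_load_score = 0.9    # engineering: performance project
onboarding_score = 0.7   # product: activation project
upsell_score = 0.6       # customer success: expansion project

# Department-level rollups
engineering = rollup([page_load_score])
product = rollup([onboarding_score])
customer_success = rollup([upsell_score])

# Top layer: overall business health indicator
business_health = rollup([engineering, product, customer_success])
print(round(business_health, 2))  # 0.73
```

The point of the structure is the correlation Ben mentions: if improving a project-level score (say, page load) never moves the layers above it, that is a signal the project may not be creating customer value.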

This line of thinking naturally makes me wonder about accountability when using metrics and KPIs. So often, when companies set global metrics, there can be a “tragedy of the commons,” where everyone knows the goals but assumes someone else is solving the problem. As product people, we may feel a sense of responsibility for our company’s success, but still think that driving a given metric is owned by another department. For example, we might assume that Word of Mouth, the key Virality-stage metric for a SaaS company, is something our marketing department is solely responsible for.

Ben refutes that and makes it clear that the “One Metric” should be the starting point for everyone’s work. If you are in a SaaS company in the Virality stage, then everyone’s work should contribute to that goal and to adding value for your users.

Image from “Lean Analytics for Intrapreneurs”

I asked Ben what to do if you are a PM who understands this concept, but works in a top-down culture where the roadmap is dictated by the executives.

If the approach to product/feature development is very top-down, then it’s unlikely that the people “at the top” are going to focus on metrics that matter. If they did, they would be inclined to focus on a bottom-up approach to product/feature development (the “bottom” in this case being the users/customers and those that are closest to them.) So I don’t think these two things work well together — you can’t force in the use of metrics successfully with a culture that’s not designed to leverage or accept them.

You can’t force in the use of metrics successfully with a culture that’s not designed to leverage or accept them.

This made me wonder if big companies, which are typically more slow-moving and hierarchical, use lean metrics to drive their product development.

It’s definitely not easy to change a big company’s culture (it’s hard to change a small company’s culture too!). I see big companies doing a lot of things, and it takes time and a lot of effort in a big company to make a real difference.

Generally, I think big companies are reflecting more seriously and honestly about the mistakes they’ve made in the past, where they used opinion above all else to make big investments and decisions. They’re realizing that with the data they had, they could have made more informed and ultimately better choices. In terms of how to implement a data-driven culture, companies are trying a variety of things, from training, to more testing, to hackathons (to prove that things can be built quickly, which is a key “aha!” moment for them), to internal accelerators and more.

It’s important to note that it’s not just about using more data — it’s about how that data is used, which is why I think it has to come with implementing more Lean methodologies around product and new venture development. Big companies can start small — on a few “side” projects — prove that the methodology works and then scale from there.

For startups, on the other hand, one of the biggest focus areas is often achieving “product-market fit,” something I’ve also gotten great advice about from Dan Olsen, author of The Lean Product Playbook, and Ash Maurya, author of Running Lean.

Startups are looking for the signal that they’ve achieved product/market fit, and I asked Ben how that might look for companies using Lean Analytics. For example, if a company has nailed the “Empathy Stage,” defined as knowing you are solving a problem people will pay for, would he define that as having achieved Product/Market Fit?

I don’t think you get to Product/Market Fit right after the “Empathy Stage.” After Empathy, the next stage is Stickiness, which is about proving that you’re solving a problem in a meaningful way for early adopters. You’re tracking usage data of some kind at this point, but at best you have Product/Solution Fit.

Product/Market Fit would come around the Virality Stage, when you figure out how to scale your channels and you’ve moved beyond your early adopters. In my mind if you can move from early adopters to later adopters and still prove Stickiness, then you’re around the Product/Market Fit stage.

Notion helps team leaders track metrics over time.

Lean Analytics has a wealth of compelling advice for establishing a metrics practice, and you should read the book for guidance on how to succeed at every phase of the company lifecycle. One of the important distinctions Ben and Alistair draw is between “experimental” and “accounting” metrics, something I think about a lot at Notion, where we’re helping companies track their internal data over time. In an article on the One Metric, they point out two approaches to making data actionable:

For “accounting” metrics you use to report the business to the board, investors, and the media, choose something which, when entered into your spreadsheet, makes your predictions more accurate.

For “experimental” metrics you use to optimize the product, pricing, or market, choose something which, based on the answer, will significantly change your behaviour. Better yet, agree on what that change will be before you collect the data.

Often, “analytics” is primarily associated with “experimental” metrics, but data that can make your predictions more accurate can be game-changing. This made me think about the difference that’s often cited between “lagging” and “leading” indicators, which I asked Ben to clarify.

A lagging indicator is one that “reports the news.” It tells you about something that’s already happened. A good example is churn (say the # of customers that abandon your service on a monthly basis.) When they’re gone, they’re gone. It’s hard to get them back. And if churn is too high, and you work to improve it, the cycle times are often slow because you have to implement a change and then see if it impacts customers over time (before they churn or don’t) to measure progress.

A leading indicator is one that “predicts the news.” By that I mean that you find a number that’s correlated (or causal) to something else that matters to your business. An example is customer complaints. You can track customer complaints very quickly (in real-time if you want, or daily) and if you see complaints going up you can react very quickly. It’s hard to react to churn as quickly. But customer complaints is likely a leading indicator of churn — if it’s going up, you can bet more people will abandon your service. You can also dig in very quickly to why complaints are going up, fix the problem, and be confident that if you lower complaints, churn will also drop.

Most of the time you start by tracking lagging indicators (another example is revenue — it’s a lagging indicator because it doesn’t really tell you *why* someone purchased, and it’s hard to test/experiment with.) If you can figure out what leading indicators predict revenue, you can experiment with those much more quickly. So ideally you move from tracking lagging indicators to identifying, tracking and experimenting with leading ones.
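The lagging-versus-leading distinction above can be made concrete with a small sketch: monthly churn is only visible after customers have already left, while a daily complaint count can trigger an early warning you can act on. All numbers and the alert rule here are invented for illustration.

```python
# Churn is lagging: it is computed after the month closes, when the
# customers are already gone. Complaints can act as a leading indicator:
# they are visible daily and tend to precede churn.

def churn_rate(customers_start, customers_lost):
    """Fraction of customers lost over a period."""
    return customers_lost / customers_start

# Lagging: monthly churn, known only in hindsight (invented numbers)
monthly_churn = {"Jan": churn_rate(1000, 30), "Feb": churn_rate(970, 68)}

# Leading: daily complaint counts, visible in near real time
complaints = {"Feb-01": 4, "Feb-02": 5, "Feb-03": 14, "Feb-04": 16}

# A simple early-warning rule: flag when complaints exceed twice the
# trailing baseline, so you can dig in before churn is ever reported.
baseline = (complaints["Feb-01"] + complaints["Feb-02"]) / 2
alert = complaints["Feb-03"] > 2 * baseline
print(round(monthly_churn["Feb"], 4), alert)  # prints roughly 0.0701 True
```

The alert fires on Feb-03, weeks before February’s churn number would be available, which is exactly the speed advantage Ben describes.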

If you can figure out what leading indicators predict revenue, you can experiment with those much more quickly.

Since Lean Analytics has changed my thinking on many aspects of growth and product development, I asked Ben what kinds of changes he has seen in companies that start using Lean Analytics as a resource for establishing their metrics and KPIs.

Generally what I’ve heard is that Lean Analytics has helped companies structure and codify a better process for new venture/product/feature development. It’s provided them with a framework (without being overly prescriptive and overbearing) that they can go back to as a reference guide over time. I’ve seen teams and companies implement Lean Analytics (or pieces of it) and start to ask more honest questions about their progress and how they build things. They find it approachable and usable day-to-day, which is great because it goes beyond just the “theory” of things.

Thanks to Ben for his time and excellent thoughts. I hope you’ll check out Lean Analytics for a great perspective on how even a little data can have a big impact on the way you grow your business, become customer-focused, and learn continuously.

You can find other articles about Data-Driven Product Management at the Notion blog and at Product Coalition. Please follow me if you’re interested in data, learning, and product management. Thanks!


Learning addict. Canadian. Founder of something new. Ex-Marco Polo, Notion, Olark, Indie Film. Curious about creativity, tech, & people. linkedin.com/in/xplusx