Build. Measure? Learn?

Everybody knows the mantra: ‘Build, Measure, Learn’. And yet we see many clients stuck in a never-ending build cycle.

Building is comfortable. It’s natural. (So natural that teams start building before they’re ready.) Then your team drops a load of features, and the next day there is a new sprint with a backlog of features to deliver. And maybe you’ve seen this: the day after deploy, with twelve hours of metrics, there is a verdict:

Nobody is clicking on the new button.
Our ‘MVP’ was too ‘M’ to be ‘V’.
Oh no, our users don’t even understand how this works.

A new build cycle starts with almost no time for reflection, learning or hypothesis testing.

Sometimes the best thing to do, after a release, is to get your team off of that particular nest of features and let the release gel. Give your users a chance to figure things out. Watch them to see if they end up at a new level of utility. Fix the important bugs. Talk to users about what they are experiencing. Work on refactoring. Answer support calls. Polish the feature set you released the sprint before. Build a prototype of a dream feature. Rent out your dev team to another company. Run a bake sale. Take notes. Measure. Learn.

It’s easier said than done. The memory of how this thing works is fresh in everyone’s mind. Everybody is dissatisfied with the stories that had to be kicked out at the end of the sprint. Everybody wants to keep working on the thing they were working on before. Nobody wants to lose momentum.

But to build with intent, you have to wait for understanding to come. Here are a few ways to hold space for ‘Measure / Learn’ in your development cycle.

Don’t deploy if you can’t measure. Even if that means manually combing through event logs or looking at basic analytics, make sure you’ll have a strong enough signal to distinguish ‘awesome’ from ‘less awesome’.
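To make that concrete, here’s a minimal sketch of what ‘instrument before you ship’ might look like. Every name in it is hypothetical (the track() helper, the button selector, the event name); the point is only that the measurement hook exists before the deploy does.

```typescript
// A minimal sketch, not a prescription. The track() helper and the
// button selector are hypothetical; swap in whatever analytics
// pipeline you already have. Even an appended server-side log line
// is enough signal to tell ‘awesome’ from ‘less awesome’.
function track(event: string, properties: Record<string, string> = {}): void {
  // Here we just log; in practice this would post to your analytics endpoint.
  console.log(JSON.stringify({ event, at: new Date().toISOString(), ...properties }));
}

// Instrument the new feature *before* it ships.
const newButton = document.querySelector<HTMLButtonElement>("#new-feature-button");
newButton?.addEventListener("click", () => {
  track("new_feature_button_clicked", { page: window.location.pathname });
});
```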

Know what you’re going to measure (and for how long) before you make your next build decisions. Have a defined period during which you’re going to let the feature release ‘unkink’, while you give your users time to adapt. Set expectations that your next actions on this feature set will be based on metrics that are allowed to gel over time.

Have a structure for what you might learn, and how you would act on it. For example, suppose you remove a barrier to sign-up, in hopes of getting more completed registrations. But you’re also aware that removing that barrier may just let more unqualified leads into your support queue. Brainstorming what might happen if your hypothesis is wrong will help you include metrics (e.g. the number of support interactions) that let you correct course more quickly than metrics that can only validate your hypothesis. If you can’t think of anything you would do differently no matter what you saw, then you are actually still in the ‘Build’ part of the cycle.
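As a purely illustrative sketch of that structure, the sign-up example above could be written down as data, with the invalidating metrics sitting beside the validating ones. All of the field and metric names here are made up for illustration:

```typescript
// Illustrative only: a hypothesis written down as data, so the
// "what if we're wrong?" metrics are decided before the release,
// not after. All names here are hypothetical.
interface Hypothesis {
  change: string;
  expectedOutcome: string;
  validatingMetrics: string[];    // numbers that would confirm the bet
  invalidatingMetrics: string[];  // numbers that would tell you to correct course
  measurementWindowDays: number;  // how long to let the release ‘unkink’
}

const removeSignupBarrier: Hypothesis = {
  change: "Remove the email-verification step from sign-up",
  expectedOutcome: "More completed registrations",
  validatingMetrics: ["completed_registrations"],
  invalidatingMetrics: ["support_interactions", "unqualified_lead_rate"],
  measurementWindowDays: 14,
};
```

Deciding the measurement window and the course-correcting metrics up front makes them part of the release, not an afterthought.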

Engage the team in delivering the metrics. Most teams feel they’re on the hook to deliver the product, but assume delivering the metrics is up to somebody else (sales? marketing?). If the team is engaged in delivering metrics, then those metrics become important to the team. You’ll likely also need to enlist some engineers to help interpret the metrics, because they know things you don’t about how the features were implemented.

Get help interpreting the metrics. We all suffer from confirmation bias. The same data can read as invalidation to some and validation to others, so get some different perspectives. Stay accountable to yourself and your team to invalidate as well as validate a hypothesis. And keep paying attention to metrics even when they look like bad news. (There is almost always bad news before there is good news.)

Talk to real people. Revisit the concept tests, but this time do them with the product you actually built. Gauge how well your paper prototype anticipated users’ reactions to the product as delivered. How would you prototype (and test) differently next time? Higher fidelity? Lower fidelity with more rounds of feedback? You may discover something you missed the first time because of the lack of fidelity in your concept or usability test.

Until you’ve had a chance to learn, put the ‘Build’ focus somewhere else. It’s a good idea to have two or more independent product components to work on. That way, you can shift to the second area when you have a ‘Measure / Learn’ opportunity in the first.

At Lab Zero, we never get tired of building new things. But decades of experience have helped temper that enthusiasm to make sure that each iteration delivers value on top of the last one. Holding space for ‘Measure / Learn’ makes all of our ‘Build’ more meaningful, and ultimately, more valuable.

Continue the conversation.

Lab Zero is a San Francisco-based product team helping startups and Fortune 100 companies build flexible, modern, and secure solutions.