Forward Digital

FAIL FASTER? Not in Product Experimentation

Important differences in product testing

Marketing testing has always been about the ability to "fail faster" (as famously put by Avinash Kaushik in his 2009 "This I Believe" Occam's Razor manifesto). But in product testing, failures can be incredibly costly—especially if engineering has taken the time to build anything for you to test—and product testing is frequently completed server-side or even conducted in the physical world. Because of these expensive risks, it's more important in product testing to fully understand the customer journey—qualitatively and quantitatively—before you start testing.

1. Research first
Ask, What do I want to learn?

To earn that understanding, you’ll need to analyze both KPIs and KPwhys (credit to Paula Sappington, who first said this to me). On the KPwhy side, it's essential to understand and empathize with your customer’s pain points, which become the basis for your qualitative research.

Grounding product development in customer pain points ensures that products are solving real problems and meeting actual needs in the market (rather than wasting cycles on expensive “fail faster” trial and error methods that work great in the marketing testing world). Metrics are often fiercely debated here, but the simple answer is to listen to your customers.  

In his TLC talk inspired by Paula’s KPwhys, Matt Gershoff describes how the inspiration for the Conductrics platform shifted from a traditional, largely ML-driven decision engine to the “Unified Customer Experience Platform” it is today, with the addition of integrated surveys that measure customer satisfaction, NPS, customer experience, task completion, and even open-text feedback.

2. Practice customer empathy to define what success looks like
Ask, What do I need to measure to learn that?

The key thing Matt discussed in his talk about the importance of adding qualitative research was what he called “big NPS.” The idea of “big NPS” is that the primary purpose of organizations—the reason we exist—is to enrich customers' lives.

On the other hand, “little NPS” (the metric we all know and love to hate) is just a measure of how well (or how badly) we are doing at enriching customers’ lives. So Matt (paraphrasing Frederick F. Reichheld) encourages us to think of analytics and experimentation less as a factory spitting out new products and reports and more as a front line where the company listens to customers and advocates for what needs to change.

3. Use rapid experimentation methods to gather evidence to support your build
Ask, What would I need to build to measure that?

Once you fully understand the customer pain points, you can use rapid experimentation methods to gather supporting evidence. This is the evidence you should use to justify any new features you build for product experimentation.

Just to reiterate: The evidence that you gain from testing can’t be obtained without doing the previous steps.

Why? As mentioned earlier, you’ll be doing much of your product testing server-side, which means you are literally developing and launching something new on your live site—and it must not break anything! If you’re going to have the confidence to do this, you need evidence first. And it’s that evidence on your engineering roadmap that will keep your costs low and your hair from catching fire, especially if you’re among the unfortunate majority who don’t have the sample sizes needed to run an experiment in a reasonable timeframe without taking incredibly large swings.
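To see why small samples force big swings, here's a rough back-of-envelope sketch (my own illustration, not from any talk referenced here) of the standard two-proportion sample-size formula, with z-values hard-coded for a two-sided alpha of 0.05 and 80% power:

```python
from math import ceil, sqrt

def sample_size_per_arm(p_base, relative_lift):
    """Approximate visitors needed PER ARM to detect a relative lift
    on a baseline conversion rate (normal approximation; z-values
    hard-coded for two-sided alpha = 0.05 and 80% power)."""
    z_alpha = 1.96  # two-sided 95% confidence
    z_beta = 0.84   # 80% power
    p_new = p_base * (1 + relative_lift)
    p_bar = (p_base + p_new) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_base * (1 - p_base)
                                 + p_new * (1 - p_new))) ** 2
    return ceil(numerator / (p_base - p_new) ** 2)

# A tiny 2% relative lift on a 5% conversion rate needs an enormous sample...
small_swing = sample_size_per_arm(0.05, 0.02)
# ...while a 20% relative lift (a "big swing") needs only a few thousand visitors.
big_swing = sample_size_per_arm(0.05, 0.20)
print(small_swing, big_swing)
```

Detecting a tiny lift demands hundreds of thousands of visitors per arm, while a big swing needs a few thousand, which is exactly why low-traffic teams have no choice but to swing big.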

You’ll want a ton of evidence upfront to take those kinds of swings. Here’s how to get it.

With your KPwhy evidence in hand, plan intelligent experiments to layer on quantitative data. Here are some ideas you can try: 

  • Judo test: Use this test if your product team has a new feature or product they want to develop but isn't sure customers will respond to it. If a competing company already offers this product, put the competitor's product or feature on your site (restyled so its origin isn't obvious) and see how many clicks you get (clicks only; visitors can't actually order). Have a plan for how many clicks you'd need to justify the next test or further development.

  • Front door/painted door test: Similar to the judo test, this test lets you quickly find out whether your customers might be interested in a new product, feature, or capability, without you having to develop anything. Simply put a link to that feature or capability on your site as though it were available. How many clicks did it get? Enough to justify the development?

  • Dry wallet test: Use this to test revenue models or specific pricing before rolling out the product. The key here is to not create the full backend experience. You don’t need to be able to process payment using the new payment method you’re testing—fake it. 

  • Sketch test: Consider a sketch test where you ask actual customers or prospects to react to a drawing you sketch out on paper (maybe of an app) with real-time editing. It's easy to do with paper and pencil!
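For the judo and painted-door tests above, the key discipline is committing to a decision rule before launch. Here's a minimal sketch (the function name and numbers are illustrative, not a standard API) using an exact one-sided binomial test: how surprising would the observed click count be if customers were merely baseline-curious?

```python
from math import comb

def painted_door_pvalue(clicks, impressions, baseline_rate):
    """Exact one-sided binomial p-value: the probability of seeing at
    least `clicks` clicks if the true interest rate were only
    `baseline_rate` (e.g. the click rate on a control link)."""
    return sum(
        comb(impressions, k)
        * baseline_rate**k
        * (1 - baseline_rate) ** (impressions - k)
        for k in range(clicks, impressions + 1)
    )

# Pre-commit to the rule BEFORE the test runs, e.g.:
# "develop only if we can reject a 2% baseline interest rate at p < 0.05."
p = painted_door_pvalue(clicks=38, impressions=1000, baseline_rate=0.02)
print("justify development" if p < 0.05 else "pass for now")
```

Writing the threshold down first keeps the team honest: the click count either clears the pre-registered bar or it doesn't, with no post-hoc rationalizing.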

Nailing product experimentation

The standard product lifecycle is: Build ➡ Measure ➡ Learn (hopefully). Unfortunately, we so often fail to learn anything because we didn’t take the time to plan the definition of success into the build. Getting product experimentation right means turning that lifecycle around by first asking: What do I want to learn? Then: What would I need to measure to learn that? Last: What would I need to build to measure that? Then, and only then, can we Build ➡ Measure ➡ Learn. ✨Guaranteed✨

Credit where credit is due

Much of the above was learned from and inspired by years of co-teaching D4D and experimentation best-practice workshops with Bri Amarillas, Innovation Catalyst & Sr Marketing Manager at Intuit. Additional resources not referenced above are listed below.

Additional resources

Want to talk more about product testing? Got a different POV?

Let’s connect.