Last week, Clearhead joined Optimizely for their Spring Into Action webinar series. The webinar focused on helping digital retailers develop test ideas that would impact critical KPIs. Rather than present a laundry list of successful experiments, we shared an actionable framework for developing and prioritizing hypotheses.
Let’s get real: test ideas are cheap. There are multiple sites dedicated to testing examples and case studies. Yet, having tested thousands of hypotheses for our clients, we can tell you that there is no such thing as a universal best practice. Even in the same industry, every business is unique, with unique problems that require unique solutions.
The magic of optimization is not implementing 10 ideas that worked for another brand, or one idea that worked for 10 other brands. The magic is mapping prioritized hypotheses back to problems, and problems back to goals. Metrics will rise in proportion to your ability to solve problems.
So, rather than working through a list of random test ideas, wouldn’t you rather have a repeatable process for producing and prioritizing the right hypotheses? The results from this unifying framework are test ideas that:
- Align to your biggest business goals
- Map to your biggest business and UX problems
- Impact an important customer segment at a critical moment
We have seen that great hypotheses are a direct result of understanding the context of your unique business and customer experience problems.
While we don’t believe in universal test ideas, we do believe in a unifying test ideation framework. Every change and every investment – front-end, back-end, end user or business user – should be managed through a process of data-driven problem and solution mapping.
If you follow the framework, you will answer two vital questions about every experiment:
- What problems are the changes solving?
- How will you know if the change was successful?
These two questions are the baseline requirement for investing time and resources at Clearhead. We have never found a decision so big that it is exempt from this process. In fact, the bigger the goal or problem, the more it should be held to this standard. Experimentation is not only about small, iterative change. It also applies to large, time-boxed disruption.
With that established, let’s dive into the simple but rigorous approach we take for getting to great solution hypotheses.
1. Start with goals
In our experience, when we ask clients if they know and agree on their goals, everyone in the room will nod their head. Yet when they are pressed to define those goals, there is often ambiguity or conflict. Being on the same page is vital for developing the right test ideas.
Ultimately, the business owner should produce a short list of goals the entire team can work towards, then department and individual goals can roll up to those master goals. When it comes to goal definition, ask yourself:
- Are they widely understood?
- Are they actionable?
- Are they measurable?
- Are they time-based?
- Do they have defined baselines and targets?
A bad example of a goal is: “We want to improve the mobile user experience.”
A good example of a goal is: “In 2016, we want to increase smartphone purchase conversion rates by 10% year-over-year.”
2. Next tackle problems
Once you have defined goals, move on to problem identification. This is usually much easier than defining goals since problems are more readily apparent and seemingly limitless. But there is still an art and science to good problem definition.
The most important thing to remember is to avoid sneaking a solution hypothesis into your problem statements. Problems should be driven by data, not backed into from a solution. Other things to ask yourself as you capture important problems:
- Why is it a problem?
- How does the problem impact the business or end user?
- Is it a root problem?
- Do you have data to confirm that it is a significant problem?
A bad example of a problem is: “I think we need a responsive website.”
A good example of a problem is: “Our customers have a problem understanding the differences between, and the value propositions of, products on PLPs.”
Finally, determine which problems are valid, important and worthy of hypothesis development. You may find that some feel right but require further customer or analytics research to warrant an investment. Others, even though they are valid issues, might not directly influence your goals. Prioritize your problems based on these criteria before you jump into solutions.
3. Then develop hypotheses
Now, finally, it’s time for test ideas. Once you have prioritized problems that are mapped to your goals, you can move on to developing hypotheses. Each hypothesis should be clearly mapped back to a problem to ensure you are heading in the right direction.
As you create hypotheses, imagine that your audience is a designer, a developer and an analyst. Each of those people should have a good idea of how to execute that experiment based solely on your hypothesis. So when you articulate a hypothesis, try to:
- Be specific about the change you are contemplating
- Describe what you expect the result of that change to be
- Clearly define the metric you can most confidently measure success by
We also recommend that each hypothesis start with a declarative “I believe that…” statement followed by an aspirational “If I am right…” statement.
A bad example of a hypothesis is: “I think we need to redesign checkout.”
A good example of a hypothesis is: “I believe that if we add geo-targeted shipping commitments on the shipping page of checkout, customers will have more confidence in ordering from us. If I am right, we will see increased conversion and reduced abandonment at the shipping page of checkout.”
If you follow these steps, you will end up with a map that includes a few goals, many problems and potentially even more hypotheses. Everyone will be working in concert, focused on the same goals and problems, headed towards the same definition of success.
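For teams that track this map in a spreadsheet or tool, the structure is simple enough to model directly. Here is a minimal sketch in Python, assuming illustrative names and example records (this is not Clearhead's tooling, just one way to represent the goal → problem → hypothesis chain described above):

```python
# A minimal sketch of the goal -> problem -> hypothesis map.
# All names and example records are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Goal:
    statement: str           # e.g. a specific, time-based, measurable goal
    metric: str              # the KPI the goal is judged by

@dataclass
class Problem:
    statement: str           # data-driven problem, no solution baked in
    goal: Goal               # every problem maps back to a goal
    evidence: list = field(default_factory=list)  # research/analytics backing it

@dataclass
class Hypothesis:
    belief: str              # "I believe that..."
    expectation: str         # "If I am right..."
    success_metric: str      # the metric success is measured by
    problem: Problem         # every hypothesis maps back to a problem

goal = Goal("Increase smartphone purchase conversion 10% YoY in 2016",
            "smartphone conversion rate")
problem = Problem("Customers lack confidence in delivery timing at checkout",
                  goal, evidence=["high abandonment on the shipping page"])
hypothesis = Hypothesis(
    belief="Adding geo-targeted shipping commitments on the shipping page "
           "will increase customer confidence",
    expectation="Conversion rises and abandonment falls at the shipping page",
    success_metric="checkout conversion rate",
    problem=problem,
)

# Any hypothesis can be traced back to the goal it serves:
print(hypothesis.problem.goal.statement)
```

Because each hypothesis carries a reference to its problem, and each problem to its goal, nothing enters the backlog without answering the two vital questions: what problem it solves, and how success will be measured.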