November 18, 2013

Splitting Hairs: Conversion optimization vs. hypothesis validation

Each week, I meet with many amazing brands that are smart, well-staffed and committed to A/B testing, and I observe a common trend in how these businesses talk about and organize around the practice.

Shadow Development

Commonly, in these businesses, the design, engineering and product management resources are consumed entirely by major initiatives, driven by executive strategy or a perceived need to chase “table stakes.” Because there are scant resources inside these organizations to do much more than what has been asked of them, A/B testing, a pursuit that the whole company agrees is important, gets pushed out to either the Marketing or Analytics teams. These teams, not traditionally staffed with dedicated engineering resources, then rely on their own skills, the testing software itself, or a small “shadow development” team to build and QA tests.

This approach to A/B testing, compensating for a lack of bandwidth in the legacy product development and design process, ensures that A/B testing is a parallel pursuit, one measured entirely by the ability to improve conversion rate or top-line KPIs through experimentation. However, this pursuit does not get the benefit of defining product development and design hypotheses. As such, the product roadmap continues to be littered with unvalidated assumptions, waste, opportunity cost and risk. Sure, the business is testing (check) and conversion rate is hopefully inching in the right direction (check). But the larger organization and the major digital product investments are no more data-driven, lean or likely to succeed as a result of the testing competency. In these companies, there is the “testing group” and then there is the “product team or council,” and neither gets the full benefit of the other. This is the best way I can describe what is commonly called “conversion optimization” in e-commerce or digital product organizations.

Data-Driven Product Development

“Hypothesis Validation,” on the other hand, is a broader, organizational approach to product development and design. A/B testing is a method commonly applied to validate hypotheses, but it isn’t the only one (there are also user testing, historical data analysis, voice-of-customer/survey analysis, etc.). Likewise, conversion optimization is a key benefit of hypothesis validation, but it is not the only benefit. There are three overarching goals to this practice:

  1. A smarter organization
  2. Reduced waste and opportunity cost
  3. Improved performance of business KPIs

Numbers one and two above are only possible by re-thinking the “who” and “why” of A/B testing inside the organization. If relegated to a parallel or “shadow” team, the learning benefit remains fragmented and the waste in the major product investments is unchanged. To harness the full benefit of this amazingly powerful capability, the business should ensure that (a) tests are driven by hypotheses that genuinely reflect both the greatest opportunity and the greatest contemplated resource investment and (b) that the product roadmap and major marketing campaigns are increasingly informed by validated learnings derived from testing.

Clearly, I am making a key distinction and prescribing a preference and solution in the simplest possible terms. In reality, these changes take time and involve disruption. I would argue, though, that the risks of not tackling this head-on are far greater. Investing in a new A/B testing program without fully connecting it to a hypothesis-driven product and marketing philosophy ensures more cost (tools and people), occasional KPI improvement (nice) and limited or no impact on the efficiency of product development or marketing (true).

Ask Yourself

A thought experiment I often pose in conversations with entrepreneurial executives who want to be more data-driven involves asking the following questions:

  1. “When you define and approve features or work to be designed and engineered as part of a product roadmap, (how) are the underlying hypotheses captured and validated? How do you know if your roadmap is generally optimized?”
  2. “How would your business change if all product roadmaps were evaluated by the extent to which they are informed by validated learnings derived through data and/or testing?”
  3. “What are the key moments (meetings, definitions, requirements, approvals) in the formation of product roadmaps and digital marketing plans where you can ask the organization: (a) What is success for this initiative? (b) Are we able to capture the data necessary to measure success? (c) What are the hypotheses driving this initiative? (d) How can we validate or test those hypotheses?”

Clearhead exists, in part, to help our clients take advantage of the amazing opportunities enabled through the benefit of digital business data. Sometimes that help is simply the extra muscle required to work with their existing tools and team for greater output. And, other times, that help is assisting senior executives in re-thinking their organization, workflow and deliverables to task their data with the most important job we can imagine – the validation of core value hypotheses and exciting growth hypotheses.