Every change we make to our digital products — small changes, big changes, pricing changes, UX changes, merchandising changes — is an experiment. All of them are. Whether or not we test them, they are still experiments. The outcomes are uncertain.
The point of any digital experiment is to identify a solution to a problem that works, or to understand why a solution did not solve the problem. Great digital optimization requires us to look at all of the problems we want to solve for our customers and the business as well as all of our hypotheses for solving them. We are not optimized if we are running niche A/B tests or high-fidelity personalization while larger problems go unsolved and larger hypotheses go untested.
Over the past few years, we have all been flooded with case studies around A/B testing, personalization and data-driven UX design/product development. Inevitably, these pieces leave many digital executives and practitioners wondering, “Are we doing it right? Are we moving fast enough? What is best in class? What is the right way?”
Let’s cut through the noise of the case studies, tweets and speculation and get real: there isn’t one right way. That’s because optimization looks different for every business based on the goals, problems and solutions unique to you and your customers. There is a narrative in our space that world-class digital optimization is a linear progression that begins with simple A/B testing and ends with incredibly refined or programmatic personalization. While there is virtue in that journey, it is also a false binary. When it comes to the strategy and tactics for digital optimization, there is no “best in class,” there is only “better for your business and customers.”
I look at optimization through the lens of an operator who endeavors to invest time, money and energy into building the most profitable outcomes for the company and the most desirable experiences for customers. In the most successful businesses, those two objectives align. I think of every initiative and change — large or small — as an experiment and investment. You want confident returns as well as the ability to manage a portfolio in a way that aligns with your unique goals and challenges.
The question is not how many tests you are running or how refined the segments you target are. Rather, given your pool of resources, what problems do you choose to solve, and how do you validate those solutions? We all have finite time, budget and resources. How are you going to use them, and why would you use them that way?
Rather than dividing digital investments by component parts of the product or funnel, front or back end, web or mobile, we organize strategies based on the size, risk and reward ascribed to the Problem (opportunity) and associated Solution (hypothesis). By our count, there are four primary types of optimization investments, defined by your position on the problem. Think of these categories as funds within a portfolio. Each one has a different risk and reward profile, and the mix should align with your own business’s relative tolerance and need to be exposed to short- and long-term risk.
Investment Type 1: You know the problem and you are committed to a solution (PS)
These are the digital business decisions you’re already so deeply invested in that you simply have to bring them to market, as well as those you already have extremely high confidence in. These are large investments, which also means they are high-risk hypotheses that need to be tested. You are not experimenting so much as making sure the things that have already set sail come to port safely. Your job is to ensure they don’t hurt the business, and that you understand the ways in which they help the business if they succeed.
Investment Type 2: You know the problem but you don’t have a confident solution (Ps)
You know there is an issue with your digital experience. For example, site navigation is not getting visitors to the right products or cart abandonment is on the rise. It is clearly hurting your business, but you really don’t know what the solution is yet. This is a true experiment around a high-priority problem. Because these are often complex challenges, the solutions have to be researched, broken into component hypotheses, tested, pivoted on and then tested again when a new solution hypothesis is revealed.
Investment Type 3: You don’t know the problem and you don’t have a confident solution (ps)
A good deal of what we see in the market today falls into this category. Practitioners start testing a lot of little things without identifying the problems that they want to solve. They change the color, the copy, they hide it, they move it — all the suggestions you see on lists of generalized best practices. It’s highly speculative experimentation, often more game theory than investment strategy, though this category would also include “open” research and development usually only seen in extremely large or very advanced programs. In general, the idea is to get a lot of tests done very fast and hope something pops up. The hope is that by turning over a lot of stones, you’ll find a hidden gem under one of them, and that discovery might inform future experiments.
Investment Type 4: You have a solution but you are unsure whether it solves a relevant problem (pS)
Sometimes an answer presents itself even though you haven’t asked the question yet. It is tempting to pursue change simply because that change is attractive or the value proposition sounds exciting. “Go responsive!” “Add social feeds!” “Design a mega-navigation!” “Parallax scrolling is hot!” The promise of the solution is enticing, but it’s unclear whether it solves any problem you actually have. This specific subset of optimization also includes testing new software-as-a-service technologies. The same logic applies to testing SaaS as it does to testing UX — the software is designed to solve a problem, so you should be able to validate the incremental benefit of that solution.
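The four investment types above reduce to a simple two-by-two: confidence in the Problem (P vs. p) crossed with confidence in the Solution (S vs. s). As a minimal sketch, here is one way to express that classification in code — the `Initiative` type, the confidence scores and the 0.7 threshold are all hypothetical illustrations, not anything prescribed by the framework itself:

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    """A candidate digital investment (hypothetical model for illustration)."""
    name: str
    problem_confidence: float   # 0.0-1.0: how well-understood is the problem?
    solution_confidence: float  # 0.0-1.0: how confident are we in the solution?

def investment_type(i: Initiative, threshold: float = 0.7) -> str:
    """Map an initiative onto one of the four types: PS, Ps, ps or pS.

    Uppercase means high confidence; the 0.7 cutoff is an arbitrary
    example value, not part of the framework.
    """
    p = "P" if i.problem_confidence >= threshold else "p"
    s = "S" if i.solution_confidence >= threshold else "s"
    return p + s

# Examples loosely mirroring the article's four categories:
committed_launch = Initiative("redesign already in flight", 0.9, 0.9)
known_problem = Initiative("rising cart abandonment", 0.9, 0.3)
speculative = Initiative("button color tweaks", 0.2, 0.2)
shiny_solution = Initiative("add parallax scrolling", 0.2, 0.8)

print(investment_type(committed_launch))  # PS
print(investment_type(known_problem))     # Ps
print(investment_type(speculative))       # ps
print(investment_type(shiny_solution))    # pS
```

The value of writing it down this way is that it forces the uncomfortable question behind type 4: a high solution score with a low problem score is a signal to go find the problem before spending the budget.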
Truly world-class optimization requires us to map all of our Solutions, large and small, to the Problems that most affect our customers and our business goals. Once those Problems and Solutions are organized, you can then decide which experiments are worth investing in and how to allocate your portfolio based on criteria unique to your business, including:
- Size of audience/customer base
- Breadth and depth of addressable segments
- Market pressures to change
- Performance of KPIs
- Product roadmap commitments
- Available budgets
- Available resources & skills
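To make the portfolio framing concrete, here is a hypothetical sketch of allocating a finite experimentation budget across the four investment types. Every number — the budget, the weights, the split itself — is an invented example of what one team might choose after weighing the criteria above, not a recommended allocation:

```python
# Hypothetical example: split a finite experimentation budget across the
# four investment types. All figures are illustrative only.
budget = 100_000  # total experimentation budget, in dollars

# Example weights a team might assign given its own risk tolerance
# (audience size, KPI performance, roadmap commitments, resources, etc.).
weights = {
    "PS": 0.40,  # de-risk launches you are already committed to
    "Ps": 0.35,  # research and test known, high-priority problems
    "ps": 0.10,  # fast, speculative tests hunting for hidden gems
    "pS": 0.15,  # validate attractive solutions before adopting them
}

allocation = {t: round(budget * w) for t, w in weights.items()}
for t, dollars in allocation.items():
    print(f"{t}: ${dollars:,}")
```

A risk-averse business might shift weight toward PS and Ps; a business under heavy market pressure to change might deliberately fund more ps and pS exploration. The point is that the split is an explicit decision rather than an accident of whatever tests happen to get proposed.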
To be sure, there is no exact science to this approach. The goal is not perfection, the number of tests you can run, or living up to the promise of case studies. Rather, you are aiming for an open and holistic approach to optimization. The goal is to allocate your resources so you are most likely to solve your biggest digital problems and most likely to understand why a solution did or did not work. If you embrace this philosophy and invest accordingly, you will be best in class for your business.