Clearhead

The digital
optimization agency

We provide daring, entrepreneurial digital executives with the extra brains and brawn needed to fully utilize digital analytics and A/B testing practices and to drive disruptive results, change and learning.

Contact Us

Our Services

We provide our partners the full breadth of services required to build and sustain a data-driven organization.

Strategic Consulting

Before companies begin to significantly resource analytics and testing in their organization, they often ask themselves “What does it mean for us to be data-driven? How would it even work? What would it look like?” Because it’s difficult to rebuild the ship while you’re already at sea, we help our clients re-imagine a “digital optimization” organization from the 50,000 foot view to the ground floor.

Organizational Blueprint

Based on a thorough assessment and synthesis of your current approach to analytics and testing, we design a new blueprint for organizing team members around the utilization of data.

Business Process & Workflows

The roles and the org chart are simply the first step. An optimization program requires definitions, deliverables, workflows, service levels and plans to get things running. We help bring the blueprint to life.

Best Practices

We know the tools, strategies and trends that are working (and not) in the market. We can help you avoid common pitfalls and adopt better tools and methods. Just ask us.

Training & Education

Digital optimization includes new concepts and practices that need to be described through a simple vernacular and workshops that are accessible and (yes) fun. We not only fish for our clients; we teach their teams why fishing is fun, how to fish and, then, we take them out fishing with us.

Testing, Analytics & Optimization Support

Once the digital optimization model is designed and ready for action, our clients often find that they need extra muscle and brains to augment their current team or scale the operation. We offer a team of nimble, expert, Clearhead-trained digital optimization all-stars to dive right in and bring the promise of Clearhead to life.

Support
  • Instrumentation Assessment
  • Tool Selection & ROI Analysis
  • Tool Implementation
  • Instrumentation Operations & Support
  • Employee Training & Workshops
Performance Measurement
  • KPI Definition
  • Baseline & Target Development
  • Dashboard Design
  • Dashboard Instrumentation & Automation
Hypothesis Development & Validation
  • Generative Research
  • User Testing
  • Hypothesis Prioritization
  • Test Planning & Design
  • A/B & MVT Testing
  • Test Analysis
  • Understanding & Adoption of Results
  • Segmentation, Targeting & Personalization

Needle Movers. Hypothesis Validators. Digital Innovators. Debate Enders. Free Thinkers. Stats Nerds. Change Agents.

We are all of the above.

Why Clearhead?

1. We were you

Once upon a time (not very long ago) we were the client – leading our own ecommerce and product development teams. We carried the responsibility of delivering on P&L commitments, developing our organization and driving growth. For years we wrestled with, and continuously refined, a process for managing teams and business decisions in a simple, data-driven, transparent way. Clearhead is the culmination of that experience.

2. We end the dabbling and the debates

We are obsessed with how lean, data-driven practices drive smarter and more profitable businesses. If you fear that, in your company, “analytics” has become a euphemism for over-reporting and that your big decisions are still mostly driven by gut, we can help you change that (quickly). A clearheaded digital business is one that has a merit-based, sustainable approach to inspiring, validating and pursuing ideas that are most likely to improve business performance and satisfy customer demands.

3. We build a better engine

Analytics is not a tool or a bunch of really smart people staring at numbers. Testing is not simply a designer with some JavaScript chops. Analytics and testing -- the people, processes and tools -- are all part of a singular commitment to employing data to develop, test, validate and act on the ideas that are most fruitful and least wasteful. We call that commitment “digital optimization.” We take all of the parts -- the ones you have and the ones you need -- and combine them to build you a model for data-driven optimization that works and lasts.

Our Process

The “Clearhead Process” is a simple, but meticulous, step-by-step approach to developing, prioritizing, validating and learning from well-articulated digital business hypotheses. We have designed our process so that it integrates well with almost any set of stakeholders and roles and can be easily understood, adopted and replicated by even the most complex, legacy organizations.

Align

Validate

Optimize

Sustain

Our Team

Matt Wishnow

Ryan Garner

Jared Bauer

Brian Cahak

Laura Stude

Ryan Abelman

Tom Fuertes

Change faster. Win faster. Fail faster. Learn + improve continuously.

Our Clients

Contact Us

Our Blog

  1. What it (Really) Takes: Three Absolutely Indispensable Elements for Digital Optimization

    The internet is full of compelling examples of "winning" A/B tests that promise marked conversion or revenue growth. Similarly, there are countless articles about what it takes - from executive sponsorship and cultural development to the critical roles - to develop a successful testing & optimization program. We are active participants in these discourses. We have contributed to both. Both the test “case studies” and the discussion of culture are necessary for inspiring more confidence in the practice of A/B testing, in particular, and digital optimization, in general.

    But let me say that these conversations are increasingly reductive narratives that do a disservice to the long-term work and benefit of continuous testing and optimization. Yes — you need to have winning tests. Yes — those are good. Yes — you need organizational buy-in. Yes — you need a testing tool that works for your company. Yes — you need skilled front-end dev resources, skilled analysts, experience designers and product managers. Those are all table stakes. All of them. All testing programs require them, but these are not the elements most correlated with a program’s success.

    The critical elements that are most necessary to achieve long-term success in testing and optimization are not the sexy ones. They are the behind-the-scenes guts and logic of the operation. They require tremendous thought and human talent to develop, but, ideally, they run quietly and effectively behind the scenes. Without these elements, you may have a great number of winning A/B tests. You may have executive support. You may have a great team. But you will (a) only be scratching the surface of disruptive success and (b) always be one tough CEO/CFO/CTO question away from an existential crisis for your testing and optimization program.

    So, what are these three (3) critical, infrequently discussed elements? They are:

    1. A replicable process for capturing and prioritizing the most relevant testable hypotheses.
    2. A transparent process and forum for identifying, discussing and mitigating key program risks.
    3. A simple, honest view of program health, success and investment.

    Let’s unpack these. 

    Element #1, a replicable process for capturing and prioritizing the most relevant, testable hypotheses, first requires a method for capturing hypotheses. To “capture” something, you must first know where to look, then how to identify them and, finally, where to hold them, once captured. I have previously written about where to find relevant test hypotheses, but, suffice it to say, your historical analytics data, user-tests, customer feedback and product roadmap are rich areas to be synthesized in accordance with your KPIs. 

    Even if you know where to look, I’d suggest that many of your co-workers (even the very bright ones) may not even recognize what a hypothesis is. Strongly supported opinions, approved projects and long-held beliefs, assumed to be true, are more often than not simply hypotheses begging to be tested. Scratch the surface of a marketing plan or your consumer research and you are sure to find them. The great hypotheses are often the ones that people seem to accept as fact and are resistant to test. That is often precisely where disruption lives.

    Articulating the hypotheses, though, is only part of the work. Harvesting this tribal knowledge — recording it, appending it, commenting on it — is also a necessary component to sustain the program’s energy and intelligence. You may use a shared drive or a SaaS tool (like UserVoice) for this. We have worked with several. None are perfect, but all are better than a Word or Excel doc because of their collaborative features. The point is that all of those ideas need to be captured, tracked and stored for posterity in a place that is easily accessed, searchable, indexable, etc.

    And finally, these ideas need to be prioritized via a simple and practical logic that ensures that the most relevant and impactful ideas surface from the pack. While everybody loves to test the “low hanging fruit,” there is massive opportunity cost to dabbling and avoiding those hypotheses that challenge impending feature launches or long-held beliefs. A good prioritization methodology accounts for the real cost (effort), the opportunity cost of NOT testing (relevance), the forecasted impact AND the speed at which it can be validated. Every organization will have its own (likely similar) approach to prioritization, which is as much art as it is science. It’s critical, though, that your process is transparent and has a scoring methodology that you can point to as a dispassionate means for ending debates and moving forward. At his Opticon talk this week, my co-founder, Ryan Garner, will talk a bit about how we score and prioritize test ideas.
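    To make that concrete, here is a minimal, purely illustrative sketch of a weighted scoring function in Python. It is not our actual scoring methodology; the 1-10 scales, the weights and the example hypotheses are all assumptions you would replace with your own.

      # Illustrative only: a simple weighted prioritization score for test hypotheses.
      # The factors mirror the ones above (effort, relevance, impact, speed);
      # the 1-10 scales and the weights are assumptions to tune per organization.
      def priority_score(effort, relevance, impact, speed,
                         weights=(0.2, 0.3, 0.3, 0.2)):
          """Each input is scored 1-10; higher effort should lower the score,
          so it is inverted before weighting."""
          w_effort, w_relevance, w_impact, w_speed = weights
          return (w_effort * (11 - effort)      # cheaper tests score higher
                  + w_relevance * relevance     # opportunity cost of NOT testing
                  + w_impact * impact           # forecasted impact
                  + w_speed * speed)            # how quickly it can be validated

      hypotheses = {
          "Simplify the checkout shipping form": (3, 8, 9, 7),
          "Reorder the homepage hero modules":   (2, 4, 3, 9),
      }
      for name, scores in sorted(hypotheses.items(),
                                 key=lambda kv: priority_score(*kv[1]),
                                 reverse=True):
          print(f"{priority_score(*scores):.1f}  {name}")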

    Element #2: a transparent process and forum for identifying, discussing and mitigating key program risks. On the surface, this might seem like project management 101. And perhaps this inclusion only underscores the point that A/B testing and optimization require tremendous project management. But I bring up risk mitigation not simply to highlight the test-by-test dependencies but also to surface those opinions and deep-seated fears that threaten to completely derail the momentum (or existence) of an optimization program. Perfectly fair questions related to site performance impact, the injection of “outside code” into production, MVP-style (quick and imperfect) creative, false positives and inflated lift projections will assuredly crop up. They can’t be handled with “kid gloves.” They warrant respect but not dread or fear. They need to be addressed head on. Each of them is a legitimate risk that can be mitigated. Frequently, we see testing programs dabble endlessly with small tweaks and low-risk tests, only to be dismayed by very mixed results. Why is there so much dabbling? My hypothesis is that those programs are afraid of confronting (perhaps more senior) executives’ fears that:

    • Testing will slow down website performance
    • Testing will break the website
    • Testing will slow down product development
    • Testing will introduce sub-par design experiences
    • Testing will confuse or alienate customers
    • Testing results are not credible

    Go on. Keep going. It’s healthy to get it out. Each one of these perceived risks is absolutely legitimate and also absolutely able to be addressed, better understood and mitigated or completely remediated. There are also risks associated with each individual test, but I will stick to the highest level for this post. The same suggestion applies, though. Identify risks and mitigation plans thoroughly. It takes just one well-placed cynic in the organization, who feels like their concerns were not addressed, to throw a program under the bus and/or into defensiveness.

    We suggest that, before you dive into your very first tests, as you are codifying the mission and process for your program, you capture and address all of the most deeply held, “scary” risks. But don’t stop there. In reviewing your program health and value (element #3, below), you should continue to set aside the space to identify new risks and commit to thoughtful mitigation plans. While it is unlikely that anybody in your organization is dead set against testing, you will (probably) be amazed at how fears can fester if not aired.

    In their Opticon talk, Ryan and Jessica will get very real about questions, risks and concerns that you are likely to confront as you push this boulder up the mountain.

    Element #3 is your scorecard. It is, quite simply, how you and your organization measure the value of your testing and optimization program. This scorecard should be easy to maintain, easy to access and easy to interpret. A shared Google spreadsheet will most certainly suffice, though a pretty Keynote or PowerPoint version might serve you well in executive read-outs.

    The scorecard should, minimally, document the following:

    • What is the status of the approved hypotheses?
    • Which tests have been completed?
    • What was the level of effort/investment in the execution of the completed tests?
    • Of those tests, what were the test metrics?
    • Did the hypothesis prove true for the test metric?
    • What percentage of your completed tests showed the hypothesis to be true?
    • Was there observed revenue lift during the test period for the winning variation?
    • If so, how much?
    • Did any of the results, in proving false or inconclusive, yield the reduction of opportunity costs?

    These, in my opinion, are the most basic items to track. You could certainly contemplate more derived metrics around velocity, ROI, etc. But I would strongly caution against the temptation to simply annualize and sum up revenue lift. Such blunt estimates — often conflating prediction with fact — are bandied about in blogs and at conferences. If you ask an A/B tester how much revenue lift they have created and they answer with a specific figure, staggering or modest, you can be assured that they have fallen into your trap. We have seen companies brag about lift that is greater than their actual annual revenue. And all it takes to deflate these claims are the following questions:

    • What is the margin of error for your calculation?
    • How did you forecast out annual lift based on your limited data set?
    • Did you retest that precise test again and again to validate the observed lift?
    • How have you accounted for other market and product changes that might positively or negatively impact revenue since you launched your winning variation in production?

    Somebody in your organization may force you to claim a single revenue lift figure. Resist it as long as you can. Focus on “observed” results. If still pushed, forecast longer-term benefits for individual tests with margins of error. If you are still pushed to the edge of the plank, turn around, take off the blindfold, and, before you jump, ask (a) do we believe that our tests are leading to smarter and more successful digital product development and (b) are we confident that, if we continue to sustain the current testing and optimization program, we will be more profitable than the alternative? To be sure, there is as much art and blind faith as there is data and dogma in the world of optimization.
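    For illustration only, here is a rough sketch of reporting an “observed” lift with a margin of error rather than a single annualized figure. The visitor and conversion counts are invented, and the normal-approximation interval below is just one simple way to express that uncertainty.

      # A rough 95% confidence interval for an observed conversion-rate difference
      # (normal approximation). All counts here are invented for illustration.
      from math import sqrt

      def observed_lift_ci(control_visits, control_convs,
                           variant_visits, variant_convs, z=1.96):
          p_c = control_convs / control_visits
          p_v = variant_convs / variant_visits
          diff = p_v - p_c
          se = sqrt(p_c * (1 - p_c) / control_visits
                    + p_v * (1 - p_v) / variant_visits)
          return diff, diff - z * se, diff + z * se

      diff, low, high = observed_lift_ci(50_000, 1_500, 50_000, 1_620)
      print(f"Observed absolute lift: {diff:.2%} (95% CI {low:.2%} to {high:.2%})")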

    I have no doubt that the “big winners” will continue to grab headlines. Similarly, I have no doubt that organizational change, in the interest of being data-driven, will be uniformly promoted and applauded. I say “hooray” to both. I support and applaud any of us behind the winners and the change. But, in order to sustain both endeavors, I’d suggest it is critical to understand and adopt those elements that enable scale and reduce friction. And, in our experience, the elements that best predict success and help avoid an existential crisis are:

    1. The hypothesis development & prioritization process
    2. The risk identification and mitigation process
    3. The program scorecard

    Go get ’em!

    — Matt

  2. Got Valid Test Data? Tips To Help Make Sure You Do

    It happens…

    You go to look at your test data and something seems odd. Sure, maybe your conversion rate for one variant really is that much better than another. Or, depending on how you have divided your traffic, it does make sense that one variant has 50% more visitors than another. But in situations where things just don’t feel right, we suggest following your gut and digging in a little deeper to ensure that your data is valid. Here are a couple of steps to ensure that your data comes in right the first time and that it stays good moving forward:

    Quality Assurance: Prior to test launch, create a checklist of all the steps a user is able to take pertaining to the test/test page. Go through each step, ensuring that a user will be able to complete it with every browser you are including in the test. Log any bugs and be meticulous. During this phase you will also be able to see data coming into your testing tool and, if it is integrated with your web analytics solution, you should be able to make sure everything looks good there.

    Post-Launch Monitoring: The first 24 hours after launching a test require close monitoring. Keep a close eye on data in both your testing tool and web analytics solution. Because post-launch monitoring is a critical step, we never recommend launching a test on Friday as you may find yourself in a situation where you need “all hands on deck” to identify and fix an issue.
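    As one concrete example of that monitoring, a quick sample-ratio check will tell you whether the visitor counts landing in each variant are plausible given your intended split. This is a generic sketch rather than a feature of any particular testing tool; the counts and the 50/50 allocation are assumptions.

      # Sanity-check the observed traffic split against the intended allocation.
      # The visitor counts and the 50/50 split are assumptions for illustration.
      from scipy.stats import chisquare

      observed = [10_250, 9_730]                # visitors bucketed into A and B so far
      total = sum(observed)
      expected = [total * 0.5, total * 0.5]     # what a true 50/50 split would imply

      result = chisquare(f_obs=observed, f_exp=expected)
      if result.pvalue < 0.01:
          print(f"Split looks off (p={result.pvalue:.4f}); check bucketing/tracking.")
      else:
          print(f"Split is consistent with 50/50 (p={result.pvalue:.4f}).")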

    In the event that you do notice an issue, here are a few scenarios and places to start to more quickly diagnose the problem:

    Problem: Goals are showing no data.

    Place to Start: Within either your testing tool or your web analytics solution, start by making sure that the goal is set up properly. Know what triggers the goal and triple-check the configuration. Ask if there are use cases where a user could complete the goal without it being included in your data.

    Problem: The testing tool is showing data, but my web analytics solution is not.

    Place to Start: Make sure that you have integrated the tool properly. Are you checking the right Custom Variable (GA), or eVar (SiteCatalyst), or other?  Is a redirect in place that is not giving enough time for the call to be made?

    Problem: One variant is showing a much lower conversion rate.

    Place to Start: Despite QA efforts, sometimes bugs make it into the live test. Segment out the variant that looks to be under-performing, starting by looking at the conversion rate for that variant by browser version.
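    As a small illustration of that segmentation step, here is one way to break the conversion rate out by browser with pandas. The column names and values below are made up for the example.

      # Break conversion rate out by browser for the under-performing variant.
      # The DataFrame columns and values are invented for illustration.
      import pandas as pd

      visits = pd.DataFrame({
          "variant":   ["B", "B", "B", "B", "B", "B"],
          "browser":   ["Chrome 37", "Chrome 37", "IE 9", "IE 9", "Safari 7", "Safari 7"],
          "converted": [1, 0, 0, 0, 1, 1],
      })

      by_browser = (visits[visits["variant"] == "B"]
                    .groupby("browser")["converted"]
                    .agg(visits="count", conversions="sum"))
      by_browser["conv_rate"] = by_browser["conversions"] / by_browser["visits"]
      # An unusually low row can point to a browser-specific bug in the variant.
      print(by_browser.sort_values("conv_rate"))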

    As with all tests, check your data early and often to ensure your results are based on fact. Have questions? Shoot us a note; we’d love to help.

    And, as always, happy analyzing!

    -JB

  3. Sweet Tees

    A little birdie told me that our friends at Optimizely have an appreciation for fun, juvenile high-class humor. Huzzah, me too! And since Opticon is coming up, I thought this an appropriate time to share my musings on the importance of funny tee shirts.

    Those who know me understand that I’m a big fan of the arts…particularly the genre that involves putting funny sh*t on a shirt. My affection for this art form began long before I entered the world of startups and now my drawer runneth over with many cotton gems my mother would probably be embarrassed to learn I wear in public. I’m of the belief that everyone should own a hipster cat tee shirt or two and that company tees should extol the virtues of the company and its mission in a humorous way. As I was explaining this belief to my fellow Clearheaders one afternoon and getting shot down brainstorming slogans for our own shirts, it was decided that we should publicize our list of failed tee shirt slogans. So without further ado, here’s what didn’t make the cut. To find out what did, you’ll have to visit our booth at Opticon!

    • You should get tested.
    • I tested positive for being awesome.
    • This is an A/B conversation so C your way out.
    • My hypothesis is an honor student.
    • Proud parent of a winning hypothesis.
    • I have Control issues.
    • I swear my data is clean.
    • My A/B knows more than your CD.
    • To test or not to test, that is the question.
    • Get tested with us.
    • Data speaks louder than words.
    • Test with the best.
    • supercalafragalistic-expi-A/B-do-shus
    • I targeted your mom.
    • Use data like a drunk uses a lamppost. For support.
    • In data we trust.
    • The data is like a miniature buddha, covered in hair.
    • Test A: She was really cute. Test B: Yeah, I’d data.
    • Just test it.
    • I’m a huge B.
    • Testing status: It’s complicated.
    • Show me your A!
    • Are those A’s or B’s?
    • Front of shirt: I can optimize the size of your… Back of shirt: buttons.
    • 85% of those surveyed preferred my ‘B’ over yours.
    • Clearhead + [Your logo here] = A winning variation.

    And since we have some former music industry folks in the house:

    Leave a comment below or tweet us your favorite failed slogan. If enough people weigh in, we’ll get the winner made AND ship you one. And I REALLY want one of these, people, so help a sister out!

    - Laura

  4. Where do the best A/B test hypotheses come from?

    Ahhh…the question for the ages; perhaps just a smidge behind “what is the meaning of life?” and “where do babies come from?” Great hypotheses are elusive. They are a secret, almost mystical, combination of art and science. In truth, good A/B test hypotheses come from many places. But I decided to write this post today in pursuit of not good, but great, hypotheses.

    If time and people are the most valuable resources for your business (they are for ours), then you don’t have the luxury of endlessly testing just any idea. To reduce the possibilities from endless to finite, many smart folks recommend a few sources and methods for starting to develop test-worthy hypotheses. Let’s talk through each one quickly before we get to what is, in my estimation, the richest source for A/B testing hypotheses.

    FUNNEL/CLICKSTREAM DATA AUDITS

    It is not uncommon for companies or their agencies to start with the bottom of the order conversion funnel and look for major drop-off areas or goals that have suddenly (or gradually) declined in success rates. They might then look at that same data by key customer segments and identify major risks and opportunities. Finally, they might expand the audit further up the funnel to focus on visit, site search and browsing behavior. Without fail, this can be a sobering endeavor and one that will generally bear fruit, especially when the data is effectively segmented. Unfortunately, this data lacks important context. It does not (a) tell the business whether the data is aberrant compared to the market nor does it (b) explain why the data is over/underperforming. As a result, it is often hard to develop a highly specific hypothesis informed by this data set alone. You might be able to say, for example, “I think we can increase the percentage of first time shoppers who go from our Ship/Bill to credit card pages,” but you’d likely be hard pressed to be much more specific than that without entering the realm of speculation (which, incidentally, is an absolutely necessary part of both good and great hypothesis development).
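    For concreteness, that kind of audit often reduces to step-to-step conversion rates broken out by segment. A minimal sketch, with invented step counts and segment names, might look like this:

      # Step-to-step funnel conversion by customer segment. All numbers are invented.
      funnel = {
          "first-time shoppers": {"cart": 8_000, "ship/bill": 3_100, "credit card": 2_050, "order": 1_700},
          "returning shoppers":  {"cart": 5_000, "ship/bill": 2_600, "credit card": 2_100, "order": 1_900},
      }

      steps = ["cart", "ship/bill", "credit card", "order"]
      for segment, counts in funnel.items():
          print(segment)
          for prev, nxt in zip(steps, steps[1:]):
              rate = counts[nxt] / counts[prev]
              # Large gaps between segments flag where to dig (and test) further.
              print(f"  {prev} -> {nxt}: {rate:.1%}")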

    CUSTOMER FEEDBACK

    Customer feedback comes in many forms, but two resources that are common and generally well organized are (1) customer call center documentation (either manually logged and tagged or entered into a CRM) and (2) voice of customer survey results. This data is generally indexable in some way and provides greater context, right from the source. The resolution of issues and responses to customer desires can inspire hypotheses worth testing out more broadly. If a customer describes an unpleasant experience or point of friction on your website, it does not take much to develop a hypothesis for how that experience could be improved. In fact, voice of customer surveys often ask that very question — “how would you improve X or Y?”

    This is incredibly rich data, to be sure. But it may lack the volume and objectivity of analytics clickstream data and the qualified perspective of financial opportunity. Happy customers are a requirement for any business, but satisfying every customer desire may not always be good business. Some demands might be specific to a tiny segment or, in fact, contradictory to the demands of others. Just because one (or some) customers have had problems does not necessarily mean that a contemplated idea for improvement is a hypothesis worth testing. Nevertheless, this well of information, especially when validated by analytics data, is a tremendous source for A/B testing hypotheses. And it’s a lot more fun to comb through than all those tabs and pivot tables of clickstream data output.

    USER TESTING/OBSERVATION

    All of the tactics referenced immediately above (user testing, etc.) combine some of the best elements of the first two tactics described (analytics & customer data) — observed click behavior with some degree (in some cases a great deal) of context around the “why” a goal or event was or was not successful.  User testing or video observation, however, are hard to fully analyze and synthesize at large scale and heat mapping lacks the voiceover context of the former tactics. As generative research goes, these methods are extremely useful and, again, when confirmed by web analytics data, can spawn great testing hypotheses.

    I hope, as you pursue optimization and learning for your business, that you mine all of the above sources for A/B testing hypotheses. However, none are, in my opinion, as rich a source of great, essential, must-test hypotheses as…your product roadmap.

    PRODUCT ROADMAPS

    Up in A/B testing heaven, every product roadmap is full of projects that are entirely validated by previous A/B tests. Each new design and feature is built with the confidence that data is on its side. Back down here on earth, however, most product roadmaps are not built with the same confidence. Many of the contemplated projects, in fact, could be described as “speculative” or “risky” in that they have not been tested in minimally viable versions. If the idea has made it to the product roadmap, however, a great deal of time, energy and money are at stake. People are getting mobilized. Meetings are taking place. Plans are being built. Pixels will be pushed. Code will be written.

    It is because of this degree of risk that, I argue, product roadmaps are the richest source for great A/B testing hypotheses. Although the contemplated projects may have a fair degree of opinion and gut driving them, I would suggest that those opinions tend to be pretty well informed. They normally start with gut observations and assumptions and get refined or loosely validated by anecdotal evidence, data seen in analytics reports and customer service feedback. Rarely are they wild stabs in the dark. A good product manager will have already taken in a lot of data, synthesized it, and developed plans for their product informed by all of the aforementioned sources. By the time an idea makes it to the roadmap, it is highly refined and the risks are high. The assumptions driving the plan simply need to be isolated and articulated as hypotheses. When the risk and reward of an investment are considered high, testing becomes most essential.

    We imagine (dream of) a day wherein the A/B testing and hypothesis validation “roadmap” entirely informs the product roadmap. At that point, the product roadmap becomes more of a symptom of a test-driven culture than a source of new hypotheses. But until that ideal becomes an increasing reality, the most transparent, meritocratic disruption I would introduce to my digital business would be to effectively articulate, capture and test every hypothesis that is at the root of the product roadmap plans. They’re all right there, waiting to be tested.

    - Matt

  5. Does Your Steering Committee Resemble a WWF Cage Match?

    Come on…who doesn’t love a WWF reference when referring to your internal business processes?  OK - maybe I’m dating myself and maybe the cage match metaphor is (slightly) extreme.    

    But, we all know internal political jockeying can be frustrating and exhausting. We hope this post shows you how to transition the content and tone of your steering committee discussions from one based on emotion and politics to a dialogue focused on data-driven hypothesis identification.

    Steering Committee’s Challenge - Good Intentions…But Little Data

    In many (if not all) of our conversations with digital execs we hear a consistent challenge: they spend an inordinate amount of time negotiating with creative, product, promotions, and operations teams to define UX priorities for the online store. Phew…pass me a drink. To make the discussions more objective (and not resemble WWF cage matches), clients have created monthly steering committees comprised of cross-functional HiPPOs (highest-paid person’s opinion) to review and approve test plans. Too often we hear that this process starts off with good intentions but rarely creates the intended value. It tends to reward political players and debaters, not customer desire. HiPPOs drive key decisions based on gut or a sense of internal sensitivities (real or perceived). We also hear that data and analytics rarely anchor the discussion. Over time, test velocity slows to a trickle and growth opportunities are missed, all in the name of power plays and cross-functional peace.

    Is This A Problem?  We Believe So.  

    Steering Committees should exist to help prioritize hypotheses, make investment decisions based on data, and help the organization learn. While gut and instinct are important tools, they need to be led by the collective voice of your actual customers (aka data) gained via testing and analytics tools. Without this data, you can almost guarantee that multiple portions of the conversion funnel are sub-optimized and the company is leaving serious money on the table. For example, if you’re running a Top 100 ecommerce site generating $200M+ in revenue, a small 5% improvement in conversion rate would result in a $10M annual gain. That kind of money.
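    To make that arithmetic explicit (under the simplifying assumption that revenue scales linearly with conversion rate), the figures from the example work out as follows:

      # Back-of-the-envelope value of a conversion-rate improvement, assuming revenue
      # scales linearly with conversion rate. Figures mirror the example above.
      annual_revenue = 200_000_000
      relative_conversion_lift = 0.05    # a 5% relative improvement in conversion rate
      annual_gain = annual_revenue * relative_conversion_lift
      print(f"${annual_gain:,.0f} incremental annual revenue")   # $10,000,000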

    So Now What?  Measure What You Want To Manage.

    The winner should always be the customer and the business, not the executive or the opinion. We’re not Pollyanna idealists, however, so we don’t believe this changes overnight. To subtly initiate what will become significant change, simply start with the handful of steps below. By doing so you will begin anchoring the conversations in data and (over time) transform the steering committee from a political debate amongst HiPPOs into a fine-tuned mechanism to capture and prioritize key business hypotheses. Why? Because HiPPOs will begin seeing the transformative power of using data-driven insights to grow the business and will allow the change to happen. Ultimately we all want to grow the business (or risk losing our jobs). These simple steps provide a cost-effective yet powerful means to do so.

    5 Tips to Stop The Madness:

    Tip #1:  Dig into your analytics to identify weak parts of the funnel.  Look for pages and segments showing abnormal performance (good or bad) and suggest the team focus UX energy on those portions of the site.  Look for trends by cohort.  Benchmark your funnel data against  baselines and targets.  We also recommend you research typical funnel stats of similar businesses and gather test ideas from testing & optimization forums.         

    Tip #2:  Create an objective scoring system to prioritize test ideas.  How complex is the test to run?  If it won, would it even matter?  What business value would the change produce?  

    Tip #3:  Develop a simple means to capture and communicate test plans. Testing without a plan might drive business value out of luck, but luck isn’t a sustainable plan and won’t drive organizational change. The test plan document not only helps retain institutional memory but also encourages the committee to think deliberately about the KPIs being measured, the goal of the test, etc. It requires a data-driven discussion to properly complete.

    Tip #4:  Develop a simple means to capture and communicate test results.  Everyone wants to win.  Show them the test KPI, the supporting metrics, and the performance delta between the Control and the Variation in a succinct one-page format.    

    Tip #5:  Model the ROI. Now that you’ll have an archive of your plans and results, you can measure and illustrate success over time. Testing is a journey, as we discussed in this “Money Ball” blog post. Build, test & measure, learn. Fail fast and fail small. But through measurement you will get smarter, fail less, and win more.

    The net result over time is less gut, less debate, more wins, and fewer WWF cage matches.  We’d love to hear how you’ve helped drive growth and a deeper understanding of the customer on the back of testing & optimization.  Drop us a line and we’d be happy to share a few tips.  

    Best,

    -Brian