April 22, 2015

4 things you should know about personalization

In our primer, we provided a highlight reel on Personalization. This post will take a closer look at some of our most recent thoughts and questions regarding Personalization, and the way it is being sold and marketed today.

Now that you’re up to speed, let’s get real about some of the things you’re likely to hear in the marketplace.

1. Defining Manual vs. Algorithmic Personalization

As Matty said in his previous post, if you ask 100 people what Personalization is, you are likely to get 101 answers. For the purposes of this post, here’s the definition we are using.

Personalization = the targeting of an offer, content, or experience based either on actions a customer takes (behavioral) or on something you already know about them (demographic).

So with that in mind, we at Clearhead see Personalization being marketed and sold in two distinct ways. We refer to the first approach as “Manual Personalization” and the second as “Algorithmic Personalization.”

Manual Personalization = a human decides which experience should be shown to which segment. That person is then responsible for manually designing and building the offers and configuring the personalization tool to target the experience to the appropriate segments.

Algorithmic Personalization = a computer is auto-magically deciding which thing (product, search result, advertisement, discount) should be served to which user. These decisions are based on a variety of inputs, including the attributes mentioned earlier (behavioral, demographic).

Both approaches have their pros and cons.

Manual Personalization

Pros:

  • Typically lower upfront costs and a lower barrier to entry
  • Highly customizable and flexible
  • You can create and serve essentially any experience you can come up with

Cons:

  • It’s manual, which means higher incremental costs along the way
  • Relies on humans to decide the appropriate segments to target and experiences to serve
  • Then you have to design, build, QA, and launch each experience
  • And then analyze the results
  • All of which makes it tricky to scale

Algorithmic Personalization

Pros:

  • It’s automated, so lower incremental costs along the way
  • Relies on the computer to identify segments and adapt the experience based on user behavior and demographics
  • Results in relatively low incremental costs across segments of all sizes, down to the 1:1 level (the one everybody keeps talking about)

Cons:

  • Typically requires a higher upfront investment compared to Manual (ex: set-up & configuration and software license fees)
  • Targeted and served experiences tend to be much narrower and more specific, focusing primarily on content
  • Think product recommendations, assortments, search results, navigation links, and sorting facets versus entirely new user experiences (ex: an entirely new checkout flow targeted by cart value or device type)

2. Why Segment Size Matters (A Lot)

Perhaps the most crucial, and most overlooked, consideration when embarking on the personalization journey is understanding segment sizes and their impact on your bottom line. At the end of the day, this is all about the bottom line, right?

So let’s look at some simple math on segment impact:

  • 1,000,000 visitors = your monthly traffic in the United States
  • 10,000 = monthly visitors from iPhone users living in Atlanta, GA
  • 2% = your overall conversion rate for all visitors

Now, let’s say you can manually create a highly targeted offer for those 10,000 iPhone users in Atlanta, GA, and let’s say that targeted offer can grow the order conversion rate of that segment by 50%, from 2% to 3% (an almost unheard-of lift for order conversion).

So that’s:

  • 10,000 visitors converting at 3%
  • 990,000 visitors converting at 2%
  • = a whopping 2.01% average conversion rate for all visitors.

Was the effort worth it? If this was a manually created personalization campaign, probably not. Now if the targeted segment were 100,000 visitors and you were able to grow their conversion rate by 50%, the average conversion rate would grow to 2.10% for a 5% lift.  Now we are talking.
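The blended math above can be sketched in a few lines of Python; the traffic and conversion figures are the hypothetical numbers from the example, not real data:

```python
def blended_conversion_rate(segments):
    """Traffic-weighted average conversion rate across segments.

    segments: list of (monthly_visitors, conversion_rate) tuples.
    """
    total_visitors = sum(visitors for visitors, _ in segments)
    total_orders = sum(visitors * rate for visitors, rate in segments)
    return total_orders / total_visitors

# Scenario 1: a 10,000-visitor segment lifted from 2% to 3%
small = blended_conversion_rate([(10_000, 0.03), (990_000, 0.02)])

# Scenario 2: the same 50% lift on a 100,000-visitor segment
large = blended_conversion_rate([(100_000, 0.03), (900_000, 0.02)])

print(f"{small:.2%}")  # 2.01%
print(f"{large:.2%}")  # 2.10%
```

Swap in your own segment sizes and lift assumptions to see how quickly a small segment’s win dilutes into the sitewide average.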

But remember, this is with an assumed 50% lift in that segment. That kind of improvement is hard to come by.

Want to play with some scenarios of your own?  Check out our “Super Simple Impact Calculator”.

Our observation is that the size of a segment and the opportunity for conversion lift within that segment are vital criteria for deciding when and where to prioritize your efforts, especially for manual personalization.

3. A/B Testing vs. Optimization vs. Personalization, Oh My

One of the biggest ways the latest marketing hype is contributing to confusion lies in the distinctions being drawn around A/B Testing, Conversion Rate Optimization (CRO), and Personalization.

The former two are being characterized  as separate and distinct marketing strategies and are frequently compared to Personalization, as though they were competing innovations.

You’ve no doubt seen headlines like “A/B Testing is Dead. If you don’t Personalize, you will be too.”  Hyperbole at its finest.  So before we drink that Kool-Aid, let’s take a minute to remember what all this stuff actually is.

We defined Personalization earlier, so let’s take a stab at A/B Testing and CRO.

A/B Testing = a data collection tactic used to measure the difference in performance between two or more experiences.

Conversion Rate Optimization = the goal we are all trying to achieve: the improvement of the metrics we care about.

With this in mind, how can a tactic for data collection and measurement be dead in the face of the experiences it’s designed to measure? It’s like saying the “Richter scale is dead. Long live earthquakes.”

What’s the alternative way to measure? Pre/post analysis? Just implement the new personalization experience and hope for the best?

As far as we know, there is still no cleaner way to measure the difference in performance between two experiences than by testing against a control (including that new personalized content on your homepage, that new search algorithm, and those new product assortments).

Any change, even those generated by algorithms, can be tested and compared against a control. Just ask Google. 
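For the curious, that control comparison can be sketched as a simple two-proportion z-test. This is a generic illustration, not Clearhead’s methodology, and the visitor and order counts below are hypothetical:

```python
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Z-statistic for the difference between two conversion rates
    (variant B vs. control A), using a pooled standard error."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical split: control converts at 2.0%, personalized variant at 2.6%
z = two_proportion_z(conversions_a=400, visitors_a=20_000,
                     conversions_b=520, visitors_b=20_000)
print(z > 1.96)  # True means significant at the 95% level
```

The same comparison works whether the variant was hand-built or generated by an algorithm, which is the point: the control doesn’t care how the experience was made.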

4. Why You Should Still Unearth (and Test!) All Hypotheses

Remember: it all starts with a hypothesis! So what are your ideas for personalized experiences you could serve? And to which customers?

If you are buying algorithmic personalization software from a vendor, then there are hypotheses baked into the value prop of the product they are selling you. Make the vendor clarify and articulate them. As an example, we see the following quite a bit for product assortment software: “We believe showing an assortment of women’s products to female shoppers by default will lead to more female purchases.”

Once such hypotheses are articulated, you should still validate them by testing them against a control using the tried and true data collection tactics of A/B and MVT testing. You’d be surprised what you might learn. Taking the example above, one of our clients learned that the female shoppers on their site were actually there to shop for their husbands and children as often as they were there to buy for themselves. The vendor’s original hypothesis was invalid and actually hurt sales among females.

But because we tested this hypothesis with our client, they learned quickly, minimized costs, and have since made adjustments to the algorithm that are now returning real value.

If you are starting from scratch, then per the pros and cons listed previously, we recommend dipping your toe in the personalization waters with a manual (concierge) approach targeted to larger segments. One we love to start with is new vs. returning visitors. Both are typically sizable segments with materially different behavior patterns that are ripe for unique content and experiences. In addition, you can try targeting your biggest referral sources, logged-in vs. logged-out users, and past purchasers vs. those who have never purchased.

Wherever you start, we recommend learning from a lower upfront cost, manual approach (MVP) and then evolving into more ambitious, and eventually automated, experiences.

Hopefully we have provided you with some good supporting context and deeper thinking that can guide your organization further along your personalization journey towards real optimization. Just remember that the Personalization highway is long and winding and never binary or linear.

To help get you farther down the path, we’ve added a curated list of a few of our favorite A/B testing and Personalization case studies below. Feel free to reach out with any questions.

Thanks again for reading,


Ryan Garner is the co-founder of Clearhead. He oversees optimization programs for all of Clearhead’s amazing clients. When he’s not busy testing or raising his two young sons, his favorite hobby, like so many in Austin, is talking about how awesome Austin is.

Case Studies:

Clearhead powers the optimization programs of many awesome companies like Patagonia, Adidas, Blu Dot and more. We’re proud to have generated a massive archive of experiments, many of them highly targeted and personalized in nature.

If you would like to discuss some of our findings, please contact us.
