Ah, the major site redesign. It always starts with such confidence and great intentions. You set off with more enthusiasm than you’ve had since, perhaps, the last redesign.
“We’re finally going to get rid of this ugly, old experience everyone hates.”
“We’re finally going to rip the band-aid off and fix this thing once and for all.”
“We’re finally going to make our site modern and mobile friendly.”
“We’re finally going to implement that new platform that will solve so many of our problems.”
And then it all goes wrong.
As a validation and optimization agency, Clearhead works with a lot of awesome clients across a wide range of consumer-facing industries. As such, we’ve had a front row seat for many redesigns since we launched three years ago. Note that I intentionally use the term “front row” to describe our vantage point—optimization programs almost always get put on hold while the redesign is in progress.
“We’re doing a redesign right now, so there’s no use in testing and collecting data on the old site. It’s all going to change.”
Along the way, we’ve noticed a number of common traits in how this type of project is typically executed, and many common problems that emerge as a result. My goal with this post is to share some of these observations, then outline a process for data-driven redesigns that Clearhead developed to fix this broken strategy.
Common Traits of Today’s Redesigns
Redesigns are often led by solutions, not problems
Rather than efforts oriented around solving user problems, most redesigns we’ve seen are driven by a desire for change and new solutions.
Four of the most common reasons we’ve heard for embarking on a redesign are:
- “Our site looks old and stale. We’ve had it for years. It’s time for an update.”
- “Our competitors just added a bunch of cool new features. We need to keep up.”
- “We are re-platforming, so we might as well redesign our site while we’re at it.”
- “Everyone has a responsive site these days. Going responsive is a no brainer.”
Not one of these has the user’s best interest at heart. In the excitement to usher in new solutions, we’ve observed that organizations rarely take the time to identify the problems they do and do not have with the old experience.
The design process is led by stakeholders, not users
The process typically goes something like this:
- Excited internal stakeholders get their redesign budgets approved and immediately embark on an RFP to find a creative agency.
- Once chosen, the new agency interviews stakeholders and begins documenting goals and themes for the new design.
- The agency then creates personas, concepts and treatments for the new design strategy based primarily on the input of stakeholders.
- These concepts are then pitched, Mad Men style, back to stakeholders.
- Stakeholders give feedback and multiple comp options are created, typically involving a bit of mixing and matching from each of the pitched concepts.
- An iterative loop of stakeholder feedback and design updates begins.
- After much hand-wringing and uncertainty, a final version of the design is ultimately approved by stakeholders, typically several days or weeks later than was planned.
- And tada, we now have the final designs that every end user will soon be forced to interact with. “We hope they like it.”
Don Draper is still alive and well in the digital design processes of today.
Redesigns are launched to 100% of users at once
New designs are still released as if they were physical products or software distributed on a piece of plastic. They are manufactured behind the scenes, and then one day they show up in finished form, hopefully with some amount of fanfare.
While it may generate internal excitement, imagine what this is like for users. Your best customers, the ones who love your existing product, find themselves shocked, awed and often angry. You dramatically overhauled their site and you didn’t even have the decency to ask their opinion along the way.
Everyone wants the big exciting reveal, yet the consequences can be severe. So why do we do it? Couldn’t we build just as much buzz with private betas, where select users are invited behind the curtain and then get to brag about it? I think we can.
Common Problems That Emerge
The closer to launch you get, the less confident you become
For a project that begins with so much confidence and enthusiasm, it sure does end with a lot of anxiety and doubt. Oftentimes stakeholders are literally scared to launch. And at that point, all you can do is cross your fingers and hope the users like it.
New designs are launched and metrics fall
In every redesign we’ve witnessed that rolled out to 100% of users at once, the new design underperformed the control. Some have most certainly recovered, and some haven’t.
So why does this happen? A few key reasons we see:
- The new design did not actually solve real problems your customers were having with the old design.
- The old site had been optimized and iteratively improved upon for years, and the new design broke something that was working well.
- And yes, the shock, awe and learning curve that just got dropped on every single user of your site without warning certainly plays a part.
On this last point, we often hear that it’s to be expected and not something that can easily be controlled. But is it? Is there really no process we can follow for creating and launching new designs that would help us avoid this situation?
Instead of joy and celebration, the launch of a major redesign is far too often followed by panic.
“Did we break something that was working?”
“Is it because our existing customers hate the change?”
“They’ll get used to it, right?”
All of these are completely valid questions. But it’s a really awkward time to be asking them. Instead of knowing you’re in a better place and iterating from a stronger foundation, you’re consumed with triage, looking for quick fixes and wondering if things should be rolled back.
It’s worth noting that this post-launch process of redesigning the redesign is actually the first time that real data from real users starts to lead the way. It’s used to identify actual problems users are having with the new design, and these problems drive focused ideas for updates to solve them. Then those updates are made and the data is scrupulously reviewed to validate if the “fix” worked. As a result, this is typically the most productive and effective period in the redesign process.
If only it were done this way from the beginning.
Introducing the Data-Driven Redesign
It starts by identifying the problems you do and do NOT have & validating them with users
The only way metrics are going to improve after a redesign is if you solve real problems with your current design, without breaking the things you are currently doing right. Otherwise, you are simply introducing change for change’s sake. And change is risky.
Before you get excited about a shiny new site, spend time with your team, your data, your customer service and most importantly your customers to identify actual issues. While you’re at it, spend some time identifying what’s working well too.
Then prioritize the list of problems while making explicit notes on what you must not break in the process. For good measure, run this final list by your customers one more time. You now have a compass for the new design.
The output: A stakeholder and user-approved list of prioritized and validated problems you will attempt to solve in your new design.
Next we develop focused hypotheses to solve the problems we’ve identified & validate those with users
With a clear understanding of the real problems you do (and don’t) have, you can now develop actionable ideas for solving them. The key here is to focus every new feature and every new design concept on solving the problems you want to fix, without breaking anything you know is working. Everything else needs to get thrown out.
Also remember that, at this stage, it’s only an idea for a solution. It’s not actually a solution until you’ve proven it works.
Once stakeholders have approved your detailed and thoughtful list of ideas for solving these problems, pitch your users again. Get their feedback and iterate as needed until they are happy.
The output: A stakeholder and user-approved list of validated ideas to solve your validated problems.
Then we design those solutions & validate them with users
Now it’s time to get creative, but in service of executing on your focused list of ideas. As comps are created and reviewed, they should be viewed through the lens of “do we think this is the right design for the solution” as well as “do we think this design is attractive.”
Of course you want both. Of course the new design needs to serve the aesthetic of the brand. But, equally important, it needs to serve the solution.
Once stakeholders have approved these comps, pitch your users again, get their feedback and iterate as needed until they are happy.
The output: Stakeholder and user-approved design comps.
Then we build prototypes & validate them with users
Before you have developers build out the final implementation of your new design, create clickable prototypes and see how your users interact with it. This can save you a lot of time, money and agony in the long run. Gather feedback from these user tests to iterate your way to the final product.
The output: A final implementation of the new design that has been tested and validated by users.
And finally we roll out gradually while measuring, learning and iterating
Why launch at 100% when you can do this with much less risk and still obtain the same benefit?
Let’s face it, the days of the big reveal are numbered. Facebook, Google and the other best-of-breed digital companies have moved away from the practice of pushing new designs and features to 100% of their users.
Follow their lead and roll out your new design gradually. Start with just enough users that you can collect real and meaningful data on the impact of your changes. Choose them randomly so you don’t bias your data, but then let them know what’s happening and give them a chance to opt-out. This can be done with a simple message:
“As a valued user of our site, you’ve been selected to see our new design. If you wouldn’t mind using it, we’d greatly appreciate the opportunity to learn from your experience. If you do, no worries, click here to go back to our previous design. And if you like what you see, tell your friends about it!”
Just measuring the number of people who choose to opt out of your new experience will be telling. Those who don’t will give you invaluable insights into how your new design will perform when it’s out in the world for all to see.
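The post doesn’t prescribe an implementation, but the rollout it describes is commonly done with deterministic, hash-based bucketing: hashing the user ID (rather than rolling the dice on every visit) keeps each user’s experience stable across sessions, which is what keeps the sample random but the data clean. Here’s a minimal sketch; the function name, the 5% starting percentage, and the opt-out set are all illustrative assumptions, not something from the post:

```python
import hashlib

def in_new_design(user_id: str, rollout_pct: float, opted_out: set) -> bool:
    """Deterministically decide whether a user sees the new design.

    Hashing the user id keeps each user's assignment stable across
    visits, so the data isn't muddied by users flipping between designs.
    """
    if user_id in opted_out:
        return False  # honor the "click here to go back" opt-out
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return bucket < rollout_pct

# Start with just enough users to collect meaningful data, e.g. 5%.
opted_out = set()
shown = sum(in_new_design(f"user-{i}", 0.05, opted_out) for i in range(10_000))
```

Raising `rollout_pct` later expands the rollout without reshuffling anyone: every user already in the new-design bucket stays there.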
Learn from this real data and iterate as needed. Only roll out more broadly when you get the outcome you want with a significant subset of your users.
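“A significant subset” here is a statistical question, and while the post doesn’t specify a method, a standard way to answer it is a two-proportion z-test comparing the new design’s conversion rate against the control’s. This is a sketch of that general technique, with made-up numbers, not a procedure from the post:

```python
from math import sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Z-score for the difference between two conversion rates.

    conv_a/n_a is the control, conv_b/n_b is the new design.
    A positive z means the new design converts better than control.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: control converts 500 of 10,000 visitors (5.0%),
# the new design 570 of 10,000 (5.7%).
z = two_proportion_z(500, 10_000, 570, 10_000)
# |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
```

Only expanding the rollout once the z-score clears your significance threshold is what turns “we think it’s better” into “we know it’s better.”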
The output: The majority of your users experiencing a new design that solves real problems. And it looks great too!
The new design is live and things are better than they were before—now start optimizing!
By the time you roll out your new design to 100%, you know you are in a better place. Otherwise, you wouldn’t have rolled it out. Instead of panic and triage, you’re ready to optimize.
The real work begins now!
You know what they say about history repeating itself
After I gave a talk on this subject at Opticon, someone asked me why I thought this redesign cycle keeps repeating itself. It was a great question that got me thinking.
My hypothesis is it’s because failed redesigns are embarrassing experiences that nobody wants to talk about. And because nobody talks about them, nobody else learns from the mistakes. As they say, if we don’t learn from history, it’s destined to repeat itself.
What do you think? We’d love to hear your redesign stories and the lessons you’ve learned.
Thank you for reading,
Ryan Garner is the co-founder of Clearhead. He oversees optimization programs for all of Clearhead’s amazing clients. When he’s not busy testing or raising his two young sons, his favorite hobby, like so many in Austin, is talking about how awesome Austin is.