A/B testing is a lot more complex than most marketers let on, which is why we decided to host a practical webinar outlining 2 Strategies to Increase Your A/B Testing Win Rate. The webinar aims to give you helpful strategies you can use in your day-to-day experimentation program.
Many of our webinar topics are inspired by popular articles our strategy team writes on our blog. In this case, our post on The Pitfalls of A/B Testing, written by Widerfunnel’s Director of Strategy, James Flory, served as our inspiration. If you haven’t had a chance to take a look, we highly recommend it.
Here is a summary of the topics he covers:
- Not planning your optimization roadmap
- Testing too many elements together
- Ignoring statistical significance
- Using unbalanced traffic
- Failing to follow an iterative process
- Failing to consider external factors
The webinar, brought to you by James Flory and Sr. Experimentation Strategist Alex Mason, focused on points #2 and #5. For the sake of simplicity, I’ll focus on point #2: Testing too many elements together.
Testing too many elements together
Remember when you used to play video games like Street Fighter or Mortal Kombat? And then randomly mashed the controller buttons hoping to create the best moves to win the fight? You might have actually won, but you likely have no idea how you won and would be hard-pressed to repeat the process.
Well, the same process can happen with experiments.
“The more precise you are when testing elements, the clearer and more insightful your results will be.”
– James Flory, Director of Experimentation Strategy, Widerfunnel
Keep in mind that experimentation can help:
- Mitigate risk by testing what works before going all in
- Improve the overall customer experience
- Drive and generate customer insights that can lead to growth
But what sits at the heart of experimentation, and is critical to its success, is the ability to draw causal or correlated inferences that inform business decisions. If you can’t come to a conclusion based on what you’ve observed, then you haven’t accomplished what you set out to do.
This is why testing too many elements can stifle your results, as you are not able to clearly say: “this is what moved the needle.”
Often this outcome occurs because we get impatient and want to make dramatic changes quickly. Sometimes we want to shake things up as much as possible to get results. The problem is that this muddies the waters and makes it much harder to understand later what worked and what didn’t.
Watch as Alex explains the problem in this example scenario:
If we can’t change too many variables at once, what can we do?
Here’s some good news: a few helpful strategies will allow you to make changes and clearly understand what happened.
#1 Fractional Factorial Design
This methodology can be tricky to explain without visuals, so I’ve included a video below. Watch as James walks us through a real example from our work with The Motley Fool:
Isolating The Winning Variation
Think of a restaurant serving combinations of meals. Let’s assume chicken and rice is a staple dish on the menu, and now they want to test other variations.
- Chicken and Mashed Potatoes
- Beef and Mashed Potatoes
- Beef and Salad
- Fish and Salad
- Fish and Rice
The restaurant wants to understand which dish they should offer based on customer feedback.
Over time, they received feedback that patrons most enjoyed the dishes that included mashed potatoes as a side. Taking it a step further, they also heard that the fish performed better than the other proteins. We can therefore deduce that the mashed potatoes and the fish are having a positive impact on customers.
The next logical step is to take the two winning elements and test them together—fish and mashed potatoes.
“A good losing test is just an ingredient for a winning test.”
– James Flory
One of the advantages of fractional factorial design is that we can use logic to identify positive impacts even from negative results. Even if all the dishes performed poorly, you can still isolate that fish and mashed potatoes positively affect reviews. This result becomes your guiding light, and you can use these insights for future tests.
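The deduction in the restaurant example can be sketched in a few lines of code. This is a minimal illustration, not the webinar’s actual methodology: the dish ratings below are hypothetical numbers invented for the sketch, and each ingredient’s effect is estimated by averaging the ratings of every dish that contains it.

```python
from collections import defaultdict

# Hypothetical average customer rating for each tested dish:
# (protein, side) -> score. These numbers are illustrative only.
dish_ratings = {
    ("chicken", "rice"): 3.1,
    ("chicken", "mashed potatoes"): 3.8,
    ("beef", "mashed potatoes"): 3.6,
    ("beef", "salad"): 2.9,
    ("fish", "salad"): 3.4,
    ("fish", "rice"): 3.7,
}

def average_by_ingredient(ratings):
    """Average the ratings of every dish containing a given ingredient."""
    scores = defaultdict(list)
    for (protein, side), score in ratings.items():
        scores[protein].append(score)
        scores[side].append(score)
    return {ingredient: sum(s) / len(s) for ingredient, s in scores.items()}

effects = average_by_ingredient(dish_ratings)
best_protein = max(["chicken", "beef", "fish"], key=effects.get)
best_side = max(["rice", "mashed potatoes", "salad"], key=effects.get)
print(best_protein, best_side)  # -> fish mashed potatoes
```

Notice that no single dish needs to be a winner for this to work: the per-ingredient averages surface fish and mashed potatoes as the elements to combine in the next test.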
Building On Learnings
In the restaurant example, fish and mashed potatoes may have been the ingredients selected for a winning dish, but we could only reach that conclusion by testing other dishes first. The lesson here is that you should stack the learnings from the results of each test to increase your odds of finding winning experiences in further tests.
Need a tool to manage your test results, collect learnings & insights, and collaborate with your team?
Scale your experimentation program with Liftmap
In summary, Fractional Factorial design can help you draw insights by making incremental changes between a series of variations. Remember that running multiple variations requires more traffic because you will need to split traffic between variations to reach statistical significance. Not everyone has that level of traffic.
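To make the traffic requirement concrete, here is a rough sample-size sketch using the standard two-proportion formula (not a calculation from the webinar). The baseline conversion rate, target lift, and the usual 95% confidence / 80% power settings are all example assumptions.

```python
from math import ceil, sqrt

def sample_size_per_variation(baseline_rate, relative_lift,
                              z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variation for a two-sided
    two-proportion test at 95% confidence and 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: 5% baseline conversion rate, aiming to detect a 10% relative lift.
n = sample_size_per_variation(0.05, 0.10)
print(n)  # tens of thousands of visitors per variation
```

With six variations, as in the restaurant example, you would need roughly six times this figure, which is exactly why the strategies below exist for lower-traffic programs.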
For those with traffic limitations, here are three strategies you can leverage:
#2 Test in key areas of the customer journey
This approach is about designing a very clear test: the changes are isolated and placed where there is high visibility for impact. Selecting the right spot in the customer journey is key to maximizing the impact of the change.
Here’s another example from Alex, using a test we ran with an energy company. Notice how the focus is the button copy (a key isolation), and it sits at a very impactful point in the journey: the page where a visitor chooses their energy plan while completing their order.
If this test is a winner, you will definitely notice it in the form of more orders. And because the only change you made was the button copy, it’s clear what caused the result.
Need help deciding where in your journey to test? Read more on the PIE framework
#3 Try bolder changes
For this one, it’s go hard or go home! This approach is about doing something noticeably different, so it’s evident that the results are attributable to the change you made. Examples include experimenting with your lead-generation process, the placement of offers and promotions, or bold pricing or business-model adjustments. A bold change doesn’t necessarily need to be big: you can still make bold moves with small changes.
A great example of bold change is our work with a well-known ink subscription business out of the US. Typically, they would have someone sign up on the page, and then they would direct them to a page where they could choose their plan.
Now, let’s build an experiment with bold changes.
Take a look at the two screenshots below. They may not seem that different, but it’s a bold move in terms of changing their business model.
The difference is that all new customers in this variation get one FREE month of this service. This would cause a significant impact on the company’s business model, but if it increases overall subscription sign-ups enough to offset this increased cost, it may be worth it in the long run.
So how did it perform?
What a surprise (cue sad trombone)! We offered a free month, and fewer people signed up? This result is not what you would expect, right?
You would expect “free” offers to lead to more engagement, but in this case, we incurred the higher costs of the offer and got fewer sign-ups overall.
This may seem like a “fail” story, but it is highly valuable information to have learned. We can say with certainty that “1 month free”, counter-intuitive as it seems, has a negative impact on on-page sign-ups. Also, keep in mind that this change was NOT deployed without testing first, so we saved this client a lot of money by avoiding the scenario at a larger scale.
It was a significant risk mitigation tactic, but it also gave insight into what customers want and helped guide the business in their next direction. This example demonstrates how bold changes can give clear insights into what worked, and just as importantly, what did not work.
#4 Group changes under one hypothesis or theme
If you don’t have the resources or the traffic to create a test using the strategies already mentioned, there’s another strategy you can pull out of your back pocket.
Again, the goal here is to effectively cluster changes together while still being able to draw clear insights.
Another way to do this is by leveraging a tried-and-tested framework, created many years ago by Widerfunnel, that helps you put yourself in your customer’s shoes and look at an experience objectively.
Watch this video where Alex explains the LIFT Model and how it works.
As Alex explains in the video, there are 6 factors within the LIFT model you can use to cluster changes:
- Value Proposition
- Relevance
- Clarity
- Urgency
- Anxiety
- Distraction
The idea is to choose one of these factors and focus on the types of changes that fit that category.
We have an example here via our work with weBoost, a company that offers cell phone signal boosters. In this example, we focused on changes that fall under “Clarity”.
Focusing on improving clarity, we made the following changes:
- Clarified the value proposition by condensing a paragraph into three short bullet points
- Clarified the value of the product by adding reviews and testimonials
- Clarified the pricing
- Clarified how to take action as a next step
By categorizing all these modifications under one theme, we can still gain valuable insights despite making several changes.
Let’s run the test.
As you can see, this test performed amazingly well, with a 27% increase in sales. Clarifying why visitors should buy this product was a highly effective driver of conversions. This insight into the value of clarity can now be applied elsewhere on the site, further driving improvements across the organization. “Learn and iterate” is our mantra here.