Our favorite ‘A-ha!’ moments from the past year
‘A-ha!’ moments are what optimizers live for.
These are the moments that make you jump up and down with excitement.
‘A-ha!’ moments are the insights that lead to substantial revenue lift and profitable growth for your company.
To kickstart the New Year, we wanted to share some of our favorite ‘A-ha!’ moments of the past year.
It seemed only fitting that we begin this series with a test that reminded us just how important testing actually is.
This case reminded us that, when it comes to increasing conversions, there is huge potential in personalization.
And that ignoring personalization can be costly.
A case of misused space
This past year, we were working with Reserve Direct, an online vacation and travel website that helps visitors book hotels and in-destination activities.
We focused on optimizing their two most popular destination sites: Branson, MO and Orlando, FL. During our initial analysis (also known as the Explore phase), we noticed that users were not engaging with the social media icons located in the site-wide header.
These icons seemed to add very little value to the user experience, yet they occupied some of this client’s prime website real estate. So, we decided to ditch the icons in favor of more valuable social proof and credibility elements.
We created two variations of the header:
Variation A focused on social proof and featured a Facebook-esque widget that displayed the number of ‘Likes’ of the client’s Facebook pages.
And variation B highlighted the client’s credibility, featuring a review graphic that showcased high user ratings and satisfaction.
Both variations outperformed the control: the Branson site saw an 8.25% increase in completed orders, and the Orlando site saw a 12.18% increase.
The fascinating thing about this test, however, was that the winning variations were different for each site.
An unexpected insight
We originally assumed that, because both sites used the same design template, we could treat visitors to each site as the same (i.e. results from one site could be implemented without testing on the other site).
We were wrong.
Variation B (the credibility variation) won on the Branson site, while the social proof variation showed a 10.9% decrease.
Conversely, the social proof variation won on the Orlando site, while variation B had no measurable impact, positive or negative.
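For readers curious how lift figures like these are typically evaluated, here is a minimal sketch of a standard two-proportion z-test comparing a variation against a control. The numbers below are made up for illustration and are not the client's actual data.

```python
from statistics import NormalDist

def ab_test_summary(control_conv, control_n, variant_conv, variant_n):
    """Compare a variation against a control with a two-proportion z-test.

    Returns the relative lift and a two-sided p-value.
    """
    p_c = control_conv / control_n
    p_v = variant_conv / variant_n
    lift = (p_v - p_c) / p_c
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (control_conv + variant_conv) / (control_n + variant_n)
    se = (p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n)) ** 0.5
    z = (p_v - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return lift, p_value

# Hypothetical traffic and conversion counts, not real client data
lift, p = ab_test_summary(control_conv=400, control_n=10_000,
                          variant_conv=433, variant_n=10_000)
print(f"lift: {lift:+.2%}, p-value: {p:.3f}")
```

A sketch like this also shows why sample size matters: the same 8.25% relative lift can be statistically inconclusive at low traffic and decisive at high traffic, which is why each site needed its own test.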
“We weren’t expecting to see such a difference in response from the visitors on the two sites, and neither was the client when we first began. It really showed the value of testing: if we had only tested on one site, we could have negatively impacted the other.”
– Nick So, Optimization Strategist
This ‘A-ha’ moment reminded us that testing always trumps assumptions. It also highlighted the many possibilities around personalizing marketing efforts for specific visitor segments.
Were there demographic differences between visitors to the Branson site and those to the Orlando site? Why were visitors to the Orlando site influenced by social proof, while their counterparts on the Branson site responded negatively to the same element?
Questions for future Explore sessions.
This post is the first in a five-part series. Stay tuned for more ‘A-ha!’ moments in the coming days.