When you think of ‘conversion optimization’, what’s the first thing that pops into your head?
- Testing buttons? Other design elements?
- Using persuasion techniques and psychological triggers?
- Landing page optimization?
- Yes! I want more clicks!
- Yes! I want more money!!
- WHAT THE HECK IS CONVERSION OPTIMIZATION!?
Okay, okay, we could do this for a while. Here’s the point…
When marketers talk about conversion optimization, we tend to talk about all of the factors that go into creating an experiment (and the expected return on investment, of course).
And, this makes sense. We’re all about online elements that you can see, read, engage with — and how these elements motivate (or demotivate) our users to click, follow and purchase.
There’s an often-ignored aspect to conversion optimization that can cut your testing velocity in half, or worse!
I’m talking about what goes on behind the scenes: the development side of an experiment. That phase where an idea, a wireframe, a design brief is transformed into a live experiment. It’s an unknown, and possibly scary, process for many marketers. Who really knows what those developers are up to anyway, right?
This integral part of conversion optimization doesn’t get much attention. But it’s so important. If your development process isn’t smooth, launching an experiment can take forever. You can get caught up in never-ending QA, which delays your experiment, which delays your results, which is frustrating to no end.
So, in this post, we’re revealing some of the most common technical difficulties associated with conversion optimization and how your team can conquer them.
Don’t let the following seven technical barriers kill your optimization program!
1. The Flicker
Probably the most common error in conversion rate optimization is known as the flicker, or Flash of Original Content (FOOC).
An example of FOOC. This is not how you want to be A/B testing.
I’ll offer a brief overview here, but if you really want to dig into FOOC, read “11 ways to stop FOOCing up your A/B tests”.
When you’re A/B testing, you’re injecting code, meaning you’re making changes to a page in a browser once that page is loaded. You’re allowing the page to load its original content and then you’re adding script that changes that content into something else. If you haven’t accounted for FOOC, your visitors may be seeing that flicker or flash of original content before they see the proper variation.
There are several ways to combat FOOC. One is to ensure that the snippet for whatever testing tool you’re using is in the head of the document, preferably at the top, just after the meta tags. Most people put this snippet in the body, but don’t make this mistake! That placement means everything else loads before the snippet is detected.
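Beyond snippet placement, a common complementary tactic is to briefly hide the element under test until the variation has been applied. Here’s a minimal sketch of that pattern — the function names are illustrative, not any particular testing tool’s API, and the one-second timeout is an assumed safety value:

```javascript
// Sketch of a common anti-flicker pattern: hide the element under test
// before first paint, then reveal it once the variation code has run
// (or a safety timeout fires, so the page is never left hidden).

// Build the temporary "hide" rule for the element under test.
function antiFlickerCss(selector) {
  return selector + " { opacity: 0 !important; }";
}

// In a browser, this would run from the <head>, before the body renders:
function hideUntilReady(selector, timeoutMs) {
  var style = document.createElement("style");
  style.id = "wf-anti-flicker";
  style.textContent = antiFlickerCss(selector);
  document.head.appendChild(style);

  function reveal() {
    var el = document.getElementById("wf-anti-flicker");
    if (el) el.parentNode.removeChild(el);
  }

  // Safety net: never leave the page hidden if the testing tool fails to load.
  setTimeout(reveal, timeoutMs);
  return reveal; // call this as soon as the variation has been applied
}
```

The trade-off is deliberate: a brief blank area is usually less jarring to visitors than watching the original content snap into the variation.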
The aforementioned post outlines 10 other ways to conquer conversion optimization-induced FOOC: if you’re struggling with flicker, it’s worth a serious look!
2. Troubled tracking
Without proper goal tracking, your A/B test may be DOA. Skewed tracking can result in a losing variation being declared a winner, or a winning variation being declared a loser. Goal tracking will make or break an experiment: an incorrect setup can lead to wasted developer time and traffic allocation.
There are two types of goals: visit goals and click goals. The former is assigned to a page URL, the latter to an interaction with a page. You want to make sure that you’re tracking the right goal at the right time. For example, if you want to track engagement with a call-to-action that directs visitors to a new page, you want to track how many people are actually clicking, rather than the page URL itself.
When a visitor clicks something, a message (also known as an API call) is sent to your testing tool saying ‘hey, this got clicked’. However, if there is a page redirect occurring at the same time as the click, the redirect will override the tracking of the click. If you don’t allow enough time for the click goal to register before you send your visitor to a new page, you could lose that click.
What’s more, depending on a visitor’s computer speed or the browser they’re using, their click may or may not register in time — so the data as a whole will be skewed. You’ll see clicks coming in, but you may only be capturing a fraction of the actual clicks.
The solution here is pretty simple: you’ve gotta allow time before you redirect the page. Standard practice at WiderFunnel is to wait one second after the click goal has been sent before redirecting. This prevents the cancellation of the API call.
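The pattern above can be sketched as a small click handler. This is an illustrative sketch, not a specific tool’s API: `sendGoal` stands in for the testing tool’s goal-tracking call, `redirect` would set `window.location` in a real page, and the scheduler is injectable purely so the logic can be tested:

```javascript
// Send the click goal first, then redirect after a short delay so the
// tracking call isn't cancelled by the page unload.

function trackThenRedirect(sendGoal, redirect, delayMs, schedule) {
  schedule = schedule || setTimeout; // injectable for testing
  return function (event) {
    if (event && event.preventDefault) event.preventDefault(); // stop the instant navigation
    sendGoal();                  // fire the API call to the testing tool
    schedule(redirect, delayMs); // navigate only after the goal has had time to register
  };
}
```

Wired up to a real CTA, you’d attach the returned function as the click handler with a `delayMs` of 1000, per the one-second practice described above.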
3. Confused selection
A selector is used to apply rules to certain elements within the document object model (DOM). The DOM is the tree of nodes a browser builds from a page’s HTML — it forms the structure of the website, and it’s what your variation code actually manipulates.
Say you’re running an experiment on multiple pages. You’re trying to change the background color to blue on a certain area of a certain page. But, you find that on another page, where you don’t want the blue background, the blue background is appearing. They’re totally different areas, but they’re probably using the same selector.
It’s always a good idea to add prefixes to your selectors to 1) make them unique and 2) distinguish them from the selectors that belong to the source code. We add a handy ‘wf’ prefix to ours, to keep everything clear.
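As a sketch of the idea, here’s a hypothetical helper that rewrites the class names in a selector with a prefix (it only handles classes, which is usually what experiment styles target; the `wf` default mirrors the prefix mentioned above):

```javascript
// Prefix every class in a selector so variation styles can't collide
// with a class of the same name in the site's own stylesheet.
// e.g. ".hero .cta" -> ".wf-hero .wf-cta"

function prefixSelector(selector, prefix) {
  return selector.replace(/\.([a-zA-Z_-][\w-]*)/g, "." + prefix + "-$1");
}
```

With unique, prefixed selectors, a rule written for one page can’t accidentally restyle an unrelated element that happens to share a class name elsewhere on the site.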
4. Garbage code quality
Good code is really important in testing in general. Because you’re injecting new code into an existing, fully-formed website, you have to make sure that whatever changes you make are as minimal as possible.
If your code is not on point, you risk slowing your site’s loading speed and creating extra bugs.
To avoid messing with your site’s load speed:
1) Cache your variables. Rather than querying the DOM for the same element or selector multiple times, store the result in a variable the first time and reuse it. Querying the DOM once instead of repeatedly can substantially speed up your variation code.
2) Load your resources in sprites. According to our dev team, this is just good standard practice, but I thought I’d cover all the bases. A sprite combines multiple images into a single file, so the browser makes one request instead of one per image. If you loaded each image individually, the browser would have to fetch each one separately, slowing load time.
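To make the caching point in 1) concrete, here’s a small sketch using a stand-in “DOM” that counts lookups — in real variation code, `dom.query` would be `document.querySelector`:

```javascript
// A stand-in "DOM" that counts lookups, to make the caching win visible.

function makeCountingQuery() {
  var lookups = 0;
  return {
    query: function (selector) { lookups++; return { selector: selector }; },
    count: function () { return lookups; }
  };
}

function use(el) { return el.selector; } // stands in for any work on the element

// Uncached: three separate DOM queries for the same element.
function uncached(dom) {
  use(dom.query(".cta"));
  use(dom.query(".cta"));
  use(dom.query(".cta"));
}

// Cached: query once, then reuse the saved reference.
function cached(dom) {
  var cta = dom.query(".cta");
  use(cta);
  use(cta);
  use(cta);
}
```

Running `uncached` costs three lookups; `cached` costs one. On a real page, each extra lookup means re-walking the DOM, and the savings compound quickly in variation code that touches many elements.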
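And for 2), the sprite technique boils down to simple background-position math. This sketch assumes a horizontal strip of equal-width icons — the layout and the 32px size are illustrative, not a standard:

```javascript
// With icons stacked in one sprite image, each icon is shown by shifting
// the background, rather than loading a separate image file per icon.

function spritePosition(index, iconWidth) {
  // Shift the strip left by one icon-width per index.
  return "-" + index * iconWidth + "px 0";
}

// The CSS for the third icon in a 32px strip would then be:
// .wf-icon { background: url(sprite.png) no-repeat; width: 32px; height: 32px; }
// .wf-icon-cart { background-position: -64px 0; }  /* spritePosition(2, 32) */
```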
To avoid creating extra bugs on your site:
Make sure that the code you’re injecting doesn’t rely too heavily on the source code’s internals. If you bind your variation code too tightly to the existing code, you can create states the site has never had to handle. Without precautions, you might send something to the backend that comes back as an error, simply because the source code has never dealt with that situation before.
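A minimal defensive-injection sketch, assuming the hypothetical `dom.query` stands in for `document.querySelector`: check that the element the variation depends on actually exists before changing it, instead of assuming the source page always provides it.

```javascript
// Guarded variation: bail out cleanly if the expected element is missing,
// rather than throwing or sending malformed data upstream.

function applyVariation(dom) {
  var cta = dom.query(".wf-cta");
  if (!cta) {
    return false; // element not on this page; leave the original untouched
  }
  cta.text = "Start your free trial"; // example copy change
  return true;
}
```

Returning a flag also gives you a hook for logging how often the variation silently skipped a page — a useful QA signal in its own right.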
5. Responsive confusion
If your site is responsive (as is most likely the case) and you decide to run a site-wide test (as often happens), you must keep an eye on dynamic elements.
When there are dynamic elements on a site — like descriptions on a product page that can range in length — you will run into trouble if you don’t consider formatting. You can’t just build for that product description element on a single page, because there’s a chance that it’s totally different on another page. It might be three words long in one place and paragraphs long in another; you have to make sure that these differences won’t lead to a wonky display.
The best way to tackle this one is through due diligence; it’s a problem precisely because it’s so easy to overlook. One of our developers, Thomas Davis, recommends switching pages every so often when you’re coding a variation (if the experiment is site-wide, that is). Build an element to work on multiple pages: start on one product page, continue on another product page. This way, you can ensure that the formatting doesn’t break from page to page.
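One hypothetical guard against variable-length content, sketched below: cap a description at a fixed length so a three-word blurb and a multi-paragraph one render in the same layout (the character limit and the helper itself are illustrative, not a rule):

```javascript
// Clamp a product description so wildly different lengths can't break
// the variation's layout from one page to the next.

function clampDescription(text, maxChars) {
  if (text.length <= maxChars) return text;
  // Cut at the last space before the limit so words aren't split mid-way.
  var cut = text.lastIndexOf(" ", maxChars);
  if (cut === -1) cut = maxChars;
  return text.slice(0, cut) + "…";
}
```

In practice you’d often reach for a pure-CSS equivalent (e.g. `overflow: hidden` with a fixed height), but the principle is the same: design the element for the longest and shortest content it could hold, not for the one page you built it on.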
6. Cross-browser code (aka The Internet Explorer conundrum)
As simple as it seems, it’s really easy to overlook cross-browser compliance when you’re building a test. This can be a chore, particularly when you’re working with Internet Explorer.
*For an informative discussion on why Internet Explorer is such a thorn in most developers’ sides, check out this Reddit thread.
When you’re busy and under pressure (say, to launch an experiment quickly), it can be easy to forget about testing your code across multiple browsers and devices. A variation can look completely normal in Chrome, but display incorrectly in IE.
You’ve gotta put in the time, as with ensuring your code is responsive, to be diligent about cross-browser testing. We test in all browsers and we test on both Mac and PC, all before our tests go to QA to ensure a seamless experience for our clients’ visitors. It’s important to account for all of the options, otherwise you risk revealing what’s behind the curtain (and that never ends well).
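One habit that makes cross-browser code cheaper to maintain is feature detection: rather than branching on browser names, test for the capability itself and fall back when it’s missing. A minimal sketch (the return value exists only so the example is checkable; real code would just attach the handler):

```javascript
// Feature detection: prefer the standard API, fall back to the legacy
// IE8-and-earlier API when the standard one isn't available.

function addListener(el, type, handler) {
  if (el.addEventListener) {
    el.addEventListener(type, handler);   // standards-compliant browsers
    return "addEventListener";
  } else if (el.attachEvent) {
    el.attachEvent("on" + type, handler); // legacy Internet Explorer
    return "attachEvent";
  }
  return "none"; // no event API at all; degrade gracefully
}
```

The same pattern applies to CSS features and newer JavaScript APIs: detect, then degrade gracefully, so one stubborn browser can’t contaminate your experiment data.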
7. What’s the ‘Why’?
“In order to get good at conversion optimization, you need to know why you’re doing it.” – Tony Ta, Lead Developer, WiderFunnel
This last one is less of a technical mistake, and more a cautionary suggestion.
Let’s say that your development resources are limited. Your organization relies on one development team for internal maintenance and improvement. They’re very busy and their time is expensive (those specialized skills, and all).
You’ve decided to venture into conversion optimization because, if all of the work your dev team is doing isn’t proven to increase conversions on your site…what’s the point?
But, your devs aren’t in the loop. Maybe they have some familiarity with CRO, but maybe they don’t. They’re not part of your marketing team; their job is to code, to fix, to problem-solve in a particular way. And now they’re coding experiments that they don’t really understand.
Don’t go this route. It’s particularly important to your optimization process that you let your developers in on the ‘why’ behind what you’re doing. It can be tough to take the time to explain, particularly if both departments are slammed (as is often the case).
But, without the ‘why’, mistakes will be made and problems will be missed.
“A dev needs to know the end goal, the big picture as to why they are doing what they are doing. This allows them to see solutions that others may not see.” – Tabish Khan, Front End Developer, WiderFunnel
When I first sat down to write this post, I sent a survey to our development team. I asked a bunch of questions about why conversion optimization is so difficult from a programming perspective.
Not only did I learn about the common technical difficulties outlined above, I also learned a little bit about what makes our team so great at what they do.
One word: focus. Conversion optimization coding isn’t simple. Our development team is good at what they do because it’s all they do.
“We’ve done this 1,000 times. We know what to do when we face an issue. We understand conversion rate optimization. We know every tool and all of the hidden features. We know how to make a variation work without breaking the existing code.” – Thomas Davis, Super Confident Front End Developer, WiderFunnel
If you’re considering CRO as a strategy, consider your development team. Consider their bandwidth, their focus, their time. If you want to implement a rapid and successful testing program, assemble a team that can give your strategy their full attention. Or bring in a partner with a specialized team at the ready.
For more information on how to create and implement a successful conversion rate optimization strategy, download the free guide below.