Do your experiments launch without a hitch?
Website quality assurance for your online controlled experiments is preventative medicine for the nasty bugs that will cost your business revenue.
It’s a crucial step in designing an experience to meet the emotional needs and states of your customer. It ensures an experience that is both functional and delightful.
Think of the lost revenue if those bugs on your website interfere with your desired action: your customer’s shopping cart might be full but payment options are glitching.
Your goals might not be registering in your experimentation tool, and your team might not notice for the first few days after your experiment’s launch. The results would be invalidated, forcing you back to square one.
Poor quality assurance costs you.
You’ll find you are paying more to run an experiment, delaying its launch, or relaunching it entirely if your goals aren’t firing properly.
Not to mention the expensive catch-up your team will have to do to re-launch experiments once their results are invalidated.
But perhaps more importantly, it can create a negative experience for those hard-to-acquire, first-time visitors.
Your design will be tested by users — your only choice is whether to run the test yourself before launch so that you can fix the inevitable problems while it’s cheap instead of playing expensive catch-up later.
Remember: when it comes to user experience, you won’t be forgiven for a bad impression. (A bad impression can be a lasting impression.)
And you’ll find that your visitors will bounce.
Instead of reacting to live bugs after your experiment is launched, which can drain resources and affect your visitor’s ability to convert, a website quality assurance process helps your experiments get through the production cycle as quickly and efficiently as possible.
If you haven’t established a defined website quality assurance process, now is the time. In this post, I’ll provide you with:
- A defined process for systematizing website quality assurance of your experiments to ensure program efficiency.
- A deeper understanding of how to optimize and track your internal processes so that your experts across disciplines work more effectively together.
- And a detailed checklist for QA’ing your next experiment so you can start reducing your development time and the cost of running experiments.
Why a standardized website quality assurance process matters
Website quality assurance is a proactive activity to ensure the health of your experimentation program.
When evaluating the impact of your experimentation program, you have to gauge not only the lift in revenue attributed to each winning variation, but also your own efficiency in launching experiments.
Every experiment takes human resources: your development team, UI designers, optimization strategists, and project managers all work hard to launch an experiment.
Whether that is a split A/B test or an experiment with multiple variations, your team’s ability to get an experiment through the production cycle can impact the business.
Unfortunately, bugs are a bottleneck to achieving experimentation velocity, organizational agility, and quality of the user experience.
According to Mike Loveridge, Head of Conversion Rate Optimization at Tsheets, you can implement program metrics and KPIs to help you understand how efficient your internal processes are at producing experiments.
Cost per test is a metric that I like to use. And also velocity; how long it takes to get tests through the process, and the speed through each stage—whether that’s copy, design, development, or QA.– Mike Loveridge
At WiderFunnel, we know that defined systems and processes help organizations of all levels scale their experimentation programs. A documented experimentation protocol helps you manage the program.
As Lauren Schuman, Director of Growth at MailChimp describes, a documented workflow enables the team to perform at the same high calibre across the organization:
“I spent a lot of time focused on workflow: tools and process. We’ve been moving so quickly and don’t always want to work on that piece, but everyone is aligned on its importance because that’s how you scale. So we’ve spent a lot of time documenting our workflow.”
A key component to document in your experimentation protocol is your website quality assurance process.
That’s because it is the last step before your experiment is launched and visible to your visitors. To maintain the integrity of your user experience, you want to ensure all experiments go off without a hitch.
Evaluate the health of your experimentation program by tracking numerous KPIs and metrics in a set time period:
- The total hours logged on an experiment
- The total cost of developing an experiment
- The number of experiments that are launched
- The number of live bugs that your team found and resolved
- The average hours that your team spent resolving a live bug
You may have other operational metrics and KPIs that can help you understand where and how to optimize your processes, but metric setting requires that every person involved in your experimentation program understands and records their efforts for effective tracking.
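As a minimal sketch, the KPIs above can be rolled up from simple per-experiment records. The field names and numbers here are hypothetical examples, not from any particular tracking tool:

```python
# Illustrative sketch: rolling up experiment-program health metrics.
# Experiment records and values are hypothetical examples.
experiments = [
    {"name": "A", "hours": 24, "cost": 2400, "launched": True,  "live_bugs": 1, "bug_fix_hours": 3},
    {"name": "B", "hours": 40, "cost": 4000, "launched": True,  "live_bugs": 0, "bug_fix_hours": 0},
    {"name": "C", "hours": 16, "cost": 1600, "launched": False, "live_bugs": 0, "bug_fix_hours": 0},
]

launched = [e for e in experiments if e["launched"]]
total_bugs = sum(e["live_bugs"] for e in experiments)

report = {
    "total_hours": sum(e["hours"] for e in experiments),
    "total_cost": sum(e["cost"] for e in experiments),
    "experiments_launched": len(launched),
    "live_bugs_resolved": total_bugs,
    # Average hours spent resolving each live bug (0.0 if none occurred).
    "avg_hours_per_live_bug": (
        sum(e["bug_fix_hours"] for e in experiments) / total_bugs if total_bugs else 0.0
    ),
    # Cost per test: total spend divided by experiments actually launched.
    "cost_per_test": sum(e["cost"] for e in experiments) / len(launched),
}
print(report)
```

Reviewing a report like this each quarter makes it obvious where QA time (and bug-fix time) is eating into your cost per test.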
Instead of working ad hoc on live bugs, spend the time to develop procedures for website quality assurance.
When your website quality assurance process is running like clockwork, your experimentation program can focus on activities that generate more business impact.
Don’t rely on memory; rely on a checklist
In his New Yorker article, “The Checklist,” Atul Gawande described how effective the checklist was in completing complex activities that are prone to human error.
To illustrate, he described the process of introducing a checklist into the operating room as a way of reducing line infections. At the time, 11% of patients experienced an infection due to a line.
There are five steps for doctors to complete when they are putting in a patient’s line. When nurses were employed to record the doctor’s steps with a checklist, they found that doctors missed at least one or two of the steps, despite being well-trained in the proper procedures.
When they formally implemented a checklist protocol, the infection rate dropped from 11% to nearly zero. In addition to saving the lives that would have been lost to line infections, the hospital saved two million dollars.
While website quality assurance is not as high-stakes as healthcare, there is something to be said about implementing a checklist into your website quality assurance process. As Atul Gawande recalled, “Checklists established a higher standard of baseline performance.”
With the simple tool of a checklist, your team will rely less on their memory and more on following proper protocol.
It provides a record of this quality assurance to ensure that everyone who conducts a visual review of your site is performing at the same calibre—especially as your experimentation scales to be an organizational initiative.
One of the things about checklists is people misunderstand it to be a simple command and control tick box exercise.– Atul Gawande, in the Harvard Business Review article, “Using Checklists to Prevent Failure.”
While such regimentation of repetitive tasks can seem to undermine the knowledge and experience of your team, the checklist ensures that all areas of your site are checked against the control on every experiment variation.
Start now with our website quality assurance checklist!
Eventually, as you have various experiments affecting your website at the same time, a standardized checklist ensures that your experiments achieve the same rigorous QA process across every feature and every page of your funnel.
That means that as you scale your experimentation program, it is even more crucial to systemize your website quality assurance process.
Start with a basic checklist that covers your goals, the different requirements of an experiment (your experiment goals for tracking, your LIFT Analysis, your designs, etc.) and then gradually add to your checklist as you refine your process and evolve your optimization strategy.
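A checklist like this can live in a spreadsheet or project tool, but as a minimal sketch, here is one way to represent it as data you can extend over time. The experiment name and checklist items are illustrative:

```python
# A minimal, extensible QA checklist sketch; item text is illustrative.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    description: str
    done: bool = False

@dataclass
class QAChecklist:
    experiment: str
    items: list = field(default_factory=list)

    def add(self, description):
        self.items.append(ChecklistItem(description))

    def complete(self, description):
        for item in self.items:
            if item.description == description:
                item.done = True

    def outstanding(self):
        return [i.description for i in self.items if not i.done]

checklist = QAChecklist("Homepage hero test")
for step in [
    "Experiment goals added and firing",
    "Variations match approved designs",
    "LIFT analysis hypotheses reflected in copy",
]:
    checklist.add(step)

checklist.complete("Experiment goals added and firing")
print(checklist.outstanding())
```

The point of the structure is that `outstanding()` gives you an unambiguous answer to "are we done QA'ing?", and adding a new check as your process evolves is one line.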
You’re the user now.
Website quality assurance is a user-centric activity. Your experiments must be tested in preparation for your visitors and QA is the necessary procedure for maintaining a functional and delightful user experience.
When you soft launch an experiment for QA, you are assuming the role of the end user. When you put on that hat, you’re the user now.– Simon Cho, Front End Developer at WiderFunnel
It’s the important work that shouldn’t be noticed by the end user. It requires discipline, rather than technical know-how. It requires a rigorous process, rather than ad hoc checks—particularly when you are QAing at scale.
That’s because a site without proper quality assurance can erode your visitor’s trust if your information is incorrect or your forms are broken.
It can also cause frustration if your visitor can’t make a purchase—who wants that?
Unfortunately, there are some misconceptions when it comes to QA. For one, many people believe that the automated procedure of unit testing is interchangeable with website quality assurance.
In actual fact, unit testing is the developer’s way of testing the validity of their code through code itself; it’s an automated system of ensuring the website’s functionality.
Unit testing does not replace the need for quality assurance in either server-side or client-side experimentation.
That’s because nothing can replace a human’s perspective in evaluating the user experience.
“Human touch is required to see and feel if the UI/UX is good enough for users. We need humans to make sure a button positioning is right,” explains Riry Juliani in “5 Myths of Testing by a Software QA.”
“Automated testing can tell us if the coordinates are correct, but only human experience can tell us if the buttons are properly placed and displayed. Manual testing is essential for this human touch.”
When we are talking about website quality assurance, what we are really talking about is a visual check of your experiments’ user interface.
In your website quality assurance process, you want to check that:
- Your website is working properly: No bugs, glitches, broken forms, etc.
- And that the user experience has no points of friction: The desired action that you want your visitor to complete is clear and uncomplicated.
Even if your experiment is located in one area of a website, you need to check the entire user experience.
You always have to make sure that your experiments don’t affect other pages on the website.– Neil Lim, Optimization Coordinator at WiderFunnel.
With e-commerce websites displaying hundreds of products, a thorough website quality assurance process can be lengthy.
Neil recommends adding a list of your top 50 products to your QA checklist to ensure that you are not missing bugs that would have an impact on revenue.
If your website is focused on customer acquisition, you might also want to map out the customer journey to understand where issues could potentially occur.
Are there multiple steps in your funnel to complete the desired action? Write these processes out, step by step, so that your team recreates the user experience in its quality assurance process.
Then, add them to your checklist to ensure this is a consistent practice across all the organization’s optimization efforts.
The step-by-step process of website quality assurance
Your optimization strategy is set and your experiments have been coded. It’s time for your QA process. Where do you start? And how do you make sure you aren’t missing a key component?
Here is a step-by-step process for conducting the website quality assurance of your online controlled experiments:
Prepare for your QA.
You have your experiments—every variation and the control—at hand ready to be reviewed. What else will you require?
First, gather all your necessary documentation: your custom checklist, your LIFT analysis, your interactive brief, your design files, and any other internal documentation.
You must also have a designated workspace with all the necessary devices: a Mac and a PC desktop, an iPhone and an Android phone, and an Android tablet and an iPad. (If you don’t have all the necessary devices, you might be neglecting a large segment of your traffic.)
You’ll want to ensure that you have all the information readily available so you can evaluate the experiment variations against the control. You want to be able to quickly access your original strategy to ensure that nothing is missed and there are no defects.
Once you have everything at hand and reviewed, a good practice is to walk through the experiment with your development team, so they can show you what was done and catch any quick fixes.
When you have all the information you need, it’s then the time to set up your “soft launch” environment.
Soft launch your experiment.
To QA, you want to “soft launch” the experiment so that you will experience the variations just as a user would in a live environment.
A soft launch environment allows you to fire click events and pageviews to ensure they are registering in your experimentation tool.
Your soft launch may require that you set audience parameters like IP targeting, a test cookie, and a query parameter, so make sure you know the ins and outs of soft launching in your experimentation tool.
“Familiarize yourself with your experimentation tool,” advises Neil Lim.
For example, when you open your experimentation tool to conduct your QA, you want to check:
- The pages – Make sure your experiments are targeting the right pages and/or URLs.
- The metrics – Ensure that your goals and events have been added.
- The integrations – Double check that your analytics tool is connected.
- The traffic allocation – Unlike live experiments, you want to allocate 100% of the traffic to the variation that you are checking.
- The audiences – Set up a QA audience, rather than a live audience, for your “soft launch.”
Once you have established a “soft launch” environment, first check that your goals and events are firing correctly.
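The pre-launch checks above can be expressed as simple assertions against your experiment's configuration. This is a hypothetical sketch: the config keys and the `soft_launch_issues` helper are illustrative, not part of any specific experimentation tool's API.

```python
# Hypothetical soft-launch sanity checks; keys and values are illustrative,
# not tied to any specific experimentation tool's API.
def soft_launch_issues(config):
    """Return a list of problems that should block a soft launch."""
    issues = []
    if not config.get("target_urls"):
        issues.append("No target pages/URLs configured")
    if not config.get("goals"):
        issues.append("No goals or events added")
    if not config.get("analytics_integration"):
        issues.append("Analytics integration is not connected")
    # For QA we want 100% of traffic on the variation under review.
    if config.get("traffic_allocation") != 100:
        issues.append("Traffic allocation must be 100% for QA")
    if config.get("audience") != "qa":
        issues.append("Audience must be the QA audience, not a live audience")
    return issues

config = {
    "target_urls": ["https://example.com/pricing"],
    "goals": ["cta_click"],
    "analytics_integration": True,
    "traffic_allocation": 100,
    "audience": "qa",
}
print(soft_launch_issues(config))  # an empty list means you are clear to soft launch
```

Even if you never automate this, writing the checks down in this form forces the team to agree on what "ready to soft launch" actually means.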
Cross-device, cross-browser quality assurance
When you are conducting QA on an experiment, you want to work from the broadest experience down to the most specific segments, checking every device and browser the experiment can affect.
There is a three-step process for getting the most out of your QA efforts, starting with the evaluation of the overall experience and working your way to QA for even those edge cases.
If your experiment covers both mobile and desktop devices, we recommend following a three-step process:
- Overall experience: First, you want to check the experience on your Mac and PC desktop computers. When you QA at this stage, you are looking at the overall experience—how it looks and how it works—to catch any glaring issues. This QA will deal with any bugs that will affect the largest segment of your visitors.
- Cross-device QA: Next, you want to check how your experiment responds to different devices. Check your variations against the control on mobile phones and tablets, in both landscape and portrait orientations, to validate that your experiment is ready to launch.
- Cross-browser QA: Within your different devices, evaluate your experiments to ensure that your user will not experience any bugs or glitches when they use a different browser. Open up Chrome, Firefox, Safari, Internet Explorer, and Edge to see how your variations appear.
The process of website quality assurance should be rigorous in order to provide the same quality of user experience across all devices and browsers.
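The three stages above amount to a test matrix. As a sketch, you can enumerate that matrix up front so nothing gets skipped; the specific devices and browsers listed here are illustrative, and you should substitute the ones your analytics show your visitors actually use:

```python
# Sketch: generate a cross-device, cross-browser QA matrix.
# Device and browser lists are illustrative placeholders.
from itertools import product

stages = {
    "overall":       (["Mac desktop", "PC desktop"], ["Chrome"]),
    "cross-device":  (["iPhone portrait", "iPhone landscape",
                       "tablet portrait", "tablet landscape"], ["Chrome"]),
    "cross-browser": (["Mac desktop", "PC desktop"],
                      ["Chrome", "Firefox", "Safari", "Edge"]),
}

# Each entry is one QA pass: (stage, device, browser).
matrix = [
    (stage, device, browser)
    for stage, (devices, browsers) in stages.items()
    for device, browser in product(devices, browsers)
]
print(len(matrix))  # 14 passes for this illustrative setup
```

Printing the matrix as a shared checklist also makes it easy to split passes across team members without duplicating effort.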
Don’t be fooled by an apparent edge case.
To be data-driven means challenging your assumptions, and in website quality assurance, you want to review those assumptions. If you’ve found a live bug, prioritize the fix by reviewing which visitor segments will be most affected.
One common mistake is to underestimate a segment by relying on the percentage of users on a particular browser, instead of the total number of visitors affected.
If 5% of your audience is using Internet Explorer or Edge, your team may decide that this segment’s experience is not as important as the other 95%, especially with the extra steps involved in optimizing for these browsers.
But when you consider the total traffic of your site, 5% may actually be a large number of visitors.
Minimize confusion by making this differentiation between percentages and the number of users affected when you communicate a live bug.
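The arithmetic is simple but worth making explicit. With illustrative traffic numbers:

```python
# Sketch: a "small" browser share can still be many visitors.
# Traffic figures are illustrative, not real benchmarks.
monthly_visitors = 400_000   # total monthly traffic (hypothetical)
edge_ie_share = 0.05         # 5% of visitors on IE/Edge

affected_visitors = int(monthly_visitors * edge_ie_share)
print(affected_visitors)  # 20000 visitors per month see the bug
```

"Five percent" sounds ignorable; "twenty thousand visitors a month" does not, which is exactly why communicating absolute numbers minimizes confusion.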
A quick note on page load times
To ensure both a functional and delightful experience, your team will need to review the visual appearance of your site. The first thing you should make note of is your page load times.
“Always compare the page load times between your control (which is your current live website) versus your variations. You want to control your experiments by understanding all the variables within the experiment,” Simon Cho recommends.
According to Google’s benchmarking research on mobile page load speeds, 53% of mobile visitors will leave the page if the load times are over 3 seconds.
Especially in client-side experimentation, you want to ensure that the page flicker is nearly undetectable to an untrained eye.
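As a sketch of Simon's advice, you can sample load times for the control and each variation and compare them directly. The timing values below are hypothetical; in practice you would collect them from your browser's developer tools or a real-user-monitoring tool:

```python
# Sketch: compare sampled page load times between control and a variation.
# Timings (milliseconds) are illustrative examples.
from statistics import median

control_ms = [1800, 1950, 2100, 1850, 2000]
variation_ms = [2600, 2750, 2500, 2900, 2700]

# How much slower is the variation than the current live site?
delta = median(variation_ms) - median(control_ms)
print(delta)  # 750 ms slower at the median

# Flag the variation if it pushes the page past the 3-second mark
# cited in Google's mobile benchmark research.
too_slow = median(variation_ms) > 3000
print(too_slow)
```

Using the median rather than a single measurement keeps one slow network hiccup from skewing your comparison.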
Make sure your experiment is looking good.
In the post, “A tactical guide to creating an emotional connection with your customers,” I explained Donald Norman’s three levels of an emotional design:
- The visceral level, which focuses on the look-and-feel of the design (the first impression);
- The behavioral level, which focuses on the pleasure derived from the design’s usability;
- And the reflective level, which contemplates the entire experience with the design in order to make a judgment.
These three levels also tie into an effective website quality assurance process, because you are assuming the user’s perspective. With fresh eyes, you want to ensure you are making the best first impression by checking the look-and-feel of the design. You’ll:
- Cross-reference your branding guidelines;
- Review the style and display, making sure that the experiment variations match the original wireframe design;
- Check all visual elements like videos (do they play?) and images (are they high-resolution?);
- And note that the font style and size are consistent across the website.
You will also need to read through the messaging and content of your website, ensuring:
- The accuracy of information on the pages;
- And the grammar and spelling of your copy and content.
Finally, check any responsive design elements, ensuring that when you scale up or down in viewport size that your user won’t experience any complications.
Test the functionality of your online controlled experiments.
The functionality of your site references Norman’s behavioral level. When it comes to your user’s behavior on the site, you want to make sure there are no points of unintended friction. You need to:
- Make certain all your calls to action work as expected;
- Click on all your hyperlinks, internal links and anchor links to see that they scroll to the correct sections or point to the right destination;
- Correct any orphan or dead-end pages;
- Ensure your navigation is clear and in working order;
- Confirm that your forms work and that the information registers in your database accurately.
Once you have completed all the steps noted above, it’s time to communicate any bugs, errors or glitches to your team.
Safeguard against misfiring goals.
The purpose of website quality assurance is to make sure your experiment launches properly and that there are no issues that can potentially invalidate your results.– Neil Lim
One of the biggest issues in QA’ing experiments is misfiring goals and poorly tracked data. As you complete the cross-device, cross-browser QA, making sure that your site looks and acts in the way that you planned, make sure that your goals are firing appropriately.
Every time you check a page in your variation, make sure to take the action you are measuring as the conversion (clicks, visits, custom events) and check the results to see if the action has been recorded in your experimentation tool.
When you QA, use a new incognito window or tab so that you start with fresh cookies and cache. That way, you act as a new user every time you open a variation and can verify that your goals are firing.
You can also use the browser’s “inspect element” function on the page to double-check that your tracking scripts are properly connected to your experiments.
At this stage, it makes sense to also double check your integrations. Don’t neglect to see if your website activity is registering in your analytics tool.
Communicate efficiently when you have a bug.
When you are experiencing a live bug, your team might be all hands on deck to get it resolved.
Your development team may be reviewing their code, your project managers may be documenting the issue, and your strategists may be trying to understand the impact of your bug on the user experience.
All the necessary information needs to be communicated clearly and efficiently with internal stakeholders. With an urgent live bug on the website, you need to balance the stress your team will be experiencing with the need to follow a standardized procedure.
That’s why it is important to ensure your team follows the communication procedure at all times so that when there is a crisis, your team won’t have to loop back to get the necessary information. Your communication process will be business as usual.
Your key to clear communication in QA
When you encounter a bug in your website quality assurance, you’ll want to answer these five questions in your communication with your team:
- What is the bug?
- How can I find the issue?
- In what environment was the issue found?
- What is the severity of the bug?
- What does the bug look like?
Provide a clear description of the bug, including what the user will see or experience on your website, and if the bug is live or not.
In general, anyone reading the description of the bug should understand what the bug is and why it’s a bug before reviewing any of the following questions.
Replicate the process you underwent to find the bug to guide your developer to the area on the website. You want them to easily find the issue by following your instructions step by step.
For instance: when you were QA’ing on an iPhone X, you flipped the screen to landscape orientation, and the menu dropdown opened by default 100% of the time when it should not have.
Note the device and browser you were using when you uncovered the bug. For instance, you were using an iPad Pro on the Chrome browser in landscape orientation.
Determine the business impact of your bug to help guide the prioritization process, just as you would for an experiment hypothesis. You want to evaluate the severity of the bug based on its potential to affect conversions, the importance of the area where the bug is located, and the ease with which it can be resolved.
If your bug is live, meaning users are likely to be experiencing the issue, you’ll need to start resolving the issue immediately. You may need to allocate more resources to those more complicated fixes that can affect your revenue than to the simple stylistic fixes in your funnel.
For example, a font two points smaller than your brand guidelines dictate may be less important to fix than a bug in your checkout cart that prevents your user from seeing your product’s sizes and colors.
Document the bug visually using screenshots or video if it deals with usability or animation.
As well, communicate in the way that your development team will understand. If your bug is affecting the color of a design, make sure to include the hex code from your branding guidelines to make the resolution process as straightforward as possible.
A standardized process for communicating bugs helps to create a historical document where you can understand the areas on your website that are prone to bugs. As well, it can provide a record of issues so that you can recreate the process of resolving these issues.
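As a minimal sketch, a bug report that answers the five questions above can be captured as a structured record. Every field value below is a hypothetical example:

```python
# Sketch of a bug report answering the five questions above;
# all field values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class BugReport:
    description: str          # What is the bug?
    steps_to_reproduce: list  # How can I find the issue?
    environment: str          # In what environment was the issue found?
    severity: str             # What is the severity of the bug?
    screenshot: str           # What does the bug look like?
    is_live: bool = False     # Are users currently experiencing it?

report = BugReport(
    description="Menu dropdown opens by default in landscape orientation",
    steps_to_reproduce=[
        "Open the homepage variation on an iPhone X",
        "Rotate the screen to landscape",
        "Observe the menu dropdown opening without a tap",
    ],
    environment="iPhone X, Safari, landscape",
    severity="high",
    screenshot="menu-dropdown-landscape.png",
    is_live=False,
)
print(report.severity)
```

Because every report has the same fields, your log of past bugs doubles as the historical record described above: you can filter by environment or severity to see where your site is most fragile.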
Rounds of QA
Because your website quality assurance process is preventative medicine for those nasty bugs, you may have several rounds of QA before the experiment launches. At this stage, you will need to loop back to your development team and communicate all your initial findings.
“There’s a lot of moving parts involved in experimentation. In order to stay agile and maintain velocity, it’s critical that cross-functional teams utilize their time efficiently,” articulates Aziza Farah, Optimization Coordinator at WiderFunnel.
“The best way to minimize the number of rounds required in your QA process is to communicate effectively. Be thorough with your bug descriptions and provide clear visuals.
“Aim to communicate all of the necessary information at one time so that your communication (and your development team’s effort) isn’t staggered.”
If it is not possible to sit face-to-face and walk through your QA findings, make sure that you have included all the necessary documentation when you pass along the bugs so that no time is wasted.
When your bugs are resolved in the development process, there is always the potential that the code will break something else on your website. You will need to conduct another QA round in order to ensure everything is ready for the user to experience.
Don’t take shortcuts when it comes to UX.
According to Donald Norman, the user’s final judgment of your website happens at the reflective level: Was the experience delightful and frictionless? Was the brand worthy of my trust?
That’s why there are no shortcuts when it comes to creating delightful user experiences.
The only way to ensure that your experiments go off without a hitch is a systemized process for website quality assurance.
You know you have a good QA team when their job is going unnoticed by your executives and your users.– Simon Cho
By systemizing the complex procedure of website quality assurance using a defined process and a custom checklist, you will enable teams across your organization to conduct website quality assurance with the same rigor.
What are your pain points in standardizing a website quality assurance process? We’d love to hear your thoughts in the comment section.
6 mistakes that will derail your testing program (and how to not make them)
In this free 70-page guide, we describe 6 of the worst mistakes optimization teams make in their design of experiments, and we show you the right approach to take. Get your guide now.