Online testing: The ultimate customer research

5 min. read  | Last updated: June 1st, 2016

I may have lied.

I was young. I had no money. I wanted to be cool. And to get some free food.

So, as a marketing intern, I “customized” my answers to their profiling questions to match what I thought they were looking for.

There’s no harm done, after all, I thought. This behemoth video-game company can afford to include me in their focus group, even if I’m not really the hard-core gamer they’re hoping to hear from.

Plus, they’re offering free pizza and a video game. That’s, like, a $70 value!

So, I find my way to their boardroom with its long table surrounded by similarly pimpled kids. The huddle of marketers at one end eagerly scribble notes as we discuss game features.

I don’t remember what I said. I think I made a point or two that seemed to be what a hard-core gamer would say. Good enough to get my pizza and video game, anyway. But, probably useless for the video game producer.

Later in my marketing career, the tables were turned. I created focus groups to evaluate advertising messages for our ad agency. We traveled across the country to pull in small groups of target audiences and stood behind one-way windows saying things like “Aha” and “That’s interesting…”

I wondered how much we should act on that input from a couple dozen people. If one person makes a comment about our print ad, and no one in the room disagrees, does that mean they all feel the same way? And, would our entire target audience of hundreds of thousands agree?

Now I also wonder if they were as qualified to be there as I had been in my video-gaming days. Were they just there for the pizza?

The Hidden Risks of Qualitative Research

There are several reasons why using qualitative feedback alone can lead to misleading findings.

The Hawthorne Effect

The act of observing a thing changes that thing. When people know they’re being observed, they may be more motivated to complete the action. For example, participants who know you’re looking for usability problems in a user testing scenario may be more motivated to find problems, whether or not those problems are actually important.

[Image: The Hawthorne Effect in action]

Observer-Expectancy Effect

Researchers in one-on-one studies can unintentionally influence participants and change the results. This is similar to the Hawthorne Effect. It’s very difficult to avoid sending subtle verbal and nonverbal cues that direct the user.

Limited Sample Sizes

The feedback you get is only valid for the small number of people you’re testing. Although you can often gain valuable insights from a small number of users, you can’t know which of those insights generalize to your wider audience. And doing qualitative testing, like usability studies, with large sample sizes is generally cost-prohibitive.

[Image: Don’t misinterpret small samples]
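To see how little a handful of participants can actually tell you, here’s a minimal sketch (not from any specific study, with made-up numbers) that puts a 95% Wilson score interval around an observed proportion. The function name is my own for illustration.

```python
# A minimal sketch of why small qualitative samples are easy to over-interpret:
# a 95% Wilson score confidence interval around an observed proportion.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a proportion observed as successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    halfwidth = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - halfwidth, center + halfwidth

# "3 of 5 users had trouble" is consistent with anywhere from roughly 23% to 88%
# of the wider audience having trouble.
print(wilson_interval(3, 5))      # ≈ (0.23, 0.88)

# The same observed rate from 500 users narrows to roughly 56%–64%.
print(wilson_interval(300, 500))  # ≈ (0.56, 0.64)
```

In other words, when three of five test participants struggle with a page, the honest conclusion is “somewhere between a quarter and nearly all of our visitors might struggle” — a hypothesis worth testing, not a verdict.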

Sampling Bias

The first of two selection biases, sampling or user-selection bias occurs when there is a mismatch between your actual customers and the criteria you use for selecting study participants. You may have an inaccurate view of your real customers, or you may skew toward the segments that are easier to attract into the study. For example, by studying me as a 14- to 24-year-old gamer, the producer may miss out on the growing and lucrative segment of 25- to 44-year-old women, who clearly have very different interaction styles and needs.

Self-Selection Bias

Self-selection bias is a significant problem when users volunteer to be in a study. This is the error I introduced as a young focus group participant. You’ll also never be able to study people who don’t want to participate in studies. Watch for participants’ real motivations for giving feedback.

Preset Goals Creating an Artificial Scenario

In usability testing, the researcher sets predefined goals for the user to attempt to accomplish and then monitors their success or failure and points of difficulty. Probably the most fundamental limitation of this type of user testing is that the scenarios are artificial. The task you choose for the user may not be the task a typical user would choose. It’s also not likely to be relevant to the particular user you’re testing. In other words, you’re asking a person to imagine and act as if they were the type of person who wanted to accomplish the task you want them to. That’s asking a lot!

[Image: Mis-matched motivations?]

Limited Imagination

Qualitative feedback can tell you about possible miscommunications, interface problems and technical errors, but it usually doesn’t generate ideas about the layouts, content, and value proposition that would be more persuasive. Steve Jobs was once quoted in BusinessWeek saying, “It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” Your customers don’t know how you can motivate them, and they can’t tell you which landing page design will work best for them, either.

All of these errors and biases limit the value of qualitative feedback.

Is All Qualitative Testing Useless?

Focus groups, qualitative surveys, and usability testing can be useful for gathering feedback from your visitors. They can lead to valuable hypotheses to test. Traditional usability testing, for example, is valuable in exploring a variety of scenarios quickly, gaining immediate interactive feedback, and developing hypotheses that could not otherwise have been predicted.

But when used in isolation, these aren’t good tools for website decision-making. This is not conversion optimization.

The potential insights generated need to be validated through controlled testing. Blindly following qualitative findings without verifying them with controlled A/B/n tests can lead to dangerous mistakes.
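As a rough illustration of what that validation looks like, here’s a hedged sketch of a two-proportion z-test comparing the conversion rates of a control and a variation. The traffic and conversion numbers are invented, and the function name is mine; real testing tools handle this (plus sample-size planning) for you.

```python
# A minimal sketch of the kind of controlled check an A/B test relies on:
# a two-proportion z-test on conversion rates for a control (A) and a variation (B).
from math import sqrt, erfc

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                    # two-sided normal tail probability
    return z, p_value

# Hypothetical split test: 10,000 visitors per arm, 4.0% vs. 4.6% conversion.
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # compare p against a significance threshold set before the test
```

The point isn’t the arithmetic; it’s that the decision comes from behavior measured across thousands of real visitors, not from what a dozen people said in a boardroom.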

Conversion Optimization Requires Quantitative Testing

I still regularly come across people who say they’re doing website testing. When I probe into their methods, I often find they’ve run user testing or surveyed customer panels that opted in to give feedback. I’ll repeat: this is not conversion optimization!

By all means, incorporate qualitative methods into your marketing system. Then, use the input from them to generate better hypotheses for split testing.

The scientific method of marketing starts with formulating questions to ask of your visitors. If you can use these qualitative studies to help you develop better hypotheses, that’s great! That’s the role they should play.

Some of the most dangerous traps marketers face are jumping to conclusions without data, acting too early with limited data and misinterpreting data. A reliable scientific testing system avoids those traps and helps you make proven marketing improvements.

What do you think? Add your comments below.

Author

Chris Goward

Founder & CEO
