Online testing: The ultimate customer research

Reading Time: 5 minutes

I may have lied.

I was young. I had no money. I wanted to be cool. And to get some free food.

So, as a marketing intern, I “customized” my answers to their profiling questions to what I thought they were looking for.

There’s no harm done, after all, I thought. This behemoth video-game company can afford to include me in their focus group, even if I’m not really a hard-core gamer they’re hoping to hear from.

Plus, they’re offering free pizza and a video game. That’s, like, a $70 value!

So, I find my way to their boardroom with its long table surrounded by similarly pimpled kids. The huddle of marketers at one end eagerly scribble notes as we discuss game features.

I don’t remember what I said. I think I made a point or two that seemed to be what a hard-core gamer would say. Good enough to get my pizza and video game, anyway. But, probably useless for the video game producer.

Later in my marketing career, the tables were turned. I created focus groups to evaluate advertising messages for our ad agency. We traveled across the country to pull in small groups of target audiences and stood behind one-way windows saying things like “Aha” and “That’s interesting…”

I wondered how much we should act on that input from a couple dozen people. If one person makes a comment about our print ad, and no one in the room disagrees, does that mean they all feel the same way? And, would our entire target audience of hundreds of thousands agree?

Now I also wonder if they were as qualified to be there as I had been in my video-gaming days. Were they just there for the pizza?

The Hidden Risks of Qualitative Research

There are several reasons why using qualitative feedback alone can lead to misleading findings.

The Hawthorne Effect

The act of observing a thing changes that thing. When people know they’re being observed, they may be more motivated to complete the action. For example, participants in a usability test who know you’re looking for problems may be more motivated to find them, whether or not those problems are actually important.

The Hawthorne Effect in action

Observer-Expectancy Effect

Researchers in one-on-one research studies can unintentionally influence participants and change the results. This is similar to the Hawthorne effect. It’s very difficult to avoid communicating with subtle verbal and nonverbal cues that direct the user.

Limited Sample Sizes

The feedback you get is only valid for the small number of people you’re testing. Although you can often gain valuable insights from a small number of users, you don’t know which ones are valuable. And doing qualitative testing, like usability studies, with large sample sizes is generally cost-prohibitive.
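To see just how little a small sample can tell you, it helps to put a confidence interval around it. The sketch below (illustrative numbers, not from the article) uses the standard Wilson score interval to show the plausible range of the true rate when 3 of 5 usability-test participants hit a problem:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for an observed proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - margin, center + margin)

# Hypothetical study: 3 of 5 participants struggled with checkout
low, high = wilson_interval(3, 5)
print(f"True rate could plausibly be anywhere from {low:.0%} to {high:.0%}")
```

With n = 5, the interval spans roughly 23% to 88% — consistent with anything from a minor annoyance to a near-universal blocker, which is exactly why small-sample findings need quantitative validation.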

Don’t misinterpret small samples

Sampling Bias

The first of two selection biases, sampling (or user-selection) bias occurs when there’s a mismatch between your actual customers and the criteria you use for selecting study participants. You may have an inaccurate view of your real customers, or you may skew toward the segments that are easier to recruit into the study. For example, by studying me as a 14- to 24-year-old video-gamer, the producer may miss out on the growing and lucrative segment of 25- to 44-year-old females, who will clearly have very different interaction styles and needs.

Self-Selection Bias

Self-selection bias is a significant problem when users volunteer to be in a study. This is the error I introduced as a young focus group participant. You’ll also never be able to study people who don’t want to participate in studies. Look out for your customers’ real motivations for giving feedback.

Preset Goals Creating an Artificial Scenario

In usability testing, the researcher sets predefined goals for the user to attempt to accomplish and then monitors their success or failure and points of difficulty. Probably the most fundamental limitation of this type of user testing is that the scenarios are artificial. The task you choose for the user may not be the task a typical user would choose. It’s also not likely to be relevant to the particular user you’re testing. In other words, you’re asking a person to imagine and act as if they were the type of person who wanted to accomplish the task you want them to. That’s asking a lot!

Mismatched motivations?

Limited Imagination

Qualitative feedback can tell you about possible miscommunications, interface problems, and technical errors, but it usually doesn’t generate ideas about the layouts, content, and value propositions that would be more persuasive. Steve Jobs was once quoted in BusinessWeek saying, “It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them.” Your customers don’t know how you can motivate them, and they can’t tell you which landing page design will work best for them, either.

All of these errors and biases limit the value of qualitative feedback.

Is all Qualitative Testing Useless?

Focus groups, qualitative surveys, and usability testing can be useful for gathering feedback from your visitors. They can lead to valuable hypotheses to test. Traditional usability testing, for example, is valuable in exploring a variety of scenarios quickly, gaining immediate interactive feedback, and developing hypotheses that could not otherwise have been predicted.

But when used in isolation, these aren’t good tools for website decision-making. This is not conversion optimization.

The potential insights generated need to be validated through controlled testing. Blindly following qualitative findings without verifying using controlled A/B/n tests can lead to dangerous mistakes.

Conversion Optimization Requires Quantitative Testing

I still come across people regularly who say they’re doing website testing. When I probe into their methods, often they have run user testing or have customer panels who have opted-in to be surveyed. I’ll repeat: this is not conversion optimization!

By all means, incorporate qualitative methods into your marketing system. Then, use the input from them to generate better hypotheses for split testing.

The scientific method of marketing starts with formulating questions to ask of your visitors. If you can use these qualitative studies to help you develop better hypotheses, that’s great! That’s the role they should play.

Some of the most dangerous traps marketers face are jumping to conclusions without data, acting too early with limited data and misinterpreting data. A reliable scientific testing system avoids those traps and helps you make proven marketing improvements.

What do you think? Add your comments below.


  • Great article Chris! I have discussions about exactly this topic with online usability companies and our clients on almost a daily basis. I always tell them we will need to test any work that comes out of a usability study (which often only has a sample size of around 5 people…..).

    A lot of companies here in New Zealand still only incorporate usability as part of a website build project, allocating significant budgets to it, and do not see CRO sitting in that same field (yet). CRO tends to come in after a site has been rebuilt. How's that in the US? Most likely there is still a lot more education to be done here.



    • CRO is now an important strategy for many companies in the US, but there's still a lot of education needed.

      Many marketers still think of only user testing when you say "website testing." These errors and biases are not well-known.

      Thanks for helping with the industry education down there, Cornelius!

      • Thanks for the encouragement and your own ongoing commitment to educate the global market. I'll keep you posted on how things are developing here. We are the leading agency in NZ and soon to expand into Australia as well.

  • Great post Chris

    You have nicely illustrated the inherent problems with focus group and other qualitative research.

    As you point out, qualitative research is okay for generating ideas for testing but don’t count on them to produce very accurate results.

    Accurate results will only come from testing real customers (or visitors) taking real actions in real settings.

  • Niko

    Great post!
    Qualitative research can be done effectively, but the focus should be entirely different from a quantitative approach, and it might not be suitable for basic website usability testing (although, as you said, you might get priceless insights).

  • Loved this article! I agree that qualitative and quantitative research do not stand alone, but each need the other to build the most accurate big picture and direction for the future. I'm keeping this article on hand to share with my team!

    • Great! I'd love to hear your team's feedback, Rebecca.

  • Sikaar

    I totally agree with this article and strongly recommend reading Clotaire Rapaille's book The Culture Code, which deals a lot with the side effects described in the article.
    The interesting point is how he gets around these biases, not with a quantitative solution (which can be properly applied only on the web, and hardly for brick-and-mortar businesses) but with a different approach to focus groups.
