Mobile marketing optimization Q&A

7 min. read | Last updated: March 28th, 2018

Chris Goward was the keynote speaker at the [email protected] Summit on October 20th, 2014. His presentation on Mobile Marketing Optimization spurred a great question-and-answer period.

Here is how Chris defined mobile optimization during his presentation:

A lot of people think mobile is the difference between a desktop and a phone. Some people – if you are really advanced – are looking at tablets. This is not the reality. This is not mobile. The reality is there is an infinite number of devices now, and an infinite number of screen sizes and iterations. And they are changing all the time.

Stop thinking of mobile as a phone versus a desktop. Mobile is a state of being – it is a context.

Mobile is a verb – it is not a device.

Q&A recap

The transcript of the Q&A session is below:

Q: You have talked about having a team to do mobile optimization. But we have limited resources and in our case we just can’t have that many people working on it. Do you have any suggestions on what to focus on, and what to leave aside for a while?

A: You are right. Conversion rate optimization is a multi-disciplinary system, and it is very rare to find one or two people who have the skills to do all of it. You need to understand marketing, the customer, design, user experience, wireframing, and a lot of technical tasks as well. A lot of the time the technical stuff – such as how to speed up a page, or how to implement a test – is hugely important. Look at what you have internally: do you have designers or coders? Maybe pull a portion of their time away to focus on that.

You really need a conversion champion, who understands the process for continuous testing and building that knowledge base. That’s the most important part. And that’s what we have done. We have created conversion teams that have all the components – designers, coders, strategists and account managers. They are like outsource teams for companies.

Q: Should you re-test variations you used 6 or 12 months ago?

A: Yes, we do this on an annual basis. We will do testing for 12 months, then we will go back to last year and re-test our original winner. This validates that all the improvements throughout the year have been cumulative. And when we look at this, and do the calculation – for the past 24 months – they have been.

This is a really good idea to do, if for nothing more than to silence the critics who say, “Oh come on, a 24% lift, is that actually happening?” And it is – it works. But there are also seasonality nuances that you want to take into account, especially when you have a seasonal business. We are finding that during peak urgency periods the winners can change dramatically versus during lower periods, because people’s internal urgency is different, and their response to different messages changes.

So going back to test is worth doing. But give yourself some time and run through a bunch of tests, and then go back and validate. And if nothing else, this can be used as a tech check to make sure your statistical significance is working out.
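That tech check can be sketched in a few lines. Below is a minimal two-tailed, two-proportion z-test; the visitor and conversion counts are invented purely for illustration:

```python
import math

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-tailed z-test: is variation B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical re-test: original winner at 6.2% vs. old control at 5.0%
z, p = z_test_two_proportions(500, 10_000, 620, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.05 -> the lift still holds
```

If a re-test like this no longer comes back significant, that is a flag to investigate seasonality or a tooling problem before trusting the cumulative numbers.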

Q: Do you have any tips for dealing with HiPPOs (Highest Paid Person’s Opinion) who don’t want a test to run to completion, or don’t want to wait for accurate results?

A: So you are digging into one of the biggest problems. Organizational change is one of the biggest barriers to conversion rate optimization – especially with senior marketers who have never taken this approach, because it is a completely different way of thinking. Most marketers grew up in the gut-feeling, intuition school of marketing, where you find an insight and run with it. This is a different way of doing it – it is actually a merge of data and intuition. And so, yeah, you will have people watching the tests who are impatient. They want to just finish it, saying, “Why are you wasting your time with all this statistical mumbo jumbo?” The first thing you want to do is identify who is going to be a barrier, and who the champions and supporters are. Run some under-the-radar tests with the supporters, without the detractors knowing. Secret tests. Get some momentum that way. Once you’ve got some wins and some valid lifts, you have proof that this is an approach that can genuinely lead to improvement. So you might not start on the homepage, for example. Then shop those results around and share them.

Actually, there are a whole bunch of tactics – I have a blog post on ‘Nine Strategies for Becoming the Marketing Optimization Champion Your Company Can’t Live Without’. Do the lunches, start the skunkworks tests, create momentum, shop your results around, and build support. You’ve got to get one beachhead in senior management who is a supporter. Then they can fight your battles at the C-level; otherwise, yeah, you are outmanned.

Q: The problem for us is collecting enough sample size. You showed us all these tests, iterating and iterating again. If we were to do the entire cycle and measure actual revenue improvement, rather than the micro-conversion improvement, it would take us about 10 months. So any tips for people who struggle to get a large enough sample size for statistically significant results?

A: Traffic is the biggest barrier to conversion optimization – it is true. And there are few workarounds. But there are principles for lower-traffic tests:

  1. Run more dramatic changes. This doesn’t necessarily mean dramatic design changes; instead, run dramatic cognitive changes – for example, a value proposition that is quite a bit different.
  2. Run fewer variations. Run away from multivariate testing and only run two or three variations.
  3. Test on your high traffic pages with only a few variations. Then take those insights and apply them to your lower traffic pages.
  4. Become comfortable running longer tests. There is nothing wrong with having tests in market that go longer.


Q: Could you talk a little about the things that you would not test? At our office, everyone has ideas on what to test and we keep adding to the list. We have a testing road map for the next ten years!

A: There is an underlying question there. You want to test everything, and you have total support for testing, but you’ve got a road map for ten years – that means you don’t have the traffic to support your desire to test. And really, that’s unskilled testing. Skilled testing is looking at the traffic you’ve got, prioritizing the insights you want to gain from it, and testing what’s important.

Use the PIE Framework. This is a framework we use to prioritize our tests by Potential, Importance, and Ease. By doing this you will always be testing the most important questions for your company. If you generate insights and conversion lift, then it doesn’t matter how long your list is, because you will only be testing what is important.
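As a sketch of how PIE prioritization might look in practice – the pages and 1–10 scores below are entirely hypothetical:

```python
# Hypothetical candidates scored 1-10 on Potential (room to improve),
# Importance (traffic/value), and Ease (technical/political cost).
candidates = {
    "checkout": (8, 10, 4),
    "pricing page": (7, 8, 9),
    "blog landing": (9, 3, 10),
}

def pie_score(scores):
    """PIE score: simple average of the three factors."""
    return sum(scores) / len(scores)

# The highest-scoring page goes to the top of the testing road map.
roadmap = sorted(candidates, key=lambda page: pie_score(candidates[page]),
                 reverse=True)
for page in roadmap:
    print(f"{page}: {pie_score(candidates[page]):.1f}")
# pricing page scores 8.0 and gets tested first
```

The averaging scheme is one simple choice; the point is that a scored, sorted list replaces a ten-year wish list with a defensible next test.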

Q: Do you have a key takeaway for B2B online testing?

A: The goal is really important in B2B. You may be converting leads, and you need to look at the quality of leads. We do a lot of work for Magento Enterprise – not on the e-commerce platform, but trying to get people to sign up. And so, whenever we are testing, we want to track those leads through to quality at the call center to find out if the test generates more revenue or is just creating more lead volume. If you can, look at tracking phone numbers that go into the call center, straight through to opportunity and revenue value.

Also, B2B usually has lower traffic, so all the same principles of lower traffic testing as well.

Another tip is to stay away from interesting, but not revenue-generating, micro-conversions.

Q: Do you have any recommendations for testing tools for native apps?

A: There are a whole bunch of testing tools now, and several are pretty good. We have been using Optimizely a lot. It is pretty decent in features, and does a good job. Visual Website Optimizer is good as well. AB Tasty, in France, is a really good tool too – worth checking out. Google has Content Experiments – it’s okay. There are some higher-priced ones – Monetate, Maxymiser, Adobe, SiteSpect. There is a whole bunch, but those are the ones that are easier to get into. We always combine testing tools with analytics – whether it is Google Analytics or Adobe. You want to make sure you are doing the back-end analysis too.

Q: You were talking about how important segmentation is. If, for example, you get an A/B test result showing that your returning customers convert better than your new customers, what’s your next step?

A: Returning customers convert better than new customers – there is really nothing there you can work with. But if you run a test and see that returning customers convert better on variation B, and new customers convert better on variation C, now you have a potential insight about your customers – about the segments. And now you can start to drill into why: why do they convert better – even slightly – on the other one? You have a potential insight you can build on, create a hypothesis from, and create a test that is targeted at that potential insight.

After this you can build that hypothesis into a finding, which becomes a theory, which becomes something you can use predictively. And that’s when it gets really powerful.

For more information on segmentation, I have a blog post called ‘8 Steps to Amazing Website Segmentation Success.’

Q: You have talked about iteration a lot. How many tests would you run in parallel? For example, on an ecommerce site at different levels of the cart.

A: How many tests can you run in parallel? It depends on the traffic. You can run a lot of tests as long as you have the technical chops to keep the segments separate. Visitors will be pre-segmented before they get into the test, and then within the segments we will run parallel paths. But that’s for an ecommerce site with a lot of traffic – we are talking about half a million monthly visitors before you can start to get into that. With less than that, you usually want to stay at one or two parallel segments. You have got to be very careful, if you are doing that, to make sure there is no cross-pollution of tests, that all of the cells are completely separate, and that you don’t have any tests intermingling – because if they aren’t separate you will get really screwed-up results.
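One common way to keep cells separate is deterministic hash-based bucketing: pre-segment each visitor into exactly one experiment, then assign a variation only within it. The experiment names and two-way split below are hypothetical:

```python
import hashlib

EXPERIMENTS = ["cart_test", "checkout_test"]     # hypothetical parallel tests

def bucket(visitor_id: str, choices, salt: str) -> str:
    """Stable assignment: the same visitor always lands in the same bucket."""
    digest = hashlib.sha256(f"{salt}:{visitor_id}".encode()).hexdigest()
    return choices[int(digest, 16) % len(choices)]

def assign(visitor_id: str):
    # Step 1: each visitor belongs to exactly one experiment (no overlap).
    experiment = bucket(visitor_id, EXPERIMENTS, salt="experiment-split")
    # Step 2: assign a variation only within that experiment.
    variation = bucket(visitor_id, ["control", "variation_b"], salt=experiment)
    return experiment, variation

print(assign("visitor-123"))   # deterministic across visits
```

Because a visitor can only ever enter one experiment, no conversion event is counted toward two tests at once, which is what prevents the cross-pollution described above.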


Chris Goward

Founder & CEO
