5 Lessons App Marketers Can Learn From Political Pollsters 

Over the years, political pollsters have made many mistakes and adapted their measurement techniques. Like political pollsters, app marketers make mistakes when measuring their user acquisition campaigns. In this article, Mike Ng, chief revenue officer, Digital Turbine Media, discusses a few lessons app marketers can learn from political pollsters.

App marketers, like political pollsters, are keenly interested in incrementality — the individual stimuli that can lead to a vote or an install. They know that certain channels, like social media, will deliver a certain number of installs because they always have, just as pollsters know that certain states will lean towards a certain party.

And yet, we also know that it’s easy to miss the mark in political polling. States swing in unexpected ways; others are much closer than expected. Over the years, political pollsters have made spectacular mistakes, but those errors prompted them to analyze what went wrong and adapt their measurement techniques going forward.

User acquisition (UA) campaign managers make similar mistakes. The good news is that those mistakes are fixable, and marketers can look to political polling to learn a few things:

Lesson #1: Know the Big Picture

Up until the middle of last decade, pollsters called people on their landlines, and this “sample” was thought to be representative of our collective intent. To pollsters’ great embarrassment, however, the results of those polls were skewed. Many young people don’t have landlines, others vote early, and many more vote by mail. Cell phone users and mail-in voters turned out to be critical voting blocs, and the failure to adjust methodology adequately made it difficult for pollsters to predict outcomes and turnout.

In terms of UA campaign outcomes, most marketing managers look at results using standard metrics: day 1, 3, and 7 activity. That’s akin to polling via only one channel: landlines. If you evaluate an outcome based on those metrics alone, you’ll miss the channels that deliver new users effectively but over longer timeframes, say 30 or 60 days. And unlike political campaigns, which reveal a definitive winner on election day, the ultimate outcome of a UA campaign is easy to miss. You need to look for it outside of your standard metrics (more on that below), as the sketch below illustrates.
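To make the gap concrete, here is a minimal sketch in Python of how a cohort can look weak through a day 1/3/7 lens yet strong at day 30 and beyond. The users, dates, and activity here are entirely hypothetical assumptions for illustration:

```python
from datetime import date

# Hypothetical cohort: user -> (install date, dates the user was active).
# All names and numbers are invented for illustration.
cohort = {
    "user_a": (date(2021, 3, 1), [date(2021, 3, 2), date(2021, 3, 8)]),
    "user_b": (date(2021, 3, 1), [date(2021, 4, 15)]),  # converts around day 45
    "user_c": (date(2021, 3, 1), [date(2021, 3, 2)]),
}

def active_at_or_after(day: int) -> float:
    """Share of the cohort with any activity `day` days or more after install."""
    hits = sum(
        1
        for installed, activity in cohort.values()
        if any((seen - installed).days >= day for seen in activity)
    )
    return hits / len(cohort)

# Early windows look bleak, but longer windows reveal the late converter.
for day in (1, 3, 7, 30, 60):
    print(f"active at day {day}+: {active_at_or_after(day):.0%}")
```

A channel whose users behave like user_b would be written off by day-7 reporting alone, even though it is quietly delivering value.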

Lesson #2: Measure the Right Thing

Just because someone says they favor a particular candidate doesn’t mean they’ll actually go out on election day to vote. So while respondents may say they favor a particular candidate, savvy pollsters will dig deeper, asking which issues matter most to respondents and how they rate each candidate on those issues. In other words, pollsters have learned the hard way that a variety of actions and attitudes impact, or can change, voting behavior.

The same is true of UA campaigns. Multiple behaviors and interactions lead to a specific outcome. If you rely on just the final install and attribute it to one specific event, you will draw incomplete conclusions.
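One way to see the difference is to compare last-touch attribution against a simple multi-touch model. The sketch below, with a hypothetical journey of invented touchpoint names, shows how much conclusions shift depending on which model you choose:

```python
from collections import defaultdict

# One hypothetical journey: every touchpoint a user saw before installing.
journey = ["preload_placement", "banner_ad", "playable_ad", "app_store_search"]

def last_touch(touchpoints):
    """Give the final event before the install all of the credit."""
    return {touchpoints[-1]: 1.0}

def linear_multi_touch(touchpoints):
    """Spread install credit evenly across every event in the journey."""
    credit = defaultdict(float)
    for touch in touchpoints:
        credit[touch] += 1.0 / len(touchpoints)
    return dict(credit)

print(last_touch(journey))          # {'app_store_search': 1.0}
print(linear_multi_touch(journey))  # every touchpoint earns 0.25
```

Under last-touch, the preload and the playable ad earn nothing; under the linear model, each earns a quarter of the install. Neither model is “the truth,” but relying on only one hides the others’ story.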

Lesson #3: Update Your Benchmarks

Technology changes things, and the more conditions change, the more drastically your testing and benchmarks must change with them. Just as pollsters needed to update their metrics (asking about yard signs, campaign donations, volunteering with a phone bank) to determine the likelihood of a respondent actually voting, app marketers need to do the same.

All marketers have a formula for connecting ad campaigns to the lifetime value of users: if this ad gets X number of placements, it should deliver X% CTR, which will lead to X number of installs, ultimately leading to an LTV of X, achieved at a CPI of $X.XX.
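Worked through with numbers, that formula looks something like the sketch below. Every figure is a hypothetical assumption chosen only to make the arithmetic visible:

```python
# Every figure below is a hypothetical assumption, used only to walk
# through the standard placements -> CTR -> installs -> CPI -> LTV math.
placements = 1_000_000    # ad impressions bought
ctr = 0.02                # 2% click-through rate
install_rate = 0.10       # 10% of clicks become installs
spend = 25_000.00         # total campaign cost, in dollars
avg_ltv = 18.50           # expected lifetime value per user, in dollars

clicks = placements * ctr            # 20,000 clicks
installs = clicks * install_rate     # 2,000 installs
cpi = spend / installs               # $12.50 cost per install
margin = avg_ltv - cpi               # $6.00 of value per user above cost

print(f"installs: {installs:,.0f}, CPI: ${cpi:.2f}, margin: ${margin:.2f}")
```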

What these assumptions imply is that the customer journey is largely the same: awareness, consideration, purchase, or in the case of apps, see-try-use/play. But that’s a faulty assumption. While that model may hold for “paid” advertising, it’s less true in the world of “discovery-centric UA”, which has its own customer journey and therefore requires a different set of metrics and benchmarks.

Lesson #4: Differentiate the Mediums

This past year, citizens voted in record numbers, largely because mail-in voting was deemed safer and easier and gave people a longer window to vote. A change in the medium, in this case, had a huge impact on the election.

In UA campaigns, the ad unit drives the conversion journey; a banner ad will ignite a very different customer journey than a preload. Each unit therefore demands a different approach to measurement and optimization. For instance, playable ads generate more clicks and engagement, but what does that mean for the overall install rate or LTV?

We know that abandonment rates on mobile phones can be as high as 80%, an indication that there is a lot of friction in installing apps from the app store. Preloads eliminate that friction, but millions may not even see the app on their devices for 30 days or more … is that bad? What’s the best timeframe for judging the efficacy of various ad units?

The only way to answer these questions is to design tests around the ad, which brings us to the final lesson.

Lesson #5: Incrementality Is All About Testing New Things … and Testing Your Tests

Incrementality boils down to being smart about testing new things and accepting that new and different metrics will be required.
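A common way to test for incrementality is a holdout (lift) test: hold back a matched group of users from the campaign and compare install rates. The sketch below uses hypothetical group sizes and install counts, purely as an assumption-driven illustration:

```python
# A holdout (lift) test in miniature; group sizes and install counts
# are hypothetical assumptions for illustration.
treated_users = 100_000      # users exposed to the new ad unit
treated_installs = 3_000
holdout_users = 100_000      # matched users deliberately shown nothing
holdout_installs = 2_200

treated_rate = treated_installs / treated_users    # 3.0%
baseline_rate = holdout_installs / holdout_users   # 2.2%

incremental_rate = treated_rate - baseline_rate
incremental_installs = incremental_rate * treated_users
lift = incremental_rate / baseline_rate

print(f"incremental installs: {incremental_installs:,.0f}, lift: {lift:.1%}")
# incremental installs: 800, lift: 36.4%
```

The point of the holdout is that the 2,200 baseline installs would have happened anyway; only the 800 above baseline are truly incremental, and that is the number a new ad unit should be judged on.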

The hardest part for marketers is often abandoning their tried-and-true metrics. But the whole point of innovation is to invent something new, and all new things require a different set of metrics and benchmarks. Anything less is not incrementality; it’s just a mirage.

Pollsters learned this the hard way and suffered a severe blow to their reputation in the process. The inherent biases of UA measurement may sit outside of public view, but that doesn’t make the urgency of addressing them any less.