Featured in: FMQB Programming To Win
Date: March 3, 2017

As a researcher, I found this past election fascinating and sobering. We at Paragon have seen first-hand how reaching a random sample (one in which everyone has the same chance of being selected as everyone else) has become more and more difficult. With the proliferation of cell phones and the deluge of telemarketing, the old tried-and-true method of randomly calling telephone numbers became problematic, not to mention expensive. According to Larry Gerston, professor emeritus of political science at San Jose State, "It used to be that 1 out of 3 people would answer. Then 1 out of 10," he said. "Now it's fewer than that." New models of reaching households with no listed phone and/or only cell phones need to be refined and perfected.

My academic background is in political and public opinion research. The beauty (or horror) of public opinion research is that there is a day of reckoning: the election. The infamous "Dewey Defeats Truman" headline of 1948 had a modern-day resurrection with the election of Trump. It's been fascinating to hear the pollsters explain why they called the election wrong.

Yes, there is a margin of error: at the standard 95% confidence level, five times out of 100 the true result will fall outside the margin of error, and the margin of error usually ranges from plus or minus two to four percent depending on the number of people sampled. National elections since 2000 have been extremely close. In some respects it's impressive that pollsters have done as well as they have, given the razor-thin margins of victory. I've even heard certain pollsters say that this year they didn't really get it wrong, given the margin of error. That's a hard sell as Trump is sworn in, especially since almost every major poll called the outcome incorrectly.
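As a rough illustration of where that two-to-four-point range comes from (a textbook approximation, not anything specific to the polls discussed here), the margin of error for a proportion near 50% at 95% confidence can be sketched as:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sampled proportion.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: z-score for the confidence level (1.96 for 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Roughly the 2-4 point range the article mentions:
print(round(margin_of_error(2401), 3))  # 0.02 -> plus or minus 2 points
print(round(margin_of_error(600), 3))   # 0.04 -> plus or minus 4 points
```

Note how the sample size must roughly quadruple to cut the margin of error in half, which is part of why doing it right gets expensive.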

Although Clinton won the popular vote, the Electoral College handed the election to Trump. That means polling must be accurate within each state of consequence, and more states turned out to be consequential than the pundit/pollster class envisioned.

And, oh, one other thing about the most recent Presidential race polling: if people don't want to tell you what they're going to do, you can't poll them. The best example of this was the 1982 California gubernatorial election, the source of what is now called "the Bradley Effect." Tom Bradley, the mayor of Los Angeles and an African-American, was favored by at least 5%. He ended up narrowly losing the race when the absentee ballots were counted, so the results were well outside the margin of error. The Bradley Effect posits that the polling was inaccurate because people who wouldn't vote for a Black man wouldn't admit their racist proclivities to a live person on the phone. Surveys done by mail often reflect election results better than telephone surveys when people feel their deep-seated fears and/or prejudices may be revealed. (Nonetheless, today too many Americans are hurting; America apparently needed a shakeup.)

Back to the difficulty of obtaining a random sample in this age of cell phones and telemarketing: you have to have enough people in your sample for it to be reliable; reliable means that if you were to do the poll all over again, the results would fall within the margin of error 95 times out of 100. Political polls "weight" for factors like likelihood of voting, age, gender, race, and education/social class, as well as by state. For each one of those factors, one really needs a reliable number of people from which to make observations. At Paragon, when we draw a sample, we look at age, gender, race, and geographic location. We have an aversion to weighting. In the few times we do weight, we make sure there is a reliable number of people in each factor "cell" being analyzed: no fewer than 30, and preferably closer to 50, people in each cell.
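To make the weighting idea concrete, here is a minimal sketch with hypothetical numbers (not Paragon's actual procedure): each cell's weight is its population share divided by its sample share, and a cell below the minimum size gets flagged rather than silently weighted up.

```python
MIN_CELL_SIZE = 30  # the floor recommended above (closer to 50 preferred)

# Hypothetical sample counts and known population shares by age cell
sample_counts = {"18-24": 25, "25-34": 120, "35-44": 150, "45-54": 105}
population_share = {"18-24": 0.15, "25-34": 0.30, "35-44": 0.30, "45-54": 0.25}

total = sum(sample_counts.values())
for cell, n in sample_counts.items():
    sample_share = n / total
    weight = population_share[cell] / sample_share
    flag = "" if n >= MIN_CELL_SIZE else "  <- too few people to weight reliably"
    print(f"{cell}: n={n}, weight={weight:.2f}{flag}")
```

In this sketch the 18-24 cell would need a weight of 2.4 to match its population share, but with only 25 respondents it falls below the 30-person floor, which is exactly the "weighting up" the article warns against.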

Too often we've seen other research companies weight up cells out of economic necessity; i.e., it costs too damn much to do it right. There have been egregious uses of weighting, and we can only wonder whether this cost-cutting added to the inaccuracy of the 2016 Presidential polling predictions. Over the past two decades, Paragon has pioneered a combination of online and telephone samples, and we've run tests to establish that there is no statistically significant difference between the two methodologies.

For radio research, the moment of truth comes not with an election but with the issuance of the ratings currency. We at Paragon have been highly critical of the Nielsen Portable People Meter (PPM) sample. Nielsen simply does not have enough people sampled to make reliable observations in even the broadest of categories. We've seen core audience constituencies change radically from month to month (that's not reliable) and wild swings in results (e.g., Weekdays 10a-3p among Women 25-54) when nothing has changed in the market. So we have an unreliable yardstick by which to measure the effectiveness of any research.

The only true way of obtaining a random sample is to randomly pick households within census blocks and do everything humanly possible to have each household participate in the survey. The National Institutes of Health (NIH) does this. First, they see if the household can be reached by telephone (listed, unlisted, and/or cell). If that doesn't work, they send a letter. As a last resort, they'll knock on the door and try to do an in-person interview. Nielsen's predecessor, Arbitron, was exploring this methodology, and Nielsen professes to use some semblance of this sampling.

NIH's method is extremely costly, and it is a cost most radio operators cannot shoulder. So we receive the expedient sampling Nielsen now provides. That kind of sampling becomes even more disturbing in a fragmented media landscape where audiences have a myriad of platforms from which to receive their audio programming.

Radio operators need to know what their target publics want and think about their programming. By comparing methodologies internally at Paragon, we feel we're providing clients with quality research from which they can reliably make programming and marketing decisions.
