Chapter 2: How to Tell a Good Poll from a Bad Poll

By Alan F. Kay, PhD
© 2004, (fair use with attribution and copy to author)
June 16, 2004

So, you would like to spot the spin.  You want to know how to tell a good poll from a bad poll and to see examples of the confusion caused by misleading polls.  Well, why not?  We all need a little help to tell the good from the bad, and there is much more to be said about good and bad polling than the media – newspapers, news magazines, radio, the Internet and TV – ever tell us.

I have devoted 15 or 20 years of my life to non-profit polling in the public interest, but I’ve never once been involved in the polling business commercially.  This book is about what I learned from the inside of U.S. politics.  You are about to become an expert in detecting bias – whether from politicians, bureaucrats, company public relations flacks, academics, media pundits or pollsters.  You will become a master at spotting spin-masters and their spin.

My involvement in public interest polling started long ago.  In 1946, I was a 20-year-old interpreter in the U.S. occupation forces, having learned Japanese in the Army at the University of Minnesota.  In Tokyo, I did “man on the street” interviews for Gen. Douglas MacArthur that provided him with feedback that contributed to the remarkable reconstruction of Japan.  In 1945, Japan was a destroyed totalitarian dictatorship.  In a few years, it became a free, democratic, market-loving country.  Do I believe in the value of good polling to help transform a country for the better?  You bet.

What is a good poll and what is a bad poll?  Some people take a narrow view.  To them, a good poll – one they see in the paper or hear over the air – is a poll that pleases them.  They like it even better if a large majority agrees with them.  A bad poll is one whose findings they don’t like and prefer not to know or think about.  We can do better!

In this book, a good poll is one where the sponsor (the one who pays the freight) is honestly trying to find out — and trying to let the public know — what the public itself thinks and wants, not what the public thinks and wants as explained by the pundits – those talking heads on TV who seem to know everything.  A public interest poll truly has the public’s interest at heart, unlike a poll commissioned by a public relations firm or by anyone trying to push a particular product, cause or political viewpoint.

A bad poll is one where those paying for it want the public to believe that the public wants what those paying want.  It is not the case that in a given poll all the questions are either good or bad.  There may be many good questions in a bad poll.  Among the polls the public gets to see, there are very few really good polls, though because of various checks and balances we’ll talk about later, most polls are pretty good.  But how can ordinary people, like most of us, tell the difference between polls that are really good, pretty good, bad and terrible?

Usually, really good polls are an honest and unbiased attempt to explore the public’s view, scientifically, and even if the pollsters are surprised or dismayed by the results, they publish them truthfully.  The worst polls try to confirm the biases of the organization or politician who’s paying for them, and seek to manipulate the public and engineer consent for their pet policies or to sell their products.

My small foundation, Americans Talk Issues (ATI), spent 14 years polling the American people – over the heads of the politicians and pollsters – to find out what policies they really wanted.  We published poll results in all of these issue areas – the national budget, national security, energy, trade, the environment, globalization and government reform – in the book Locating Consensus for Democracy (ATIF, 1998).  Now, here in this handbook, I’ve culled out the most important insights that every citizen needs to know to make the U.S. democratic system work better for the people – rather than for the special interests.

This book will be your tour guide through the polling business and its clients, and it will give you some easy ways to tell a good poll question from a bad one, a good poll from a bad one, and a good-guy sponsor from a bad-guy sponsor.  Not all the ways are easy.  Some require more detective work.  Readers may enjoy the challenge.  If you get bored or confused, skim the section and go on to the next.  There will be no test.

We’ll see that people who sponsor polls and those who conduct them are pretty smart.  After all, those not making six-figure incomes in the polling business are not really trying.  They are generally savvy people.  Still, the baddies often leave tell-tale signs of poll-rigging, and I’ll show you how to spot them.  And every once in a while, you’ll find out about a very big bad guy who falls facedown on the floor and loses his place in the political game.

There are a few basic things that must be said about polling.  The first is that in a vast sea of polls, the public sees only the tips of the icebergs.  Looking only above the waterline, there are three kinds of polls that the public learns about.  The first kind consists of the polls that the news media put out.  As the editors of the New York Times and the Washington Post have told me, the stories newspapers cover are just those that their readers are interested in.  In other words, the mainstream media try to tell us that they conduct polls to satisfy their readers’ thirst for knowledge.  Is this what really happens?

Mainstream newspaper readers are tracked by the advertising, circulation and subscription departments, which have a lot to say to the editorial department about what reader counts and earnings figures reveal about reader interest.  It is not hard to see what this means: the media choose to run polls that beef up stories, features, and news items in order to, well, help sell papers or advertising.

The editorial people don’t think that way.  They will be insulted if you tell them what I have just told you.  But they don’t see the big picture outside of their daily grind.  I was scorned years ago when I tried to explain to them how it works.  But the bottom line is clear.  Poll findings do not get placed in print, on radio, or on TV in order to educate the news consumer.

I once asked Kathy Frankovic, head of polling for CBS, why media pollsters never ran polls on what the people thought about eras that were ending.  What lessons did we learn from seminal events like the end of the Vietnam War or the collapse of the Soviet Union?  The people paid a lot in energy, casualties and tax money in these long-term efforts.  Was it worth it?  Could a better outcome have been achieved in another way?  Kathy’s answer was, “We don’t do social research.”  To the mainstream news media, what the people themselves want is not news.  How the people react to what political leaders want is their definition of news.  This is how many of the distortions and misperceptions in our political debates arise.

In big election years, media polling takes on another character.  Most of the polling budget for those years goes for covering the campaigns like horse-races or battles, full of war metaphors and brutal sport analogies.  It’s all about what the politicians want – in this case obvious – to be elected, and what the media want, a good, exciting story.  The two work together, hand in glove.

Elections are conducted and media poll questions pop up asking voters who they want elected.  But we the people have little real choice.  The only candidates who can get national media attention have paid a high price for that rise to prominence.  In the end it’s: “Do you prefer Tweedledum or Tweedledee?”  This is not to say that sometimes one is not better than the other.  During the campaign, most voters see a big difference between Tweedledum and Tweedledee.  But once dum gets into office, the pressure is the same on him as it was on dee.  The public’s needs get the bum’s rush in dum’s rush to please the top elites.

People who go into politics to do the right thing often get weeded out.  Their campaign funds dry up.  Most political newcomers convert to mastering the back-scratching techniques needed to stay in the game and rise to power.  They also master the art of convincingly explaining to a skeptical public that everything they do is in the public interest.  In fact, a weary and disillusioned U.S. electorate is beginning to believe that power-hungry, ego-driven politicians together with the money from special interest backers rarely serve the public interest.

When asked in one poll question, 64% of the public agreed that they “preferred that the politicians they vote for hold higher and more evolved moral and ethical values than they do.”  If that doesn’t impress you, keep in mind that most people have convinced themselves that their own ethics suit them fine.

The second kind of poll that people get to see consists of those sponsored by policy organizations and foundations – some large regulars like the Pew Charitable Trusts, the Kaiser Family Foundation, the Democratic and Republican parties and AARP, and many less affluent non-profits that occasionally scrape up the money needed to do a poll.  There are also a few polling organizations that occasionally field polls designed in-house, both to promote their name and as a public service.  Do these non-media polls have spin?  Yes, sometimes.

Finally, there are a few pollsters affiliated with universities, syndicators of poll findings, and independent non-profits, who truly poll in the public interest.  How can you tell how pure they are?  We’ll see how sponsors choose to fight spin, accept it, or depend upon it for their purposes.

Now, let’s look below the waterline at the extent of the icebergs and why there are so many of them.  Polling is a good business.  Top commercial pollsters earn six-figure incomes cranking out polls that good customers want.  A good customer is one who will plunk down $100,000 or more for a typical poll.  For that kind of money, the customer gets a scientific random sample of a thousand telephone or in-person interviews and, most important, the reliability and professionalism of a pollster who can assure them that the poll findings will be credible and satisfy their needs.

There is one kind of customer that regularly spends that kind of money.  Corporations, the great bulk of polling firms’ clients, do so much polling that they have been able to squeeze prices way down.  Still, commercial market research is a gigantic $40 billion per year business, most of which goes into surveying to find out what the public wants in the way of products and services.

A chunk of that dollar is for unscientific, but much liked, studies using focus groups.  A facilitator guides a dozen or so people sitting around a table for a couple of hours discussing whether they would buy various versions of a new dog food, a different kind of life insurance plan, or an improved SUV.  Sometimes researchers look in through one-way glass or observe a screened replay.  The group is more-or-less randomly selected to represent the particular public segment that the sellers believe represents their potential market.  A good facilitator can ferret out what is behind individual preferences.  The information can be very valuable and so is considered proprietary by the companies to keep their competitors in the dark.

Of course, the findings of corporate-sponsored random sample polls are top secret too.  The public never sees the results.  Even though this kind of poll is the most numerous by far, the public hardly knows that such polls exist and are the bread-and-butter of commercial polling organizations.

Commercial polling is 98% market research and less than 2% political polling.  You’ll see the effects of the market research on political polling.  Marketeers are responsible for spending what adds up to a national $200 billion annual ad budget.  Large corporations selling directly to the public, like car manufacturers, fast food franchisers, brand name apparel designers, hospitality chains, jewelry chains, media mogul empires, etc., launch multimillion-dollar ad campaigns, and from their point of view the money is seldom wasted.  They have our number, don’t they?  We the public are scrutinized by direct mail firms, advertisers, TV ratings or Internet snoopers who “data mine” our purchase records and place “cookies” on our computer hard-drives.  We are surrounded.

If you are a top pollster and most of your work is for marketeers, you give them what they want to know: how to describe their wares so that more people, and more upscale people, will buy them.  When commercial market research pollsters take over the job of satisfying political candidates, they naturally think in terms of selling them the way they sold corn flakes.  Package the politicians up like toothpaste and sell the public on them.  Don’t misunderstand.  Pollsters are smart, skilled, and know their business.  Manufacturers only want to sell what they make or might consider making.  So pollsters have to help marketeers find the right new “product” too: new issues that may be “hot,” the required spin to put on old issues, or the right slant for emerging news stories.

Political pollsters and campaign strategists trying to find out how to sell their candidates face one key factor that is different from market research polling.  Once they agree to represent a client, they have to stay with that client and hope to get him or her elected.  If they leave because the client isn’t selling (after all, at least half the candidates in a competitive race don’t get elected), no one will want to hire them again.  In the political field, there is no “manufacturer of candidates” who can make a new candidate during a campaign if the old one isn’t selling.  When commercial pollsters accept the occasional political candidate, or even when they specialize in political polling, as a few do, their mindset doesn’t change from the straight commercial one.

Another important thing to know about polling is this:  it’s all about numbers.  Are you a number person?  Who is?  Rest easy.  You are not alone.  The good news is that the only thing you need to know about numbers in polling is easier than learning how to make change for a dollar.  I never met anyone on the street who couldn’t do that – usually better than I can.  Come to think of it, seeing the significance of numbers in polling is a lot like counting change.  But first, basics.

We all know how to recognize blatant bias in polling with the proverbial “Have you stopped beating your wife?” question.  Another wise maxim is “Ask a silly question and you’ll get a silly answer.”  Computers have taught us a similar maxim, “Garbage in, garbage out.”  But we can sharpen up the essence of the problem in polling that occurs even with the simplest “yes” or “no” questions.  In such questions, the verb is often “agree or disagree,” “approve or disapprove,” or the slightly more complicated “favor A or favor B.”  There seem to be two choices, one or the other.  Even if the question is simply, “Do you favor A?” and no B is mentioned, there is a choice of “no.”  In this case, B is just “not A.”  Still, there is a small problem – with lots of consequences.  There is always a third choice and sometimes many more.

How often have you tried to respond to a telephone poll and found such yes/no choices absurd?  If you hung up, good for you.  The person taking the poll – called the respondent, or “R” – can respond with “I dunno,” or maybe not respond at all.  It happens.  The pollster doing the interview could ultimately give up and hang up, but more likely the interviewer goes to the next question and often begins to get real answers from R.  Of course, R can say other, more articulate things, like “I just don’t know,” or “I won’t/can’t answer that.”  These non-substantive replies are just lumped together in a “Don’t Know,” or DK, bin, whose contents in most survey reports are called the DKs.

Now, it is also possible for R to be more assertive, saying things like, “I object to the biased way you have formulated this question.  I neither approve nor disapprove.”  Such an R is not very cooperative from the interviewer’s point of view.  But that is only a mild version of the problem.  For the full problem, imagine the question asks R to choose between favoring choice A or choice B – two more-or-less opposite choices.  The real problem then turns out to be the most cooperative and knowledgeable R, who might say something like this.

“You ask me if I favor A or B.  Well, here is my honest answer.  I favor A – under conditions X.”  (Skip the details of what X is.  R could take a few hundred words to do it justice!)  It gets worse.  R goes on: “I favor B – under conditions Y.  I favor neither A nor B under conditions Z, and I favor both A and B under conditions W.”  With X, Y, Z, and W running into hundreds of words each, our know-it-all R has just burned up five minutes, and the interviewer cannot use a word of it, because he can only point and click on his CATI (Computer Assisted Telephone Interviewing) monitor for A or B or Don’t Know, the three “codes” that CATI has been pre-programmed to accept.

Look, I’ve listened to hundreds of interviews on a receive-only monitor phone, and I have heard Rs who say much of this.  I have never heard anyone actually say all of it, but it is theoretically possible.  The point is, to respect the public’s intelligence, poll questions should offer the widest possible range of response choices – bare yes/no choices are always coercive.

Now, the basics are over.  Here is where the numbers come in.  CATI keeps a running count of all those who have chosen each of the allowed choices – as we saw, at least three for every question.  When the survey is completed, CATI divides the counts by the number who were asked the question and gives us percentages, such as:

A: favored by 32%, B: favored by 64%, DK is 4% — which adds to 100% of those who were asked the question.

This is like putting change of a dollar into three piles: 32¢ in one pile, 64¢ in another, and 4¢ in the third.  It adds to a dollar.
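For readers who like to see the arithmetic spelled out, here is a minimal sketch of the tallying just described, written in Python.  The list of coded responses is invented purely to reproduce the 32/64/4 example above; real CATI software is fancier, but the counting is exactly this simple.

    from collections import Counter

    # Hypothetical batch of 1,000 completed interviews, coded with the
    # three codes CATI was pre-programmed to accept: A, B, or DK.
    responses = ["A"] * 320 + ["B"] * 640 + ["DK"] * 40

    counts = Counter(responses)
    total = sum(counts.values())

    for code in ("A", "B", "DK"):
        print(f"{code}: {100 * counts[code] / total:.0f}%")
    # Prints A: 32%, B: 64%, DK: 4% -- which adds to 100%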

Now, I promised you it would be simpler than making change.  Here is how.

When we pile up those pennies we have to get them right or someone will complain.  But in random-sample polling, there is a certain inaccuracy related to how many people completed the poll.  You don’t need to know much about that.  The exact way pollsters calculate this statistical inaccuracy is used to keep people who don’t understand it from playing in the polling game.  Pollsters are very careful to handle this problem accurately, and all you have to know is that the so-called sampling error in random-sample polls means you don’t have to treat the numbers as sacred or very accurate.  Pollsters choose the sample size to be somewhere between 700 and 1,000 (rarely as many as 2,000), and that turns out to mean mathematically that there may be an error, but it is probably less than four pennies in the pile with 64 pennies, less than three pennies for the pile with 32 pennies, and about one penny in the pile of 4 pennies.  It would hardly be any more accurate if the polling sample had 1,500 people who took the poll.  Larger sample size does not help that much.
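If you want to check those penny figures yourself, the textbook formula for the sampling error of a percentage, at the usual 95% confidence level, is all it takes.  Here is a short sketch; the formula is standard statistics, and only the sample sizes and percentages are taken from the example above.

    import math

    def sampling_error(p, n, z=1.96):
        """95%-confidence sampling error for a proportion p in a random sample of n."""
        return z * math.sqrt(p * (1 - p) / n)

    # The three "penny piles" from the example, at n = 1,000 and n = 1,500.
    for n in (1000, 1500):
        for p in (0.64, 0.32, 0.04):
            print(f"n={n}: the {p:.0%} pile is good to +/- {sampling_error(p, n):.1%}")
    # At n=1,000 the errors are about 3.0%, 2.9% and 1.2% -- under the
    # "four, three, one penny" limits.  At n=1,500 they shrink only a
    # little, which is why a larger sample does not help that much.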

An important thing to know is that even the worst polls do not muff this point, because it is the first thing that anyone looks at before they take a poll seriously.  The spin, as we will see, is much more obscure than cheating on sample size.

The key thing that the expensive poll reveals is those percentage numbers.  That is the main value produced from the $100,000 cost of the poll.  Yes, the percentage responses are not perfectly accurate, but accurate enough for political purposes, and no one is going to argue over them, except academics trying to make sure that PhD candidates know all about sample size, how to calculate it, and what it really means.  We will talk no more about sample size or statistical errors, except to say here how unimportant they may be.

A common situation is that two different policy proposals differ in their favorability rating by only one (yes, one) percentage point.  Even then, the more highly rated one is more likely to be favored than the less highly rated one among the whole population – say, all adults over 18 – regardless of the sample size.  The same is even more true if the difference is 2%, 3% or more.  With a difference above 3% and a sample size above about 700, even the academics will usually agree that the difference is significant enough to assert as if it were a fact.  That’s fine, but it misses a valuable aspect.  It is amazing how useful these small differences (as small as 1%) can be: admittedly not definitive, they still provide very helpful clues in a systematic search for the policies people most prefer.  The academics hate the idea, not only because it sidesteps all the highbrow mathematics they have mastered, but also because, first, it works in practical searches and, second, understanding it just requires common sense.  No university can give out PhDs for common sense.
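Here is a rough way to see why even a one-point edge is a useful clue.  This sketch assumes, for simplicity, that A and B are competing options in a single question, so no respondent picks both, and it uses a plain normal approximation; the numbers plugged in are hypothetical.

    import math

    def prob_leader_truly_leads(p_a, p_b, n):
        """Approximate probability that option A truly leads option B in the
        whole population, given that a single n-person sample put A at p_a
        and B at p_b (p_a > p_b).  Normal approximation; the covariance
        term reflects the fact that one respondent cannot pick both."""
        var = (p_a * (1 - p_a) + p_b * (1 - p_b) + 2 * p_a * p_b) / n
        z = (p_a - p_b) / math.sqrt(var)
        return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

    print(f"{prob_leader_truly_leads(0.45, 0.44, 1000):.0%}")  # about 63%
    print(f"{prob_leader_truly_leads(0.47, 0.44, 1000):.0%}")  # about 84%

A 63-to-37 tilt is nowhere near the academics’ threshold, but in a systematic search it points the right way more often than not – and for any positive difference the tilt stays above 50%, whatever the sample size.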

We need to perfect our democracy.  Ordinary people get paid for hard work.  We play by the rules, and we’re busy and involved in our own daily lives.  We are a large, quiet majority.  We do not have the time or energy to focus on the complexities of why and how we are not getting what we want and need from big government.  When we look closely at all the players, we begin to see that it is the System that forces all of us, including politicians and moguls, to play the roles the System itself seems to require us to play.  For now, getting the money out of politics is an important goal, one that may help politicians see how little attention they pay to what the overwhelming majority of us want and should have.  In time, other benign ways to transform the System will emerge and become apparent to those of us who think about these things.  Please join in and learn how to make your life more meaningful by helping others in ways that will help all of us – including you and every other person on the planet – get the governance we want, need and should have.
