
Chapter 8: Making Good Use of Numerical and Commonly-Used Word Scales
By Alan F. Kay, PhD
© 2004, (fair use with attribution and copy to author)
July 26, 2004                    

If a pollster asks, “On a scale of one to 10, how do you feel about Bill Gates?”, he gets some lively answers, like:   

    “He is great!  Give him a 10.”

    “I hate what the Internet has done to our business.  He’s a one in my book.”

    “I’m not involved in any of that stuff.  I’m neutral.  Give him a five.”

A scale like “one-to-10” brings out the great variety of people’s reactions better than the conventional “either-or” way to ask that question:

    “Do you like or dislike Bill Gates, chairman of Microsoft?”

Unfortunately, though it rolls off the tongue, the one-to-10 scale is not one of the better scales, as we will see.  

Can scales and other different kinds of response choices that pollsters use put a spin on the poll?  You bet they can.  Let’s first see how scales came to be used, what’s good and bad about scales, and how they produce spin.

When people in survey interviews are asked a typical either-or question, as we have seen, the lack of choices often produces large DKs.  But there is something else going on.  In a question with only two choices, “agree” or “disagree,” people sometimes do not feel strongly about the underlying issue.  They feel that flatly saying either “agree” or “disagree” would overstate their position.  This moves them to opt for their only remaining choice, DK.  Pollsters have moved to accommodate such respondents by expanding their substantive choices from two to four along a “strongly/somewhat” dimension, like this:  (1) strongly agree, (2) somewhat agree, (3) somewhat disagree, and (4) strongly disagree.  DK then becomes choice (5).  This takes care of the needs of those who only somewhat agree or disagree, and that generally includes virtually all respondents.  Very few feel strongly on every agree/disagree question in a survey with many such questions.

This five-point scale also adds a whole new usefulness to the public’s response to a poll question.  It allows pollsters to measure what in polling lingo is called “salience.”  If two policies that are alternative ways to deal with an issue both have large support, say, 80%, pollsters sometimes find amazingly big differences in the strength of that support, as much as 60% strongly and 20% somewhat flipping to 20% strongly and 60% somewhat.  A policy with 60% strong support is said to be more salient than one with 20% strong support.
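For readers who like to see the arithmetic, here is a minimal sketch (in Python; the code and names are mine, not the chapter’s) of the salience comparison just described: two policies with identical 80% total support but very different strength of support.

```python
# Illustrative sketch of "salience": two policies with the same total
# support can differ sharply in how much of that support is strong.
# The percentages are the ones the chapter cites as an example.

policy_support = {
    "Policy A": {"strongly": 60, "somewhat": 20},  # more salient
    "Policy B": {"strongly": 20, "somewhat": 60},  # less salient
}

for name, pct in policy_support.items():
    total = pct["strongly"] + pct["somewhat"]
    print(f"{name}: {total}% total support, {pct['strongly']}% of it strong")
```

Both policies print 80% total support; only the strong share distinguishes them, which is exactly what the two-point agree/disagree format throws away.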

When respondents really feel neutral about an issue, they would prefer to have a “neutral” choice explicitly allowed.  Why is this important?  There are some poll questions where “neutral,” when offered, captures the plurality, even a majority.

An example from ATI#18 is:

Q24.  “Should social security taxes be cut, increased or kept the same?”

Response: Cut 14%; Increased 20%; Kept the same 64%.

Those responding “cut” or “increased” were asked, “By how many billions of dollars?”

The four-point scale (not counting DK), when augmented with a “neutral” option, becomes an improved five-point scale (one to five) with the neutral point at three, clearly in the middle.  Some respondents respond to a policy or issue question by thinking, “On this one I’m in the middle,” and they’ll choose “three” because of that.

Scales with an even number of points, like 10, have no middle value to represent neutral, while odd scales do.  To anchor the middle value as the neutral point, as a courtesy to respondents, odd scales (like five- or seven-point scales) are desirable.

A further step in this direction is illustrated by the following question introduction.  The interviewer reads off the CATI screen these words: “On a scale of one to seven, where one means ‘very opposed’, four is ‘neutral’ and seven is ‘very favorable’, how would you rate [a specific policy]?”  This seven-point scale was defined or “anchored” at only three points: the two extremes and the neutral point.  People have no trouble understanding and using seven-point scales anchored at only three points, or even those anchored at only the two extreme points.

With the next larger odd-point scales (nine or 11), it becomes unclear whether the public can make useful distinctions.  Is there any real difference on a nine- or 11-point scale between the attitude of the person who chooses six and that of the person who chooses seven?  Judging from many questions asked both ways, the answer is “very little.”  Tests have also shown that responses hardly change when a question is asked first with a seven-point numerical scale and then, in another survey, with only a two-point, favor-or-oppose, response scale.  It turns out that the percentage of persons choosing “five, six or seven” in the first case will be close to the percentage choosing “favor” in the second.  In practice, the two different scales do not lead to much different results.  Polls with large scales create data that verges on the useless or misleading.  The one-to-10 scale has this problem, and that’s a second reason it is not so good.

In any “for or against” question, including “yes/no,” “favor/oppose,” “agree/disagree,” “approve/disapprove,” “do/do not support,” it is better to use a seven-point scale running from minus-three to plus-three, where zero is neutral, minus-three is very opposed and plus-three is very favorable (thereby anchored at three points).  Zero is a natural anchor for neutrals.  Negative numbers suit people who are negative on the question, and positive numbers suit people who are positive on it.  Do negative numbers bother people?  Not really.  Today, people have no more trouble with negative numbers than they have dealing with weather forecasts of 14 below zero in Minneapolis, at least if they are in Florida watching it all on TV.  Beyond that, there is a definite plus for the zero-centered, seven-point scale: no number greater than three need ever be mentioned.  Dealing with a scale that is “as simple as 1, 2, 3” is better than one that is a little harder for the innumerate.  Who has ever heard someone say, “It’s as simple as 1, 2, 3, 4, 5, 6, 7”?  Case closed.
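As an illustration only (the function and names are mine, not the chapter’s), recoding a conventional one-to-seven response onto the zero-centered minus-three-to-plus-three scale is a simple shift of the midpoint to zero:

```python
# Minimal sketch: recode a 1-to-7 response so the midpoint (4) becomes 0,
# giving the zero-centered scale the text recommends
# (-3 = very opposed, 0 = neutral, +3 = very favorable).

def recode_to_zero_centered(response_1_to_7: int) -> int:
    """Shift a 1..7 response so that 4 (neutral) maps to 0."""
    if not 1 <= response_1_to_7 <= 7:
        raise ValueError("response must be on the 1-to-7 scale")
    return response_1_to_7 - 4

print([recode_to_zero_centered(r) for r in (1, 4, 7)])  # [-3, 0, 3]
```

The design point is that both forms carry the same information; the zero-centered form simply makes “neutral” and “negative” visible in the numbers themselves.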

So, altogether, the old one-to-10 scale is not up to snuff on three counts.  It’s even, not odd; it’s a bit large; and it is all positive.  But that scale and most other numerical scales have one thing going for them.  They don’t add spin.  Spin is to be found in non-numerical scales, and they are important when the political stakes are high.

Manipulation of Non-Numerical Scales for Political Purposes

We are talking about spinning political poll questions planned for public release, purporting to explain what the public itself wants.  Often these are “high-profile” polls that play a key role in a public relations campaign designed to have a major impact on the political agenda of the country.  Such campaigns can succeed in setting or changing the agenda.  They can make front page headlines.

A high-profile poll can be many thousands of times more significant than most of the hundred or so polls that policy organizations release every year, which are typically never seen or heard about by more than a million people.  This sounds like a large number, but it is less than one half of 1% of the people of the United States.  The typical poll disappears from public view within a few days, forgotten by all but the relatively minuscule few who are poll watchers.

A high-profile poll alone cannot affect a significant percentage of Americans.  It can serve as the keystone in the arch of the campaign that does have such an impact.  If its sponsors have enough political clout, funding, and mainstream news media access, the campaign can be that kind of a huge success.  The high-profile poll’s legitimacy is used to assure the elites that the American people are behind whatever it is that the sponsors are aiming for.  The critical role of high-profile polling is that it becomes a key element of the campaign that makes history. 

The high-profile poll sponsor, or some of its sponsors, or some of the key individuals representing a sponsor, are generally aiming higher than just getting some specific legislation enacted.  They aim to keep the United States favorable to their own interests or visions, essentially to shape the future of the country, the nature of our government and our society.  They may be looking to control the definition of what it means to be an American, a definition that may be very different in the future from what it has been in the past.  Some seek a definition of an America that does not change, even if change seems to many to be required for the prosperity of the United States, perhaps someday even for our very survival.

The sponsors are the special interests who may truly believe that their cause is America’s cause; that their view must prevail.  The campaign may be remarkably successful, even though the high-profile polling it depends on may be misleading.  The polling may produce some findings that are completely erroneous, or are deliberately and stealthily steered during the design, analysis and/or promotion phases into engineering consent for whatever the controlling individuals are seeking. 

Let us look at some examples of high-profile polls that falsified the voice of the people by the misleading use of scales.  One poll, mentioned in Chapter 6, was given twice to a random sample of people, both before and after they attended the National Issues Convention (NIC) in Austin, Texas, in January 1996.  The concept being tested by the NIC was that, after a weekend of deliberation on political issues, “ordinary” people would make better choices.

Unbalanced Scales

These NIC survey question choices, I believe, were not chosen as part of a plan to change America; rather, they were inadvertent and incompetent.

The following results show how an unbalanced response scale pulls answers toward the side with more options.  Question S11 of the NIC survey had a five-point scale, three choices on one side – “extremely,” “very,” and “somewhat willing” – and two on the other – “not very” and “never willing.”  The neutral point was not in the middle.

Here were the responses to the same poll asked before and after the convention:

S11.  In the future, how willing should the United States be to send troops to solve problems in other countries?

                     Before        After
                     Convention    Convention

Extremely willing       2%            6%
Very willing            8%           12%
Somewhat willing       53%           55%
Not very willing       24%           22%
Never willing           6%            2%

A fair fraction of respondents do not pay much attention to the interviewer’s instructions, which define the scale.  The words “extremely,” “very,” “somewhat,” “not very,” and “never” flow by them quickly.  Few realize, as the list of choices is read, that the middle choice is not neutral.  As has been mentioned, many respond as if their thought process were something like, “On this one, I’m in the middle,” and “On that one, I’m at the top.”  If this kind of thinking were the dominant factor, then “somewhat willing,” the middle category, should be counted as neutral; if not, “somewhat willing,” by the meaning of the words themselves, should be counted among the willing.  In the former case, the conclusion is only “10% are willing.”  In the latter case, the conclusion is “63% are willing.”  A big difference.  Because the five-point scale lacks a middle choice that is clearly “neutral,” all we can know for sure from this question is that the public was somewhere between 10% and 63% willing before the convention and between 18% and 73% willing after it – an enormous range.  The finding is not informative.  It’s really a joke.
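The bounds just described can be checked mechanically.  This is an illustrative sketch only (the dictionary layout is mine), using the S11 percentages quoted above:

```python
# Sketch of the chapter's arithmetic for the unbalanced S11 scale: the share
# "willing" depends on whether "somewhat willing" is read as neutral (it was
# the middle choice) or as willing (its literal meaning).

s11 = {
    "before": {"extremely": 2, "very": 8, "somewhat": 53,
               "not very": 24, "never": 6},
    "after":  {"extremely": 6, "very": 12, "somewhat": 55,
               "not very": 22, "never": 2},
}

for wave, pct in s11.items():
    low = pct["extremely"] + pct["very"]   # if "somewhat" counts as neutral
    high = low + pct["somewhat"]           # if "somewhat" counts as willing
    print(f"{wave}: willing somewhere between {low}% and {high}%")
```

Running this reproduces the 10%-to-63% (before) and 18%-to-73% (after) ranges in the text, which is the whole problem: one question, two readings, wildly different conclusions.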

What is informative is a small increase in willingness overall from “before” to “after” deliberation.  What constituted those deliberations and what was their effect?  The answer is wrapped up in the next section.

People Like Me

Questions that might otherwise be all right have potential problems for deliberative survey usage.  Three questions included in the National Issues Convention were:

1a.  People like me don’t have any say about what the government does.

                     Before         After
                     Deliberation   Deliberation

Agree strongly         18%             6%
Agree somewhat         26%            25%
Disagree somewhat      31%            32%
Disagree strongly      25%            36%

1b.  Public officials care a lot about what people like me think. 

                     Before         After
                     Deliberation   Deliberation

Agree strongly          7%            11%
Agree somewhat         34%            49%
Disagree somewhat      37%            30%
Disagree strongly      19%             9%

1c.  Sometimes politics and government seem so complicated that a person like me can’t really understand what’s going on. 

                     Before         After
                     Deliberation   Deliberation

Agree strongly         18%            17%
Agree somewhat         37%            42%
Disagree somewhat      23%            22%
Disagree strongly      20%            18%

 

Questions 1a and 1b were cited as evidence that the NIC “empowered” participants – that after deliberating for a weekend, people believed that leaders paid more attention to them because they had been smartened up by the wisdom received from the deliberation.  At a cost to the organizers of over $8,000 per head, this model of empowerment holds little promise for citizens at large.  But what was really going on was yet another way of producing the “Down, boy” effect, first discussed in the last chapter.

What did “people like me” mean to a person who had to answer a survey question before the convention not just for him/herself but on behalf of “people like me”?  The elitists who organized and funded the convention thought of “average” people or “ordinary” people without realizing the great diversity of the respondents who were to participate in the convention: a group as diverse as America itself, if the sampling was properly performed.

At the convention, participants met with and were addressed by leading politicians of both political parties and by well-known news media reporters and editors.  In the “after” survey, the thinking of participants had to shift toward “a person like me” being someone whose opinion is valued by the high and mighty who were speaking to them, lecturing them, listening to their responses, arranging for televised proceedings and broadcast airtime on major channels, and paying for the weekend excursion of a thousand people to Austin from all around the country.  No one can say definitively whether the before-and-after response shifts were due to the various activities at the convention that were called “deliberation” or to the shift in thinking about who “people like me” are.  More recent research strengthens the case for the latter.

The Contract with America

In the fall of 1994 shortly before the election, the Republicans in Congress led by Newt Gingrich announced to the media a proposed deal.  If the American people elected a Republican Congress, the Republicans would make every effort to enact into law 10 policy proposals favored by the public, according to a poll that they had commissioned.  In the month before the election, the Republicans waged a campaign to get media coverage of what they called the “Contract with America.”  The campaign was not very compelling.  It is not surprising that the media treated it like a poor publicity stunt and ignored it. 

The next month, when the Republicans swept into power, the press tried to make up for their earlier inattention by calling the victory a landslide, christened it the Republican Revolution, referred to the contract in words that suggested a sacred text, followed the 10 contract items like 10 horse races for the next year or so, and gave the Republicans tens of millions of dollars of free publicity.  Actually, in the 1994 election only slightly more than 2% of voters shifted from Democrat to Republican compared to 1992.  The Republicans did not get a majority of registered voters in 1994.  The turnout, in fact, was on the low side.

The Republicans never released a survey showing that the public supported all the items of the contract.  They claimed that they hired Frank Luntz, the erstwhile Perot pollster, to do so.  Luntz, about a year later, when pressed, explained to an enterprising Knight-Ridder reporter, Frank Greve, that he had only tested item wording for the purpose of advertising slogans and found 60% support for them as catchy slogans, not for favoring legislation.  The Republican leaders no doubt felt, on the basis of the rhetoric they used with each other, that the public surely must favor these ideas.

The Oct. 3, 1994, USA Today edition listed the contract items as:

 1. A balanced budget amendment.

 2. “Anti-crime” measures, including tougher sentencing and death penalty rules, and more prisons.

 3. Cuts in welfare spending and a ban on welfare for minor-age mothers.

 4. “Family reinforcement” measures, including a tax credit for elderly care.

 5. A $500-per-child tax credit.

 6. Increased defense spending to restore “essential parts of our national security.”

 7. Repeal of 1993 increase in taxation of upper-income individuals’ Social Security benefits.

 8. A cut in the tax on capital gains.

 9. Limits on punitive damages from civil suits; reform of product liability laws.

10. Congressional term limits.

The Significance of the Contract

The campaign helped to create the Republican Revolution, which passed dozens of bills that did change the course of the nation.  The fact that some of the poll items were not supported by the public when tested by other pollsters was ignored by the media, and by the Democrats and Ross Perot’s Reform Party.  The guilty parties were not only the Republican Party, which created the outrageous fallacy of the contract with the claim of large public support for every item.  Also at fault were the Democratic party and Perot’s Reform Party, which having utilized no better polling than the Republicans on what the American people wanted, left the fallacy unchallenged during and after the campaign.  Finally, the mainstream news media can be blamed as well.  They neglected the counter-findings of other pollsters, including the balanced teams of pollsters working for ATI in Surveys #22, #24, and #28. 

So, can you spot the spin on the Republicans’ poll?  You might think it tough to do, since it was only a virtual poll.  That fact in itself is big-time spinning or lying by omission, mentioned in Chapters 1, 3 and 5 as the favorite method of spinning or burying undesired news stories by the media and by political leaders.  But the 10 contract items were even more virtual than that.  After Oct. 3, and throughout the following year, the Republicans substituted different items for the 10 above.  The only consistency in this sea of change was that there were always 10 items.  This public relations technique helped to implant the idea in the minds of the media that the contract had the timeless saliency of the Ten Commandments.    

But this book is about spin.  Look at those initial 10 items.  Plenty of spin there.  No report of the size of the “Don’t Knows.”  No scales.  No specific question wording.  Which of the 10 might be supported by a majority of Americans with some fair and balanced wording?  From the results of ATI and many other pollsters, the list would go something like this:

Supported: 4, 5, 10.

Support definitely depends on wording: 1, 2, 3, 9.

Not supported: 6, 7, 8.

A pretty poor score for a set of policies that the media — without even knowing the question wording — reported were favorites with the public.

The 1992 and 1996 Perot Campaigns

In the 1992 presidential campaign, Perot assumed a populist stance, in agreement with what public interest polling teaches, that the people are justified in believing that elected officials of both parties are beholden to special interests, out of touch with the people, and interested primarily in their own careers, incomes and elections.  Perot had for many years before 1992 championed electronic town meetings and promised that the cornerstone of his administration would be listening to the voice of the people. 

What Perot failed to mention was that, like the leaders of the two major parties and almost all politicians, he did not believe it necessary to do good survey research in order to know what people wanted.  Being very smart and politically in tune with crowds, he believed that he already knew what the people wanted, although he was careful to never say that directly.  As the great salesman he was, both at Electronic Data Systems and later, he knew how to sway prospects and other audiences by language, tone, demeanor, etc., and how to recover if he did not get it quite right, which occasionally happens to even great salesmen. 

In March of 1993, Perot conducted his first and only electronic town meeting – widely advertised and promoted – in a half hour purchased from ABC TV, which he turned into an infomercial, whose centerpiece was his Reform Party poll on what people wanted for governance.    

Many of Perot’s proposed policies were presented as all favorable and thus scored well with the public.  They had the spin of the choiceless choice.  You remember Mrs. Thatcher’s “There Is No Alternative” in Chapter 5.  Perot did not really want to know, and conform to, the people’s more sensible desires, any more than the leaders of the major parties did.  Perot was no populist.

Here is an example to illustrate this point.  Perot’s campaign finance reform question, Q1, found 80% support, which Perot took as a mandate for his campaign:

Q1 (Perot).  Should laws be passed to eliminate all possibilities of special interests giving huge sums of money to candidates?

The bias of Q1, the lack of any negative statement about Perot’s proposal in the question or in a preamble, was rightly challenged by a highly regarded professional pollster, Warren Mitofsky, executive director of the organization supplying exit polling for four TV networks in campaign ’92.  Mitofsky submitted an op-ed letter to the New York Times, which appeared under a five-column banner proclaiming, “Mr. Perot, You’re No Pollster.”  Because Perot’s was a high-profile poll with a lot of political significance, the Times published Mitofsky’s letter, which compared Perot’s result to a Time/CNN poll that found only 40% favorable.  Mitofsky considered this Time/CNN poll comparable and, because it contained a counter-argument, also “more balanced.”

Q2 (Time/CNN). Should laws be passed to prohibit interest groups from contributing to campaigns or do groups have a right to contribute to the candidates they support?

However, note that Time/CNN had watered down Perot’s proposal by eliminating two features that gave it strength in the minds of the public: (1) “…eliminate all possibilities…” (i.e., NO loopholes) and (2) “huge sums” (NOT small sums).  The public is well aware that reform laws are passed with loopholes, which make them ineffective, and that the problem in campaign financing is not small contributions but “huge sums.”  Perot was not talking about small sums.  Some of the drop in support for Perot’s proposal came from weakening it, and some from the counter-argument, which loomed stronger precisely because Perot’s proposal was weakened.  This illustrates a concept first mentioned in Chapter 5:  Setting up a comparison of a weakly supported policy B against a policy A that is a weakened version of a highly favored policy A’ increases support for B to the point where B is favored over A, even though A’ is favored over B.  Stated this way, it is clear that trying to get people to think that A and A’ are the same, when they are not, is wicked.

But Perot’s favored proposals did not always score so well.  ATI tested a question with Perot’s wording:

Perot. 59% favored “Pass a balanced budget amendment to the Constitution with emergency funds limited exclusively to national defense.”

ATI had also obtained the following result with slightly different wording:

ATI.  72% favored, “Pass a balanced budget amendment to the Constitution with emergency funds exclusively to major national disasters.”

By enlarging the exception from “national defense” to “major national disasters,” the slightly reworded ATI question scored 13 points higher (72% vs. 59%), showing that Perot’s pat wording was a bit oversimplified.  Perot’s failure to note the strength of “major national disasters,” and probably of several other exceptions, illustrates again that Perot was not interested in finding the policies most wanted by the people.

To give Perot his due, a question using Perot’s wording scored the highest when tested against over 50 different proposals for government reform in several ATI surveys.  It was favored by 80% as follows:

80% favored, “Reduce the salary and benefits of members of Congress to let them know that we really want spending cuts and that cuts should start at the top with themselves.”

We will see in later chapters still other spin techniques used in high-profile polls to make monkeys of the media.  Stay tuned.

