Chapter 12: Historical Persistence of Public Opinion

By Alan F. Kay, PhD
© 2004, (fair use with attribution and copy to author)
Sept. 10, 2004                    

Public support for a government policy is persistent, meaning that it changes very slowly over the years — unless a major event relevant to the issue occurs, in which case support generally shifts, if at all, in the expected direction.  Moreover, when such a shift occurs for the public as a whole, a parallel shift prevails for each major demographic sector except, perhaps, in the rare case that the policy is aimed at affecting a particular sector.

A related phenomenon, called historical persistence, can extend over decades, when new and old events occur that have features and aspects with considerable similarity. The reward for those with some understanding of the complex socio-economic-political relationship between the elites and the people, and who also start to study historical public opinion, will be a growing ability to predict from the old events much of what will happen as the new events begin to unfold.  It is easier to understand the amazing significance of such findings by exploring case studies rather than seeking theory, starting with the first of three examples:

The All-Time Persistence Record – 68 Years.

Scientific polling started when George Gallup, Sr. predicted with startling accuracy the 1936 re-election of Franklin Delano Roosevelt (FDR). Other highly regarded polls showed FDR losing to the Republican candidate, Alf Landon. Gallup developed a random-sample method based on sampling theory, well known to statisticians.  Gallup’s success in predicting the re-election of FDR in 1936 opened the modern era of polling, which soon discredited “straw” or self-selected polls that had been around since the 19th Century.  Gallup’s methodology, as refined, is the only application of the scientific method in all of so-called “political science.” (Please don’t say that to a political science professor.  He or she might break into tears.)

The story that spans the 68-year era of scientific polling links FDR to current President George W. Bush. The link appears in polling data comparing public support for FDR prior to and after the attack on Pearl Harbor with support for George W. Bush before and after the events of Sept. 11, 2001. No other events of modern times can be compared with either of these two, but they can be compared with each other. The eras were very different. So were many aspects of the events themselves. That said, one of the similarities, not previously noticed, is that public support levels, and trends over time, for the two leaders prove to be uncannily similar. Another similarity is that the early pollsters, like Gallup and Roper, did surveys that explored the public’s interest, while only in the last few years has the importance of public-interest polling started to be realized again. Five years ago, one of George Gallup Sr.’s associates back in the 1960s, Winston (Wink) Franklin, wrote: “Gallup was a true believer in public-interest polling and would be appalled if he could see how opinion polling is used today.” Polling indeed has become an industry dominated by commercial pollsters, including today’s huge Gallup and Roper organizations, catering to moneyed special interests. A tide of high-priced pollsters, campaign managers, political advisors, pundits and media overwhelmed the earlier political landscape of FDR’s era.

Bush’s job approval rating by the public before 9/11 hovered in the 55% to 62% range for the first eight months of his presidency. Over many years, my colleagues and I conducted public-interest polls examining why and when the U.S. public favors the use of force. An unexpected attack on the U.S. homeland, our data showed, would produce nearly unanimous support for action against the perpetrators. In the case of 9/11, the required action had to be a global effort to track down and bring to justice those responsible.

Immediately after 9/11, Bush’s ratings shot up. Within a few weeks, Bush had made clear he was pursuing a course very close to what most people wanted. His rating peaked at 90%. For the year thereafter it remained high, but slowly drifted down, crossing 66% in August 2002.

Bush’s high ratings enabled him to get from Congress almost whatever he wanted. High presidential approval ratings in wartime also squelch alternatives to the president’s domestic legislative and regulatory initiatives. Declaring war gave Bush an opportunity to push his domestic agenda successfully.

How do FDR’s approval ratings compare with Bush’s? In his second term, well before the start of WW II in Europe, FDR’s approval ratings were within the same range as Bush’s, as follows:

Roper/Fortune: May 1938, 55%.
Gallup: Nov. 1937, 63%; May 1938, 54%; July 1938, 52%; Sept. 1938, 52%.

An important part of this story is the opportunity for discovery that drove the initiators of the era of scientific polling. Polling was so new that experimenting and learning from survey to survey occurred at every opportunity. Tests on the effects of different question wordings were routine. Gallup and Roper found that many small changes in wording and formatting made almost no difference in the responses to poll questions, but some did. Here is an example that is important for our Bush/FDR comparison. Before Sept. 1938, Gallup’s wording of the rating question was crude: “Are you for or against Roosevelt today?” After September 1938, Gallup switched to what was proving to be a more standard “approve/disapprove” wording that from November 1938 through July 1940 produced approval for FDR in a narrow 56%-64% band. Dips down to 52% no longer occurred. So, for similar question wording, Bush’s approval band, 55%-62%, before Sept. 11, 2001, was virtually the same as Roosevelt’s 56%-64% before Dec. 7, 1941.

After Dec. 7, 1941, it was wartime. Gallup ran surveys once or twice a month. Roper specialized in questions on approval of Roosevelt’s attitude toward specific issues and legislation. Looking closely at the evolution of their question design and wording, it is clear that both were seeking reliability and consistency of responses with reasonable concern with the fairness, balance, and accuracy of their findings.

The early pollsters felt free to make occasional word substitutions to create new questions. For example, along with the standard “Do you approve or disapprove today of Roosevelt as president,” in March 1940, Gallup asked a question never before tried, “Do you approve or disapprove of the way Mrs. Roosevelt has conducted herself as First Lady?” Eleanor received a 68% approval rating, eight points higher than Franklin’s 60%. The “First Lady” question was never asked again. Today commercial pollsters, as we noted in Chapter 2, would consider such questions irrelevant for their purposes, which are to tailor question wording and formatting to maximize public support for the policies that best satisfy their client’s financial backers.

For the first year of the war, Gallup surveys showed support for the way FDR was handling his job, as follows: January 1942 over 85% approval, slowly dropping as follows: February, 82%; March-May, 80%; June-August, 77%; September-November, 72% — very similar to the high, but dropping off, ratings George W. Bush received in the three years since Sept. 11, 2001.

After Dec. 7, 1941, approval for FDR’s handling of domestic issues, like his overall rating, initially very high, slowly dropped off. It is amazing that the founder of random-sample polling asked this question with a phrase (“here at home”) that is still sometimes used today.

Gallup:  “Do you approve/disapprove of President Roosevelt’s policies here at home?”           

January 1942, 77%; February 1942, 73%; June 1942, 71%

Pushing the analogy to its limit, this data from six-decade-old archives suggests that for at least six months after Sept. 11, 2001, Bush would be successful with both international and domestic initiatives, and that was correct.

On the other hand, considering specific issues and legislation, whether it was early pollsters questioning FDR’s support or current pollsters questioning Bush’s support, results vary widely from one issue to the next, sometimes favorable and sometimes not.

What can be done with these findings?  In the last year, the obvious and unique parallel between the 9/11/01 and 12/7/41 attacks was noted by many people, but did anyone think to look into polling databases during the year to get an insight into how Bush’s ratings might unfold?  To my knowledge, no.

Having written about the stability and persistence of public-interest poll findings, even I was surprised to find that both the size of, and trends in, public support for the two presidents’ domestic and international policies would be so consistent across a 60-year span. Though relatively few, enough public-interest polls are conducted now to be compared with those few of 60 years ago, demonstrating an uncanny similarity between public opinion levels and shifts in the old era and the new when the circumstances are sufficiently similar — as we have seen comparing the Roosevelt era with the Bush era. The stability of public opinion when unusual conditions repeat themselves is worth examining and may continue to prove awesome.

Second Example: “How much of the time do you trust the government in Washington to do what’s right?”

This question, word-for-word, has been asked at least annually for 46 years, often much more frequently than that. Respondents are asked to choose one of three responses:

1. “Just about always,” 2. “Most of the time,” or 3. “Only some of the time.”

The findings show a long-term trend of increasing mistrust, leading us to label this as the “Mistrust” question. Starting at a low of 22-23% in the years 1958-1965, “only some of the time” climbed to a high plateau of about 79% during 1993-1997. The all-time high was 82% in November 1993. Over the years 1958-1997, “most of the time” dropped from about 60% to 15%. Support for “just about always” has been in the single digits — negligible — during the 30-year period of 1967-1997.

This growth of mistrust was by no means linear or smooth. During times of government scandals from Watergate to Whitewater, mistrust increased much more rapidly than the long-term trend line. The easiest way to understand the phenomenon is to know that mistrust resulting from major scandals peters out in a few years — not down to where it was before the scandal broke, but about halfway down. Noticing this from years of survey research has yielded a public-interest polling rule of thumb: “It takes about twice the amount of good happening to produce as much increase in trust as the amount of bad to cause loss of trust.” Though crude and simplistic, the belief, “Once bitten, twice shy,” seems to characterize the public view.
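The “twice as much good” rule of thumb can be sketched as a toy update model. This is purely illustrative: the half-weight rebuild factor below encodes the rule of thumb as stated, and the numbers are invented for the example, not taken from any survey.

```python
# Toy model of the "once bitten, twice shy" rule of thumb: bad news
# erodes trust at full weight, while good news rebuilds it at only half
# weight, so it takes roughly twice as much good to undo a given loss.
# The 0.5 rebuild factor is an illustrative assumption, not a measured value.

def update_trust(trust, shocks):
    """Apply a sequence of signed 'news' shocks to a trust level (0-100 scale)."""
    for shock in shocks:
        if shock < 0:
            trust += shock          # bad news counts in full
        else:
            trust += 0.5 * shock    # good news counts at half weight
    return max(0.0, min(100.0, trust))

# A scandal that costs 10 points of trust...
after_scandal = update_trust(60.0, [-10])
# ...takes 20 points' worth of good news to climb back from.
recovered = update_trust(after_scandal, [20])
print(after_scandal, recovered)  # 50.0 60.0
```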

In the eight-year period following — June 1993 to March 2001 — the mistrust trend line decreased slowly from 79% to 69%. The trend line is the best-fitting straight line for the mistrust question data points available from the largest polling database repository. A dozen well-respected polling organizations, most asking the question repeatedly in a series of surveys, had asked the mistrust question 42 times in that period. March 2001 was the date of the last asking prior to the event that changed the world, Sept. 11, 2001. It is odd that, after being asked almost monthly for the previous three years, the mistrust question was not asked by any pollster in the six-month period from March 2001 until two weeks after Sept. 11, when the Washington Post found mistrust had dropped enormously, down to 36%.
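A “best-fitting straight line” of this kind is an ordinary least-squares fit over the poll readings. As a sketch, using hypothetical data points standing in for the actual 42 askings (which are not reproduced here), the fitted endpoints can be computed like this:

```python
# Ordinary least-squares fit of a straight "trend line" to mistrust
# readings. The readings below are hypothetical stand-ins, not the
# actual 42 askings from the polling archives.

def fit_trend(points):
    """Return (slope, intercept) of the best-fitting line y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# x = years since June 1993, y = percent choosing "only some of the time"
readings = [(0.0, 79), (2.0, 77), (4.0, 74), (6.0, 72), (7.75, 69)]
slope, intercept = fit_trend(readings)
start = intercept                   # fitted mistrust at June 1993
end = intercept + slope * 7.75      # fitted mistrust at March 2001
print(round(start, 1), round(end, 1))
```

With real archive data the same two fitted endpoints would give the 79% and 69% figures quoted above.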

During September-December 2001, the mistrust question was asked four more times, producing “only some of the time” support bouncing up and down a lot in the range of 31% to 53%. In view of the anthrax scares that occurred then, on top of the roller-coaster aftermath of 9/11, this rapid fluctuation was not surprising. The 31% low is 38 points below the 69% low of the trend line. For understanding public opinion under stressful conditions, these findings are very significant.

Look at it this way.

After Sept. 11, for the first and only time in the 45-year history of the mistrust question, mistrust plummeted and trust in government rapidly rose for reasons having little to do with government scandals. The explanation that best fits the facts is a new public-interest poll finding, apparently previously unnoticed. When the homeland is as seriously threatened as it was then, about one-third of Americans who — if asked — would have previously expressed their mistrust in government, were now ready to turn around and trust the government.  Seemingly independent of any improvement in its performance, government was seen as the only force realistically imaginable that could help the United States overcome such a traumatic setback.

From February 2002 through October 2003, the mistrust question was asked in polls another 12 times. Mistrust rose sharply after June 2002, up to 61%, when the U.S. government began focusing on Iraq. Many thought an Iraq invasion only tenuously and indirectly concerned homeland security. Mistrust after June was halfway back to its all-time high, and more than two-thirds back to its early 2001, pre-Sept. 11 levels. It is reasonable to sum up the public’s new attitude, “Mistrust is back,” as “Government? What’s it done for us lately?”

A new, severe and successful al-Qaeda attack on U.S. soil might quickly boost trust in government to the levels reached for the three months after Sept. 11. That would be a big boost in public support for President George W. Bush. What’s good for al-Qaeda turns out to be good for Bush.

It is already known that the relationship works in reverse. The very nature of terrorism does not discriminate between ordinary, innocent people and elites. To al-Qaeda, the “innocents” are seen to be less innocent the more they support their president. If public support for Bush rises, al-Qaeda, at least in the Muslim world, can increasingly justify its terrorist acts. Does this two-way relationship seem weird? Psychiatrists call it “co-dependency.” There are other government reform findings that support such types of co-dependence.

The larger picture that emerges from a study of these findings is that responses in public-interest polls have a persistency not unlike a country’s culture itself. A culture resists change. It tends to fend off novel challenges — new art, music, status attitudes, food acceptance, etc. When new developments are persistent and strong enough, aspects of culture do change, but only as little as necessary to keep most people reassured.  Close study can uncover and identify these developments and their underlying factors.

This example illustrates that the persistence, resistance to change, and flipping as little as possible also characterize the long-term morphing of attitudes revealed by good public-interest polls.

Third Example: Gulf War, 1990-1991, Lessons for Further Middle-East Invasions. During and shortly after the Gulf War, my colleagues and I conducted four surveys that may now tell us something about how public attitudes will unfold in the course of an accelerating confrontation with Iraq’s leadership, or lack thereof. One important difference is that in the first Gulf War the issue was oil — not weapons of mass destruction, not regime change, not finding al-Qaeda, not making Iraq into a model democracy. Many, looking at the oil interests that permeate the second Bush administration, believe that the issue is still oil. So far, however, the mainstream press only carries that linkage incidentally, as in cartoons.

James Baker, then U.S. secretary of state, in 1990 explained to the troops sitting in the Arabian desert and facing war that their purpose was protecting “jobs, jobs, jobs.” The administration of President George H.W. Bush publicly presented no high-minded objective like “bringing democracy to the Middle East” or producing “a lasting peace in the Middle East,” which 87% of the people said in ATI’s survey #14 was “very” or “extremely” important as an objective of the impending war. Only more recently has a president come to understand, through ATI public-interest polling, when the American people favor the use of force and what leaders have to do to get support for war. George W. Bush has fully and masterfully adopted those ideas.

Some problems and opportunities of the two situations 12 years apart are surprisingly similar.  One similarity is that during the five-month, U.S.-led Gulf War build-up along the Arabian-Iraq-Kuwait border, it seemed that the only thing that would stop the U.S. invasion of Iraq would be Iraq’s prior capitulation. A question asked at that time was introduced with this lead-in:

“Here are some things that some people thought we should have done but did not do before the situation in the Persian Gulf happened.  As I mention each one, please tell me if you think it would have helped a lot, helped a little, or not helped at all, to make the confrontation with Iraq unnecessary” . . . .


The question then went on to present nine things that were not done, six of which got 70+% consensus support for helping “at least a little” and all nine had majority support.  Here are the responses in rank order of “helped a lot:”

Helped a lot
1. Supported increased research and development of energy sources other than oil 59%
2. Waged a campaign to increase energy efficiency and conservation in autos, homes, offices and factories 51%
3. Given more incentives to oil companies for exploration and recovery operations in places outside the Middle East 47%
4. Strengthened the UN peace keeping capabilities 47%
5. Further increased our government strategic oil reserves 46%
6. Continued the mandatory annual improvement in miles per gallon of U.S. autos discontinued in 1984 41%
7. Not aided Iraq in the eight-year Iran-Iraq war just ended 33%
8. Put a tax on foreign oil of 5 cents per gallon more each year for 10 years, totaling 50 cents at the end of ten years 19%
9. Supported Israel less and the Arab nations more 17%

Talk about issues where the people differ from their leaders: look at the four that ranked first, second, third and sixth! How long before any U.S. leader was willing to give those ideas any mention? It was seven long years later that President Bill Clinton began to offer such proposals. When ATI asked for the one thing that would have helped the most from among these nine proposals, the first above still topped the list, but the seventh came in second. Many people had not thought about the fact that the United States aided Iraq in the Iran-Iraq war until reminded by this question set, and they boosted that item to second place in importance.

While questions like these are very revealing, some may be very misleading when taken out of context or not examined closely.  We close with an inadvertently misleading question illustrating this point, asked in October 1987, three years before the Gulf War, with lead-in (i.e., frame):

“I’m going to read the names of the countries that are thought to have a nuclear capability.  After I read the list, please tell me which one or two of these countries gives you the greatest concern – I mean which ONE OR TWO you feel would be the most likely to explode a nuclear weapon” . . .

Notice how “Iraq” topped the percent mentions list, far above all others:

Iraq 64%
Soviet Union 36%
Israel 18%
China 16%
India 9%
United States 7%
France 3%
Britain 1%
Others/DK 10%

Note that neither Iran nor any other Arab or Muslim country was included in the list offered.  Such additional offerings would have detracted from Iraq mentions, probably substantially.  We simply don’t know if Iraq would have dropped from first place.

Please do not use this question as an example of any mystical prognosticative ability of the people.  There is some wisdom among “the people” that emerges when asking what people want for policy and legislation.  It does not emerge in questions asking the people to make predictions.  Public-interest polling shows from many examples that the public is no better at prediction than the experts.  Neither is very good.

Beyond the Three Historical Persistence Examples. There are other historical events that have stimulated survey findings worth studying. Included are those end-of-an-era questions that the mainstream pollsters dismiss; as Kathy Frankovic, head of CBS polling, told us in Chapter 2, CBS does not do “social research.” Locating Consensus for Democracy explored public opinion on why the Soviet Union collapsed, why it collapsed when it did and not sometime earlier or later (pages 66-69), what lessons the United States learned from the war in Vietnam (p. 108), and similar end-of-era questions. It is important for the elite to understand public reactions, opinions and attitudes that have built up over the long years that mark an era. Those who do not learn from history are destined to repeat old mistakes.

Coming Soon: Chapter 13