19 Public Opinion

Learning Objectives

By the end of this section, you will be able to:

  • Define public opinion and political socialization
  • Explain the process and role of political socialization in the U.S. political system
  • Compare the ways in which citizens learn political information
  • Explain how beliefs and ideology affect the formation of public opinion
  • Explain how information about public opinion is gathered
  • Identify common ways to measure and quantify public opinion
  • Analyze polls to determine whether they accurately measure a population’s opinions
  • Explain the circumstances that lead to public opinion affecting policy
  • Compare the effects of public opinion on government branches and figures
  • Identify situations that cause conflicts in public opinion

The collection of public opinion through polling and interviews is a part of American political culture. Politicians want to know what the public thinks. Campaign managers want to know how citizens will vote. Media members seek to write stories about what Americans want. Every day, polls take the pulse of the people and report the results. And yet we have to wonder: Why do we care what people think?

What Is Public Opinion?

Public opinion is a collection of popular views about something, perhaps a person, a local or national event, or a new idea. For example, each day, a number of polling companies call Americans at random to ask whether they approve or disapprove of the way the president is guiding the economy.[1]

When situations arise internationally, polling companies survey whether citizens support U.S. intervention in places like Syria or Ukraine. These individual opinions are collected together to be analyzed and interpreted for politicians and the media. The analysis examines how the public feels or thinks, so politicians can use the information to make decisions about their future legislative votes, campaign messages, or propaganda.

But where do people’s opinions come from? Most citizens base their political opinions on their beliefs[2] and their attitudes, both of which begin to form in childhood. Beliefs are closely held ideas that support our values and expectations about life and politics. For example, the idea that we are all entitled to equality, liberty, freedom, and privacy is a belief most people in the United States share. We may acquire this belief by growing up in the United States or by having come from a country that did not afford these valued principles to its citizens.

Our attitudes are also affected by our personal beliefs and represent the preferences we form based on our life experiences and values. A person who has suffered racism or bigotry may have a skeptical attitude toward the actions of authority figures, for example.

Over time, our beliefs and our attitudes about people, events, and ideas will become a set of norms, or accepted ideas, about what we may feel should happen in our society or what is right for the government to do in a situation. In this way, attitudes and beliefs form the foundation for opinions.

Political Socialization

At the same time that our beliefs and attitudes are forming during childhood, we are also being socialized; that is, we are learning from many information sources about the society and community in which we live and how we are to behave in it. Political socialization is the process by which we are trained to understand and join a country's political world, and, like most forms of socialization, it starts when we are very young. We may first become aware of politics by watching a parent or guardian vote, for instance, by hearing presidents and candidates speak on television or the Internet, or by seeing adults honor the American flag at an event. As socialization continues, we are introduced to basic political information in school. We recite the Pledge of Allegiance and learn about the Founding Fathers, the Constitution, the two major political parties, the three branches of government, and the economic system.

Political socialization begins early. Hans Enoksen, former prime minister of Greenland, receives a helping hand at the polls from five-year-old Pipaluk Petersen (a). Intelligence Specialist Second Class Tashawbaba McHerrin (b) hands a U.S. flag to a child visiting the USS Enterprise during Fleet Week in Port Everglades, Florida. (credit a: modification of work by Leiff Josefsen; credit b: modification of work by Matthew Keane, U.S. Navy)

By the time we complete school, we have usually acquired the information necessary to form political views and be contributing members of the political system. A young man may realize he prefers the Democratic Party because it supports his views on social programs and education, whereas a young woman may decide she wants to vote for the Republican Party because its platform echoes her beliefs about economic growth and family values.

Accounting for the process of socialization is central to our understanding of public opinion, because the beliefs we acquire early in life are unlikely to change dramatically as we grow older.[3]

Our political ideology, made up of the attitudes and beliefs that help shape our opinions on political theory and policy, is rooted in who we are as individuals. Our ideology may change subtly as we grow older and are introduced to new circumstances or new information, but our underlying beliefs and attitudes are unlikely to change very much, unless we experience events that profoundly affect us. For example, family members of 9/11 victims became more Republican and more political following the terrorist attacks.[4]

Similarly, young adults who attended political protest rallies in the 1960s and 1970s were more likely to participate in politics in general than their peers who had not protested.[5]

If enough beliefs or attitudes are shattered by an event, such as an economic catastrophe or a threat to personal safety, ideology shifts may affect the way we vote. During the 1920s, the Republican Party controlled the House of Representatives and the Senate, sometimes by wide margins.[6]

After the stock market collapsed and the nation slid into the Great Depression, many citizens abandoned the Republican Party. In 1932, voters overwhelmingly chose Democratic candidates for both the presidency and Congress. The Democratic Party gained registered members and the Republican Party lost them.[7]

Citizens’ beliefs had shifted enough to cause the control of Congress to change from one party to the other, and Democrats continued to hold Congress for several decades. Another sea change occurred in Congress in the 1994 elections when the Republican Party took control of both the House and the Senate for the first time in over forty years.

Today, polling agencies have noticed that citizens’ beliefs have become far more polarized, or widely opposed, over the last decade.[8]

To track this polarization, Pew Research conducted a study of Republican and Democratic respondents over a twenty-five-year span. Every few years, Pew would poll respondents, asking them whether they agreed or disagreed with statements. These statements are referred to as “value questions” or “value statements,” because they measure what the respondent values. Examples of statements include “Government regulation of business usually does more harm than good,” “Labor unions are necessary to protect the working person,” and “Society should ensure all have equal opportunity to succeed.” After comparing such answers for twenty-five years, Pew Research found that Republican and Democratic respondents are increasingly answering these questions very differently. This is especially true for questions about the government and politics. In 1987, 58 percent of Democrats and 60 percent of Republicans agreed with the statement that the government controlled too much of our daily lives. In 2012, 47 percent of Democrats and 77 percent of Republicans agreed with the statement. This is an example of polarization, in which members of one party see government from a very different perspective than the members of the other party.[9]

Over the years, Democrats and Republicans have moved further apart in their beliefs about the role of government. In 1987, Republican and Democratic answers to forty-eight value questions differed by an average of only 10 percentage points; by 2012, that difference had grown to 18 points.
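To make the arithmetic behind Pew's polarization measure concrete, the sketch below averages the absolute gap between Democratic and Republican agreement across a set of value statements for two survey years. Only the government-control figures quoted above come from the text; the labor-union numbers are hypothetical placeholders, not Pew's actual data.

```python
# Minimal sketch of averaging the partisan gap across value statements.
# Only the "government controls too much" figures come from the text above;
# the labor-union numbers are hypothetical placeholders.

def average_gap(responses):
    """Average absolute difference, in percentage points, between the share of
    Democrats and the share of Republicans agreeing with each statement."""
    gaps = [abs(dem - rep) for dem, rep in responses.values()]
    return sum(gaps) / len(gaps)

# {statement: (percent of Democrats agreeing, percent of Republicans agreeing)}
values_1987 = {
    "Government controls too much of our daily lives": (58, 60),
    "Labor unions are necessary to protect the working person": (75, 65),  # hypothetical
}
values_2012 = {
    "Government controls too much of our daily lives": (47, 77),
    "Labor unions are necessary to protect the working person": (78, 48),  # hypothetical
}

print(f"1987 average gap: {average_gap(values_1987):.1f} points")
print(f"2012 average gap: {average_gap(values_2012):.1f} points")
```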

Political scientists noted this and other changes in beliefs following the 9/11 terrorist attacks on the United States, including an increase in the level of trust in government[10] and a new willingness to limit liberties for groups or citizens who “[did] not fit into the dominant cultural type.”[11]

According to some scholars, these shifts led partisanship to become more polarized than in previous decades, as more citizens began thinking of themselves as conservative or liberal rather than moderate.[12]

Some believe 9/11 caused a number of citizens to become more conservative overall, although it is hard to judge whether such a shift will be permanent.[13]

Socialization Agents

An agent of political socialization is a source of political information intended to help citizens understand how to act in their political system and how to make decisions on political matters. The information may help a citizen decide how to vote, where to donate money, or how to protest decisions made by the government.

The most prominent agents of socialization are family and school. Other influential agents are social groups, such as religious institutions and friends, and the media. Political socialization is not unique to the United States. Many nations have realized the benefits of socializing their populations. China, for example, stresses nationalism in schools as a way to increase national unity.[14]

In the United States, one benefit of socialization is that our political system enjoys diffuse support, which is support characterized by a high level of stability in politics, acceptance of the government as legitimate, and a common goal of preserving the system.[15]

These traits keep a country steady, even during times of political or social upheaval. But diffuse support does not happen quickly, nor does it occur without the help of agents of political socialization.

For many children, family is the first introduction to politics. Children may hear adult conversations at home and piece together the political messages their parents support. They often know how their parents or grandparents plan to vote, which in turn can socialize them into political behavior such as political party membership.[16]

Children who accompany their parents on Election Day in November are exposed to the act of voting and the concept of civic duty, which is the performance of actions that benefit the country or community. Families active in community projects or politics make children aware of community needs and politics.

Introducing children to these activities has an impact on their future behavior. Both early and recent findings suggest that children adopt some of the political beliefs and attitudes of their parents.[17]

Children of Democratic parents often become registered Democrats, whereas children in Republican households often become Republicans. Children living in households where parents do not display a consistent political party loyalty are less likely to be strong Democrats or strong Republicans, and instead are often independents.[18]

Intergenerational resemblance in partisan orientation, 1992. For each category of respondent, the figures show the percentage who reported that both parents were Democrats, that both parents were Republicans, or that their parents had no consistent partisanship.

Respondent's partisanship    Both parents Democrats    Both parents Republicans    No consistent partisanship
Strong Democrat              31%                       6%                          10%
Weak Democrat                27%                       6%                          14%
Independent Democrat         14%                       6%                          18%
Pure Independent             7%                        7%                          17%
Independent Republican       7%                        16%                         16%
Weak Republican              8%                        32%                         14%
Strong Republican            6%                        27%                         9%
A parent’s political orientation often affects the political orientation of his or her child.

While family provides an informal political education, schools offer a more formal and increasingly important one. The early introduction is often broad and thematic, covering explorers, presidents, victories, and symbols, but generally the lessons are idealized and do not discuss many of the specific problems or controversies connected with historical figures and moments. George Washington’s contributions as our first president are highlighted, for instance, but teachers are unlikely to mention that he owned slaves. Lessons will also try to personalize government and make leaders relatable to children. A teacher might discuss Abraham Lincoln’s childhood struggle to get an education despite the death of his mother and his family’s poverty. Children learn to respect government, follow laws, and obey the requests of police, firefighters, and other first responders. The Pledge of Allegiance becomes a regular part of the school day, as students learn to show respect to our country’s symbols such as the flag and to abstractions such as liberty and equality.

As students progress to higher grades, lessons will cover more detailed information about the history of the United States, its economic system, and the workings of the government. Complex topics such as the legislative process, checks and balances, and domestic policymaking are covered. Introductory economics classes teach about the various ways to build an economy, explaining how the capitalist system works. Many high schools have implemented civic volunteerism requirements as a way to encourage students to participate in their communities. Many offer Advanced Placement classes in U.S. government and history, or other honors-level courses, such as International Baccalaureate or dual-credit courses. These courses can introduce detail and realism, raise controversial topics, and encourage students to make comparisons and think critically about the United States in a global and historical context. College students may choose to pursue their academic study of the U.S. political system further, become active in campus advocacy or rights groups, or run for any of a number of elected positions on campus or even in the local community. Each step of the educational system’s socialization process will ready students to make decisions and be participating members of political society.

We are also socialized outside our homes and schools. When citizens attend religious ceremonies, as 70 percent of Americans in a recent survey claimed,[19] they are socialized to adopt beliefs that affect their politics. Religious leaders often teach on matters of life, death, punishment, and obligation, which translate into views on political issues such as abortion, euthanasia, the death penalty, and military involvement abroad. Political candidates speak at religious centers and institutions in an effort to meet like-minded voters. For example, Senator Ted Cruz (R-TX) announced his 2016 presidential bid at Liberty University, a fundamentalist Christian institution. The choice of venue matched Cruz's conservative and religious ideological leanings and was intended to give him a boost from the faith-based community.

Friends and peers too have a socializing effect on citizens. Communication networks are based on trust and common interests, so when we receive information from friends and neighbors, we often readily accept it because we trust them.[20]

Information transmitted through social media like Facebook is also likely to have a socializing effect. Friends “like” articles and information, sharing their political beliefs and information with one another.

Media—newspapers, television, radio, and the Internet—also socialize citizens through the information they provide. For a long time, the media served as gatekeepers of our information, creating reality by choosing what to present. If the media did not cover an issue or event, it was as if it did not exist. With the rise of the Internet and social media, however, traditional media have become less powerful agents of this kind of socialization.

Another way the media socializes audiences is through framing, or choosing the way information is presented. Framing can affect the way an event or story is perceived. Candidates described with negative adjectives, for instance, may do poorly on Election Day. Consider the recent demonstrations over the deaths of Michael Brown in Ferguson, Missouri, and of Freddie Gray in Baltimore, Maryland. Both deaths were caused by police actions against unarmed African American men. Brown was shot to death by an officer on August 9, 2014. Gray died from spinal injuries sustained in transport to jail in April 2015. Following each death, family, friends, and sympathizers protested the police actions as excessive and unfair. While some television stations framed the demonstrations as riots and looting, other stations framed them as protests and fights against corruption. The demonstrations contained both riot and protest, but individuals’ perceptions were affected by the framing chosen by their preferred information sources.[21]

Images of protestors from the Baltimore “uprising” (a) and from the Baltimore “riots” (b) of April 25, 2015. (credit a: modification of work by Pete Santilli Live Stream/YouTube; credit b: modification of work by “Newzulu”/YouTube)

Finally, media information presented as fact can contain covert or overt political material. Covert content is political information provided under the pretense that it is neutral. A magazine might run a story on climate change by interviewing representatives of only one side of the policy debate and downplaying the opposing view, all without acknowledging the one-sided nature of its coverage. In contrast, when the writer or publication makes clear to the reader or viewer that the information offers only one side of the political debate, the political message is overt content. Political commentators like Rush Limbaugh and publications like Mother Jones openly state their ideological viewpoints. While such overt political content may be offensive or annoying to a reader or viewer, all are offered the choice whether to be exposed to the material.

Socialization and Ideology

The socialization process leaves citizens with attitudes and beliefs that create a personal ideology. Ideologies depend on attitudes and beliefs, and on the way we prioritize each belief over the others. Most citizens hold a great number of beliefs and attitudes about government action. Many think government should provide for the common defense, in the form of a national military. They also argue that government should provide services to its citizens in the form of free education, unemployment benefits, and assistance for the poor.

When asked how to divide the national budget, Americans reveal priorities that divide public opinion. Should we have a smaller military and larger social benefits, or a larger military budget and limited social benefits? This is the guns versus butter debate, which assumes that governments have a finite amount of money and must choose whether to spend a larger part on the military or on social programs. The choice forces citizens into two opposing groups.

Divisions like these appear throughout public opinion. Assume we have four different people named Garcia, Chin, Smith, and Dupree. Garcia may believe that the United States should provide a free education for every citizen all the way through college, whereas Chin may believe education should be free only through high school. Smith might believe children should be covered by health insurance at the government’s expense, whereas Dupree believes all citizens should be covered. In the end, the way we prioritize our beliefs and what we decide is most important to us determines whether we are on the liberal or conservative end of the political spectrum, or somewhere in between.

Ideologies and the Ideological Spectrum

One useful way to look at ideologies is to place them on a spectrum that visually compares them based on what they prioritize. Liberal ideologies are traditionally put on the left and conservative ideologies on the right. (This placement dates from the French Revolution and is why liberals are called left-wing and conservatives are called right-wing.) The ideologies at the ends of the spectrum are the most extreme; those in the middle are moderate. Thus, people who identify with left- or right-wing ideologies hold beliefs toward the respective ends of the spectrum, while moderates balance the beliefs at the two extremes.

In the United States, ideologies on the right side of the spectrum prioritize government control over personal freedoms. They range from fascism to authoritarianism to conservatism. Ideologies on the left side of the spectrum prioritize equality and range from communism to socialism to liberalism. Moderate ideologies fall in the middle and try to balance the two extremes.

People who espouse left-wing ideologies in the United States identify with beliefs on the left side of the spectrum that prioritize equality, whereas those on the right side of the spectrum emphasize control.

Fascism promotes total control of the country by the ruling party or political leader. This form of government will run the economy, the military, society, and culture, and often tries to control the private lives of its citizens. Authoritarian leaders control the politics, military, and government of a country, and often the economy as well.

Conservative governments attempt to hold tight to the traditions of a nation by balancing individual rights with the good of the community. Traditional conservatism supports the authority of the monarchy and the church, believing government provides the rule of law and maintains a society that is safe and organized. Modern conservatism differs from traditional conservatism in assuming elected government will guard individual liberties and provide laws. Modern conservatives also prefer a smaller government that stays out of the economy, allowing the market and business to determine prices, wages, and supply.

Classical liberalism believes in individual liberties and rights. It is based on the idea of free will, that people are born equal with the right to make decisions without government intervention. It views government with suspicion, since history includes many examples of monarchs and leaders who limited citizens’ rights. Today, modern liberalism focuses on equality and supports government intervention in society and the economy if it promotes equality. Liberals expect government to provide basic social and educational programs to help everyone have a chance to succeed.

Under socialism, the government uses its authority to promote social and economic equality within the country. Socialists believe government should provide everyone with expanded services and public programs, such as health care, subsidized housing and groceries, childhood education, and inexpensive college tuition. Socialism sees the government as a way to ensure all citizens receive both equal opportunities and equal outcomes. Citizens with more wealth are expected to contribute more to the state’s revenue through higher taxes that pay for services provided to all. Socialist countries are also likely to have higher minimum wages than non-socialist countries.

In theory, communism promotes common ownership of all property, means of production, and materials. This means that the government, or states, should own the property, farms, manufacturing, and businesses. By controlling these aspects of the economy, Communist governments can prevent the exploitation of workers while creating an equal society. Extreme inequality of income, in which some citizens earn millions of dollars a year and other citizens merely hundreds, is prevented by instituting wage controls or by abandoning currency altogether. Communism presents a problem, however, because the practice differs from the theory. The theory assumes the move to communism is supported and led by the proletariat, or the workers and citizens of a country.[22]

Human rights violations by governments of actual Communist countries make it appear the movement has been driven not by the people, but by leadership.

We can characterize economic variations on these ideologies by adding another dimension to the ideological spectrum above: whether we prefer that government control the state economy or stay out of it. The extremes are a command economy, such as existed in the former Soviet Union, and a laissez-faire (“leave it alone”) economy, such as in the United States prior to the 1929 market crash, when banks and corporations were largely unregulated. Communism prioritizes control of both politics and economy, while libertarianism is its near-opposite. Libertarians believe in individual rights and limited government intervention in private life and personal economic decisions. Government exists to maintain freedom and life, so its main function is to ensure domestic peace and national defense. Libertarians also believe the national government should maintain a military in case of international threats, but that it should not engage in setting minimum wages or ruling in private matters, like same-sex marriage or the right to abortion.[23]

The point where a person's ideology falls on the spectrum gives us some insight into his or her opinions. Though people can sometimes be liberal on one issue and conservative on another, a citizen to the left of liberalism, near socialism, would likely be happy with the passage of the Raise the Wage Act of 2015, which would eventually increase the minimum wage from $7.25 to $12 an hour. A citizen falling near conservatism would believe the Patriot Act is reasonable, because it allows the FBI and other government agencies to collect data on citizens' phone calls and social media communications to monitor potential terrorism. A citizen to the right of the spectrum is more likely to favor cutting social services like unemployment benefits and Medicaid.

Public opinion on a given issue may differ dramatically depending on the political ideology or party of those polled.

 

Taking a Poll

Most public opinion polls aim to be accurate, but this is not an easy task. Political polling is a science. From design to implementation, polls are complex and require careful planning and execution. Mitt Romney's campaign polls are only one recent example of problems stemming from polling methods. Our history is littered with examples of polling companies producing results that incorrectly predicted public opinion due to poor survey design or bad polling methods.

In 1936, Literary Digest continued its tradition of polling citizens to determine who would win the presidential election. The magazine sent opinion cards to people who had a subscription, a phone, or a car registration. Only some of the recipients sent back their cards. The result? Alf Landon was predicted to win 55.4 percent of the popular vote; in the end, he received only 38 percent.[24]

Franklin D. Roosevelt won another term, but the story demonstrates the need to be scientific in conducting polls.

A few years later, Thomas Dewey lost the 1948 presidential election to Harry Truman, despite polls showing Dewey far ahead and Truman destined to lose. More recently, John Zogby, of Zogby Analytics, went public with his prediction that John Kerry would win the presidency against incumbent president George W. Bush in 2004, only to be proven wrong on election night. These are just a few cases, but each offers a different lesson. In 1948, pollsters did not poll up to the day of the election, relying on old numbers that did not include a late shift in voter opinion. Zogby’s polls did not represent likely voters and incorrectly predicted who would vote and for whom. These examples reinforce the need to use scientific methods when conducting polls, and to be cautious when reporting the results.

Polling process errors can lead to incorrect predictions. On November 3, the day after the 1948 presidential election, a jubilant Harry S. Truman triumphantly displays the inaccurate headline of the Chicago Daily Tribune announcing Thomas Dewey’s supposed victory (credit: David Erickson/Flickr).

Most polling companies employ statisticians and methodologists trained in conducting polls and analyzing data. A number of criteria must be met if a poll is to be completed scientifically. First, the methodologists identify the desired population, or group, of respondents they want to interview. For example, if the goal is to project who will win the presidency, citizens from across the United States should be interviewed. If we wish to understand how voters in Colorado will vote on a proposition, the population of respondents should only be Colorado residents. When surveying on elections or policy matters, many polling houses will interview only respondents who have a history of voting in previous elections, because these voters are more likely to go to the polls on Election Day. Politicians are more likely to be influenced by the opinions of proven voters than of everyday citizens. Once the desired population has been identified, the researchers will begin to build a sample that is both random and representative.

A random sample consists of a limited number of people from the overall population, selected in such a way that each has an equal chance of being chosen. In the early years of polling, telephone numbers of potential respondents were arbitrarily selected from various areas to avoid regional bias. While landline phones allow polls to try to ensure randomness, the increasing use of cell phones makes this process difficult. Cell phones, and their numbers, are portable and move with the owner. To prevent errors, polls that include known cellular numbers may screen for zip codes and other geographic indicators to prevent regional bias. A representative sample consists of a group whose demographic distribution is similar to that of the overall population. For example, nearly 51 percent of the U.S. population is female.[25]

To match this demographic distribution of women, any poll intended to measure what most Americans think about an issue should survey a sample containing slightly more women than men.
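As a rough illustration of the difference between a random and a representative sample, the following sketch draws a simple random sample from an invented population and then checks whether the sample's gender split roughly matches the population's 51 percent female share. The population and sample size are hypothetical, chosen only for illustration.

```python
import random

# Hypothetical population of 100,000 adults, roughly 51 percent female,
# invented purely to illustrate random versus representative sampling.
population = ["female"] * 51_000 + ["male"] * 49_000

# A simple random sample: every member of the population has an equal
# chance of being chosen.
sample = random.sample(population, k=1000)

# Check representativeness: a sufficiently large random sample should come
# close to the population's 51/49 split; pollsters weight or adjust the
# sample when it does not.
share_female = sample.count("female") / len(sample)
print(f"Female share of sample: {share_female:.1%} (population: 51.0%)")
```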

Pollsters try to interview a set number of citizens to create a reasonable sample of the population. This sample size will vary based on the size of the population being interviewed and the level of accuracy the pollster wishes to reach. If the poll is trying to reveal the opinion of a state or group, such as the opinion of Wisconsin voters about changes to the education system, the sample size may vary from five hundred to one thousand respondents and produce results with relatively low error. For a poll to predict what Americans think nationally, such as about the White House’s policy on greenhouse gases, the sample size should be larger.

The sample size varies with each organization and institution due to the way the data are processed. Gallup often interviews only five hundred respondents, while Rasmussen Reports and Pew Research often interview one thousand to fifteen hundred respondents.[26] Academic organizations, like the American National Election Studies, interview over twenty-five hundred respondents.[27]

A larger sample makes a poll more accurate, because it will have relatively fewer unusual responses and be more representative of the actual population. Pollsters do not interview more respondents than necessary, however. Increasing the number of respondents will increase the accuracy of the poll, but once the poll has enough respondents to be representative, increases in accuracy become minor and are not cost-effective.[28]

When the sample represents the actual population, the poll's accuracy will be reflected in a lower margin of error. The margin of error is a number that states how far the poll results may be from the actual opinion of the total population of citizens. The lower the margin of error, the more predictive the poll. Large margins of error are problematic. For example, if a poll that claims Hillary Clinton is likely to win 30 percent of the vote in the 2016 New York Democratic primary has a margin of error of +/-6, it tells us that Clinton may receive as little as 24 percent of the vote (30 – 6) or as much as 36 percent (30 + 6). A lower margin of error is clearly desirable because it gives us the most precise picture of what people actually think or will do.
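The margin of error a polling company reports is tied directly to its sample size. The chapter does not give the formula, but a common approximation at a 95 percent confidence level is margin of error = z * sqrt(p(1 - p) / n), with z = 1.96 and p set to 0.5 as the worst case. A minimal sketch using that assumption shows both where the +/- figure comes from and why, as noted above, adding respondents beyond a certain point yields only small gains.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Approximate margin of error, in percentage points, for a sample
    proportion at a 95 percent confidence level; proportion=0.5 is the
    worst case, which is what pollsters typically report."""
    return 100 * z * math.sqrt(proportion * (1 - proportion) / sample_size)

for n in (500, 1000, 1500, 2500):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1f} points")

# Diminishing returns: moving from 500 to 1,000 respondents cuts the margin
# of error by about 1.3 points, while moving from 1,500 to 2,500 saves only
# about half a point.
```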

With many polls out there, how do you know whether a poll is a good poll and accurately predicts what a group believes? First, look for the numbers. Polling companies include the margin of error, polling dates, number of respondents, and population sampled to show their scientific reliability. Was the poll recently taken? Is the question clear and unbiased? Was the number of respondents high enough to predict the population? Is the margin of error small? It is worth looking for this valuable information when you interpret poll results. While most polling agencies strive to create quality polls, other organizations want fast results and may prioritize immediate numbers over random and representative samples. For example, instant polling is often used by news networks to quickly assess how well candidates are performing in a debate.

Technology and Polling

The days of randomly walking neighborhoods and phone book cold-calling to interview random citizens are gone. Scientific polling has made interviewing more deliberate. Historically, many polls were conducted in person, yet this was expensive and yielded problematic results.

In some situations and countries, face-to-face interviewing still exists. Exit polls, focus groups, and some public opinion polls occur in which the interviewer and respondents communicate in person. Exit polls are conducted in person, with an interviewer standing near a polling location and requesting information as voters leave the polls. Focus groups often select random respondents from local shopping places or pre-select respondents from Internet or phone surveys. The respondents show up to observe or discuss topics and are then surveyed.

On November 6, 2012, the Connect2Mason.com team conducts exit surveys at the polls on the George Mason University campus. (credit: Mason Votes/Flickr).

When organizations like Gallup or Roper decide to conduct face-to-face public opinion polls, however, it is a time-consuming and expensive process. The organization must randomly select households or polling locations within neighborhoods, making sure there is a representative household or location in each neighborhood.[29]

Then it must survey a representative number of neighborhoods from within a city. At a polling location, interviewers may have directions on how to randomly select voters of varied demographics. If the interviewer is looking to interview a person in a home, multiple attempts are made to reach a respondent if he or she does not answer. Gallup conducts face-to-face interviews in areas where less than 80 percent of the households in an area have phones, because it gives a more representative sample.[30]

News networks use face-to-face techniques to conduct exit polls on Election Day.

Most polling now occurs over the phone or through the Internet. Some companies, like Harris Interactive, maintain directories that include registered voters, consumers, or previously interviewed respondents. If pollsters need to interview a particular population, such as political party members or retirees of a specific pension fund, the company may purchase or access a list of phone numbers for that group. Other organizations, like Gallup, use random-digit-dialing (RDD), in which a computer randomly generates phone numbers with desired area codes. Using RDD allows the pollsters to include respondents who may have unlisted and cellular numbers.[31]

Questions about ZIP code or demographics may be asked early in the poll to allow the pollsters to determine which interviews to continue and which to end early.
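Random-digit dialing can be illustrated with a short sketch that builds ten-digit numbers from a desired area code plus randomly generated digits. The area codes here are arbitrary examples, and real RDD systems add safeguards this sketch omits, such as screening out invalid or unassigned exchanges and business lines.

```python
import random

def random_phone_number(area_code):
    """Build one random number in the given area code. Real RDD systems also
    screen out invalid or unassigned exchanges before dialing."""
    exchange = random.randint(200, 999)  # U.S. exchanges do not start with 0 or 1
    line = random.randint(0, 9999)
    return f"({area_code}) {exchange:03d}-{line:04d}"

# Arbitrary example area codes for a hypothetical statewide poll.
area_codes = ["303", "719", "970"]
dial_list = [random_phone_number(random.choice(area_codes)) for _ in range(5)]
print(dial_list)
```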

The interviewing process is also partly computerized. Many polls are now administered through computer-assisted telephone interviewing (CATI) or through robo-polls. A CATI system calls random telephone numbers until it reaches a live person and then connects the potential respondent with a trained interviewer. As the respondent provides answers, the interviewer enters them directly into the computer program. These polls may have some errors if the interviewer enters an incorrect answer. The polls may also have reliability issues if the interviewer goes off the script or answers respondents’ questions.

Robo-polls are entirely computerized. A computer dials random or pre-programmed numbers and a prerecorded electronic voice administers the survey. The respondent listens to the question and possible answers and then presses numbers on the phone to enter responses. Proponents argue that respondents are more honest without an interviewer. However, these polls can suffer from error if the respondent does not use the correct keypad number to answer a question or misunderstands the question. Robo-polls may also have lower response rates, because there is no live person to persuade the respondent to answer. There is also no way to prevent children from answering the survey. Lastly, the Telephone Consumer Protection Act (1991) made automated calls to cell phones illegal, which leaves a large population of potential respondents inaccessible to robo-polls.[32]

The latest challenges in telephone polling come from the shift in phone usage. A growing number of citizens, especially younger citizens, use only cell phones, and their phone numbers are no longer based on geographic areas. The millennial generation (currently aged 18–33) is also more likely to text than to answer an unknown call, so it is harder to interview this demographic group. Polling companies now must reach out to potential respondents using email and social media to ensure they have a representative group of respondents.

Yet, the technology required to move to the Internet and handheld devices presents further problems. Web surveys must be designed to run on a variety of browsers and handheld devices. Online polls cannot detect whether a person with multiple email accounts or social media profiles answers the same poll multiple times, nor can they tell when a respondent misrepresents demographics in the poll or on a social media profile used in a poll. These factors also make it more difficult to calculate response rates or achieve a representative sample. Still, many companies are working through these difficulties, because it is necessary to reach younger demographics in order to provide accurate data.[33]

Problems in Polling

For a number of reasons, polls may not produce accurate results. Two important factors a polling company faces are timing and human nature. Unless you conduct an exit poll during an election and interviewers stand at the polling places on Election Day to ask voters how they voted, there is always the possibility the poll results will be wrong. The simplest reason is that if there is time between the poll and Election Day, a citizen might change his or her mind, lie, or choose not to vote at all. Timing is very important during elections, because surprise events can shift enough opinions to change an election result. Of course, there are many other reasons why polls, even those not time-bound by elections or events, may be inaccurate.

Polls begin with a list of carefully written questions. The questions need to be free of framing, meaning they should not be worded to lead respondents to a particular answer. For example, take two questions about presidential approval. Question 1 might ask, “Given the high unemployment rate, do you approve of the job President Obama is doing?” Question 2 might ask, “Do you approve of the job President Obama is doing?” Both questions want to know how respondents perceive the president’s success, but the first question sets up a frame for the respondent to believe the economy is doing poorly before answering. This is likely to make the respondent’s answer more negative. Similarly, the way we refer to an issue or concept can affect the way listeners perceive it. The phrase “estate tax” did not rally voters to protest the inheritance tax, but the phrase “death tax” sparked debate about whether taxing estates imposed a double tax on income.[34]

Many polling companies try to avoid leading questions, which lead respondents to select a predetermined answer, because they want to know what people really think. Some polls, however, have a different goal. Their questions are written to guarantee a specific outcome, perhaps to help a candidate get press coverage or gain momentum. These are called push polls. In the 2016 presidential primary race, MoveOn tried to encourage Senator Elizabeth Warren (D-MA) to enter the race for the Democratic nomination. Its poll used leading questions for what it termed an “informed ballot,” and, to show that Warren would do better than Hillary Clinton, it included ten positive statements about Warren before asking whether the respondent would vote for Clinton or Warren.[35]

The poll results were blasted by some in the media for being fake.

Senator Elizabeth Warren (a) poses with Massachusetts representatives Joseph P. Kennedy III (left) and Barney Frank (right) at the 2012 Boston Pride Parade. Senator Hillary Clinton (b) during her 2008 presidential campaign in Concord, New Hampshire (credit a: modification of work by “ElizabethForMA”/Flickr; credit b: modification of work by Marc Nozell)

Sometimes lack of knowledge affects the results of a poll. Respondents may not know that much about the polling topic but are unwilling to say, “I don’t know.” For this reason, surveys may contain a quiz with questions that determine whether the respondent knows enough about the situation to answer survey questions accurately. A poll to discover whether citizens support changes to the Affordable Care Act or Medicaid might first ask who these programs serve and how they are funded. Polls about territory seizure by the Islamic State (or ISIS) or Russia’s aid to rebels in Ukraine may include a set of questions to determine whether the respondent reads or hears any international news. Respondents who cannot answer correctly may be excluded from the poll, or their answers may be separated from the others.

People may also feel social pressure to answer questions in accordance with the norms of their area or peers.[36]

If they are embarrassed to admit how they would vote, they may lie to the interviewer. In the 1982 governor’s race in California, Tom Bradley was far ahead in the polls, yet on Election Day he lost. This result was nicknamed the Bradley effect, on the theory that voters who answered the poll were afraid to admit they would not vote for a black man because it would appear politically incorrect and racist.

In 2010, Proposition 19, which would have legalized and taxed marijuana in California, met with a new version of the Bradley effect. Nate Silver, a political blogger, noticed that polls on the marijuana proposition were inconsistent, sometimes showing the proposition would pass and other times showing it would fail. Silver compared the polls and the way they were administered, because some polling companies used an interviewer and some used robo-calling. He then proposed that voters speaking with a live interviewer gave the socially acceptable answer that they would vote against Proposition 19, while voters interviewed by a computer felt free to be honest.[37]

While this theory has not been proven, it is consistent with other findings that interviewer demographics can affect respondents’ answers. African Americans, for example, may give different responses to interviewers who are white than to interviewers who are black.[38]

Net support for marijuana legalization under Proposition 19 differed by poll type: live-operator polls showed slight net opposition (Reuters/Ipsos about –2, PPIC about –1, Field Poll about –4), while robo-polls showed net support (SurveyUSA in April about +14, SurveyUSA in July about +10, PPP about +16).
In 2010, polls about California’s Proposition 19 were inconsistent, depending on how they were administered, with voters who spoke with a live interviewer declaring they would vote against Proposition 19 and voters who were interviewed via a computer declaring support for the legislation. The measure was defeated on Election Day.
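The kind of comparison Silver made can be sketched by grouping poll results according to how they were administered and averaging each group. The figures below are the approximate net-support numbers from the chart above.

```python
# Group Proposition 19 poll results by administration mode and compare the
# average net support; figures are approximate values from the chart above.
polls = [
    ("Reuters/Ipsos", "live operator", -2),
    ("PPIC", "live operator", -1),
    ("Field Poll", "live operator", -4),
    ("SurveyUSA (April)", "robo-poll", 14),
    ("SurveyUSA (July)", "robo-poll", 10),
    ("PPP", "robo-poll", 16),
]

for mode in ("live operator", "robo-poll"):
    results = [net for _, poll_mode, net in polls if poll_mode == mode]
    average = sum(results) / len(results)
    print(f"{mode}: average net support {average:+.1f} points")
```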

Push Polls

One of the newer byproducts of polling is the creation of push polls, which consist of political campaign information presented as polls. A respondent is called and asked a series of questions about his or her position or candidate selections. If the respondent’s answers are for the wrong candidate, the next questions will give negative information about the candidate in an effort to change the voter’s mind.

In 2014, a fracking ban was placed on the ballot in a town in Texas. Fracking, which includes injecting pressurized water into drilled wells, helps energy companies collect additional gas from the earth. It is controversial, with opponents arguing it causes water pollution, sound pollution, and earthquakes. During the campaign, a number of local voters received a call that polled them on how they planned to vote on the proposed fracking ban.[39]

If the respondent was unsure about or planned to vote for the ban, the questions shifted to provide negative information about the organizations proposing the ban. One question asked, “If you knew the following, would it change your vote . . . two Texas railroad commissioners, the state agency that oversees oil and gas in Texas, have raised concerns about Russia’s involvement in the anti-fracking efforts in the U.S.?” The question played upon voter fears about Russia and international instability in order to convince them to vote against the fracking ban.

These techniques are not limited to issue votes; candidates have used them to attack their opponents. The hope is that voters will think the poll is legitimate and believe the negative information provided by a “neutral” source.

Public Opinion and Elections

Elections are the events on which opinion polls have the greatest measured effect. Public opinion polls do more than show how we feel on issues or project who might win an election. The media use public opinion polls to decide which candidates are ahead of the others and therefore of interest to voters and worthy of interview. From the moment President Obama was inaugurated for his second term, speculation began about who would run in the 2016 presidential election. Within a year, potential candidates were being ranked and compared by a number of newspapers.[40]

The speculation included favorability polls on Hillary Clinton, which measured how positively voters felt about her as a candidate. The media deemed these polls important because they showed Clinton as the frontrunner for the Democrats in the next election.[41]

During presidential primary season, we see examples of the bandwagon effect, in which the media pays more attention to candidates who poll well during the fall and the first few primaries. Bill Clinton was nicknamed the “Comeback Kid” in 1992, after he placed second in the New Hampshire primary despite accusations of adultery with Gennifer Flowers. The media’s attention on Clinton gave him the momentum to make it through the rest of the primary season, ultimately winning the Democratic nomination and the presidency.

Polling is also at the heart of horserace coverage, in which, just like an announcer at the racetrack, the media calls out every candidate’s move throughout the presidential campaign. Horserace coverage can be neutral, positive, or negative, depending upon what polls or facts are covered. During the 2012 presidential election, the Pew Research Center found that both Mitt Romney and President Obama received more negative than positive horserace coverage, with Romney’s growing more negative as he fell in the polls.[42]

Horserace coverage is often criticized for its lack of depth; the stories skip over the candidates’ issue positions, voting histories, and other facts that would help voters make an informed decision. Yet, horserace coverage is popular because the public is always interested in who will win, and it often makes up a third or more of news stories about the election.[43]

Exit polls, taken the day of the election, are the last election polls conducted by the media. Announced results of these surveys can deter voters from going to the polls if they believe the election has already been decided.

In 2016, Republican presidential candidate Donald Trump became the center of the media’s horserace coverage. As the field winnowed from over twenty candidates down to three, the media incessantly compared everyone else in the field to Trump. (credit: Max Goldberg)

Public opinion polls also affect how much money candidates receive in campaign donations. Donors assume public opinion polls are accurate enough to determine who the top two to three primary candidates will be, and they give money to those who do well. Candidates who poll at the bottom will have a hard time collecting donations, increasing the odds that they will continue to do poorly. This was apparent in the run-up to the 2016 presidential election. Bernie Sanders, Hillary Clinton, and Martin O’Malley each campaigned in the hope of becoming the Democratic presidential nominee. In June 2015, 75 percent of Democrats likely to vote in their state primaries said they would vote for Clinton, while 15 percent of those polled said they would vote for Sanders. Only 2 percent said they would vote for O’Malley.[44]

During this same period, Clinton raised $47 million in campaign donations, Sanders raised $15 million, and O’Malley raised $2 million.[45]

By September 2015, 23 percent of likely Democratic voters said they would vote for Sanders,[46] and his summer fundraising total increased accordingly.[47]

Presidents running for reelection also must perform well in public opinion polls, and being in office may not provide an automatic advantage. Americans often think about both the future and the past when they decide which candidate to support.[48]

They have three years of past information about the sitting president, so they can better predict what will happen if the incumbent is reelected. That makes it difficult for the president to mislead the electorate. Voters also want a future that is prosperous. Not only should the economy look good, but citizens want to know they will do well in that economy.[49]

For this reason, daily public approval polls sometimes act as both a referendum on the president and a predictor of success.

Public Opinion and Government

The relationship between public opinion polls and government action is murkier than that between polls and elections. Like the news media and campaign staffers, members of the three branches of government are aware of public opinion. But do politicians use public opinion polls to guide their decisions and actions?

The short answer is “sometimes.” The public is not perfectly informed about politics, so politicians realize public opinion may not always be the right choice. Yet many political studies, from The American Voter in the 1960s to The American Voter Revisited in the 2000s, have found that voters behave rationally despite having limited information. Individual citizens do not take the time to become fully informed about all aspects of politics, yet their collective behavior and the opinions they hold as a group make sense. They appear to be informed just enough, using preferences like their political ideology and party membership, to make decisions and hold politicians accountable during an election year.

Overall, the collective public opinion of a country changes over time, even if party membership or ideology does not change dramatically. As James Stimson’s prominent study found, the public’s mood, or collective opinion, can become more or less liberal from decade to decade. While the initial study on public mood revealed that the economy has a profound effect on American opinion,[50]

further studies have examined whether public opinion, and its relative liberalness, in turn affects politicians and institutions. This does not mean that opinion never affects policy directly; rather, collective opinion also shapes politicians' decisions on policy.[51]

Individually, of course, politicians cannot predict what will happen in the future or who will oppose them in the next few elections. They can look to see where the public is in agreement as a body. If public mood changes, the politicians may change positions to match the public mood. The more savvy politicians look carefully to recognize when shifts occur. When the public is more or less liberal, the politicians may make slight adjustments to their behavior to match. Politicians who frequently seek to win office, like House members, will pay attention to the long- and short-term changes in opinion. By doing this, they will be less likely to lose on Election Day.[52] Presidents and justices, on the other hand, present a more complex picture.

Public opinion of the president is different from public opinion of Congress. Congress is an institution of 535 members, and opinion polls look at both the institution and its individual members. The president is both a person and the head of an institution. The media pays close attention to any president’s actions, and the public is generally well informed and aware of the office and its current occupant. Perhaps this is why public opinion has an inconsistent effect on presidents’ decisions. As early as Franklin D. Roosevelt’s administration in the 1930s, presidents have regularly polled the public, and since Richard Nixon’s term (1969–1974), they have admitted to using polling as part of the decision-making process.

Presidential responsiveness to public opinion has been measured in a number of ways, each of which tells us something about the effect of opinion. One study examined whether presidents responded to public opinion by looking at how often they filed amicus briefs asking the court to affirm or reverse cases. It found that the public’s liberal (or non-liberal) mood had an effect, causing presidents to pursue and file briefs in different cases.[53]

But another author found that the public’s level of liberalness is ignored when conservative presidents, such as Ronald Reagan or George W. Bush, are elected and try to lead. In one example, the positions of our five most recent presidents varied from liberal to non-liberal, while public sentiment stayed consistently liberal.[54] Even though the public supported liberal approaches to policy, presidential action ranged from liberal to non-liberal. Overall, it appears that presidents try to move public opinion toward their own positions rather than moving themselves toward the public’s opinion.[55]

If presidents have enough public support, they use their level of public approval indirectly as a way to get their agenda passed. Immediately following Inauguration Day, for example, the president enjoys the highest level of public support for implementing campaign promises. This is especially true if the president has a mandate, meaning the president won more than half the popular vote. Barack Obama’s 2008 victory, with 52.9 percent of the popular vote and 67.8 percent of the Electoral College vote, is an example of such a mandate.[56]

When presidents have high levels of public approval, they are likely to act quickly and try to accomplish personal policy goals. They can use their position and power to focus media attention on an issue. This is sometimes referred to as the bully pulpit approach. The term “bully pulpit” was coined by President Theodore Roosevelt, who believed the presidency commanded the attention of the media and could be used to appeal directly to the people. Roosevelt used his position to convince voters to pressure Congress to pass laws.

Increasing partisanship has made it more difficult for presidents to use their power to get their own preferred issues through Congress, however, especially when the president’s party is in the minority in Congress.[57] For this reason, modern presidents may find more success in using their popularity to increase media and social media attention on an issue. Even if the president is not the reason for congressional action, he or she can generate the attention that leads to change.[58]

Presidents may also use their popularity to ask the people to act. In October 2015, following a shooting at Umpqua Community College in Oregon, President Obama gave a short speech from the West Wing of the White House. After offering his condolences and prayers to the community, he remarked that prayers and condolences were no longer enough, and he called on citizens to push Congress for a change in gun control laws. President Obama had proposed gun control reform following the 2012 shooting at Sandy Hook Elementary in Connecticut, but it did not pass Congress. This time, the president asked citizens to use gun control as a voting issue and push for reform via the ballot box.

Figure: President Obama gives a press briefing in the White House. In the wake of a shooting at Umpqua Community College in Oregon in October 2015, President Obama called for a change in gun control laws (credit: The White House).

In some instances, presidents may appear to consider public opinion directly before acting or making decisions. In 2013, President Obama announced that he was considering a military strike on Syria in reaction to the Syrian government’s illegal use of sarin gas on its own citizens. Despite agreeing that this chemical attack on the Damascus suburbs was a war crime, the public was against U.S. involvement: 48 percent of respondents said they opposed airstrikes, and only 29 percent were in favor. Democrats were especially opposed to military intervention.[59] President Obama changed his mind and ultimately allowed Russian president Vladimir Putin to negotiate Syria’s surrender of its chemical weapons.

However, further examples show that presidents do not consistently listen to public opinion. After taking office in 2009, President Obama did not close the Guantanamo Bay prison, even though his proposal to do so had garnered support during the 2008 election. President Bush, despite growing public disapproval of the war in Iraq, did not end military involvement there after 2006. And President Bill Clinton, whose White House pollsters were infamous for polling on everything, sometimes ignored the public if circumstances warranted.[60] In 1995, despite public opposition, Clinton guaranteed loans for the Mexican government to help the country avoid financial insolvency. He followed this decision with many speeches to help the American public understand the importance of stabilizing Mexico’s economy. Individual examples like these make it difficult to persuasively identify the direct effects of public opinion on the presidency.

While presidents can serve at most two terms, members of Congress can serve as long as the public returns them to office. We might think that, for this reason, public opinion is important to representatives and senators and that their behavior, such as their votes on domestic programs or funding, will change to match the public’s expectations. In a more liberal time, the public may expect to see more social programs. In a non-liberal time, the public mood may favor austerity, or decreased government spending on programs. Failure to recognize shifts in public opinion can cost a politician the next election.[61]

House of Representatives members, with a two-year term, have a more difficult time recovering from decisions that anger local voters. And because most representatives continually fundraise, unpopular decisions can hurt their campaign donations. For these reasons, it seems representatives should be susceptible to polling pressure. Yet one study, by James Stimson, found that the public mood does not directly affect elections, and shifts in public opinion do not predict whether a House member will win or lose. These elections are affected instead by the president on the ticket, presidential popularity (or lack thereof) during a midterm election, and the perks of incumbency, such as name recognition and media coverage. In fact, a later study confirmed that the incumbency effect is highly predictive of a win, while public opinion is not.[62]

In spite of this, we still see policy shifts in Congress, often matching the policy preferences of the public. When the shifts happen within the House, they are measured by the way members vote. The study’s authors hypothesize that House members alter their votes to match the public mood, perhaps in an effort to strengthen their electoral chances.[63]

The Senate is quite different from the House. Senators do not enjoy the same benefits of incumbency, and they win reelection at lower rates than House members. Yet they do have one advantage over their colleagues in the House: senators hold six-year terms, which gives them time to engage in fence-mending and repair the damage from unpopular decisions. In the Senate, Stimson’s study confirmed that opinion affects a senator’s chances at reelection, even though it did not for House members. Specifically, the study shows that when public opinion shifts, fewer senators win reelection. Thus, when the public as a whole becomes more or less liberal, new senators are elected. Rather than the sitting senators shifting their policy preferences and voting differently, it is the new senators who change the policy direction of the Senate.[64]

Beyond voter polls, congressional representatives are also very interested in polls that reveal the wishes of interest groups and businesses. If AARP, one of the largest and most active groups of voters in the United States, is unhappy with a bill, members of the relevant congressional committees will take that response into consideration. If the pharmaceutical or oil industry is unhappy with a new patent or tax policy, its members’ opinions will have some effect on representatives’ decisions, since these industries contribute heavily to election campaigns.

There is some disagreement about whether the Supreme Court follows public opinion or shapes it. The lifetime tenure the justices enjoy was designed to remove everyday politics from their decisions, protect them from swings in political partisanship, and allow them to choose whether and when to listen to public opinion. More often than not, the public is unaware of the Supreme Court’s decisions and opinions. When the justices accept controversial cases, the media tune in and ask questions, raising public awareness and affecting opinion. But do the justices pay attention to the polls when they make decisions?

Studies that look at the connection between the Supreme Court and public opinion are contradictory. Early on, it was believed that justices were like other citizens: individuals with attitudes and beliefs who would be affected by political shifts.[65] Later studies argued that Supreme Court justices rule in ways that maintain support for the institution; instead of looking at the short term and making decisions day to day, justices are strategic in their planning and make decisions for the long term.[66]

Other studies have revealed a more complex relationship between public opinion and judicial decisions, largely due to the difficulty of measuring where the effect can be seen. Some studies look at the number of reversals taken by the Supreme Court, which are decisions in which the Court overturns the decision of a lower court. In one such study, the authors found that public opinion slightly affects which cases the justices accept.[67] In a study looking at how often the justices voted liberally on a decision, a stronger effect of public opinion was revealed.[68]

Whether the case or court is currently in the news may also matter. One study found that if a majority of Americans agree on a policy or issue before the court, the court’s decision is likely to agree with public opinion.[69] A second study determined that public opinion is more likely to affect cases that receive little media attention than heavily reported ones.[70] In these situations, the court was also more likely to rule with the majority opinion than against it. For example, in Town of Greece v. Galloway (2014), a majority of the justices decided that ceremonial prayer before a town meeting was not a violation of the Establishment Clause.[71]

The fact that 78 percent of U.S. adults recently said religion is fairly to very important to their lives[72] and 61 percent supported prayer in school[73] may explain why public support for the Supreme Court did not fall after this decision.[74]

Overall, however, it is clear that public opinion has a less powerful effect on the courts than on the other branches and on politicians.[75] Perhaps this is due to the lack of elections or justices’ lifetime tenure, or perhaps we have not determined the best way to measure the effects of public opinion on the Court.


  1. Gallup. 2015. "Gallup Daily: Obama Job Approval." Gallup. June 6, 2015. http://www.gallup.com/poll/113980/Gallup-Daily-Obama-Job-Approval.aspx (February 17, 2016); Rasmussen Reports. 2015. "Daily Presidential Tracking Poll." Rasmussen Reports June 6, 2015. http://www.rasmussenreports.com/public_content/politics/obama_administration/daily_presidential_tracking_poll (February 17, 2016); Roper Center. 2015. "Obama Presidential Approval." Roper Center. June 6, 2015. http://www.ropercenter.uconn.edu/polls/presidential-approval/ (February 17, 2016).
  2. V. O. Key, Jr. 1966. The Responsible Electorate. Harvard University: Belknap Press.
  3. John Zaller. 1992. The Nature and Origins of Mass Opinion. Cambridge: Cambridge University Press.
  4. Eitan Hersh. 2013. "Long-Term Effect of September 11 on the Political Behavior of Victims’ Families and Neighbors." Proceedings of the National Academy of Sciences of the United States of America 110 (52): 20959–63.
  5. M. Kent Jennings. 2002. "Generation Units and the Student Protest Movement in the United States: An Intra- and Intergenerational Analysis." Political Psychology 23 (2): 303–324.
  6. United States Senate. 2015. "Party Division in the Senate, 1789-Present," United States Senate. June 5, 2015. http://www.senate.gov/pagelayout/history/one_item_and_teasers/partydiv.htm (February 17, 2016). History, Art & Archives. 2015. "Party Divisions of the House of Representatives: 1789–Present." United States House of Representatives. June 5, 2015. http://history.house.gov/Institution/Party-Divisions/Party-Divisions/ (February 17, 2016).
  7. V. O. Key Jr. 1955. "A Theory of Critical Elections." Journal of Politics 17 (1): 3–18.
  8. Pew Research Center. 2014. "Political Polarization in the American Public." Pew Research Center. June 12, 2014. http://www.people-press.org/2014/06/12/political-polarization-in-the-american-public/ (February 17, 2016).
  9. Pew Research Center. 2015. "American Values Survey." Pew Research Center. http://www.people-press.org/values-questions/ (February 17, 2016).
  10. Virginia Chanley. 2002. "Trust in Government in the Aftermath of 9/11: Determinants and Consequences." Political Psychology 23 (3): 469–483.
  11. Deborah Schildkraut. 2002. "The More Things Change... American Identity and Mass and Elite Responses to 9/11." Political Psychology 23 (3): 532.
  12. Joseph Bafumi and Robert Shapiro. 2009. "A New Partisan Voter." The Journal of Politics 71 (1): 1–24.
  13. Liz Marlantes, "After 9/11, the Body Politic Tilts to Conservatism," Christian Science Monitor, 16 January 2002.
  14. Liping Weng. 2010. "Shanghai Children’s Value Socialization and Its Change: A Comparative Analysis of Primary School Textbooks." China Media Research 6 (3): 36–43.
  15. David Easton. 1965. A Systems Analysis of Political Life. New York: John Wiley.
  16. Angus Campbell, Philip Converse, Warren Miller, and Donald Stokes. 2008. The American Voter: Unabridged Edition. Chicago: University of Chicago Press; Michael S. Lewis-Beck, William G. Jacoby, Helmut Norpoth, and Herbert F. Weisberg. 2008. The American Voter Revisited. Ann Arbor: University of Michigan Press.
  17. Russell Dalton. 1980. "Reassessing Parental Socialization: Indicator Unreliability versus Generational Transfer." American Political Science Review 74 (2): 421–431.
  18. Michael S. Lewis-Beck, William G. Jacoby, Helmut Norpoth, and Herbert F. Weisberg. 2008. The American Voter Revisited. Ann Arbor: University of Michigan Press.
  19. Michael Lipka. 2013. "What Surveys Say about Worship Attendance—and Why Some Stay Home." Pew Research Center. September 13, 2013. http://www.pewresearch.org/fact-tank/2013/09/13/what-surveys-say-about-worship-attendance-and-why-some-stay-home/ (February 17, 2016).
  20. Arthur Lupia and Mathew D. McCubbins. 1998. The Democratic Dilemma: Can Citizens Learn What They Need to Know? New York: Cambridge University Press. John Barry Ryan. 2011. "Social Networks as a Shortcut to Correct Voting." American Journal of Political Science 55 (4): 753–766.
  21. Sarah Bowen. 2015. "A Framing Analysis of Media Coverage of the Rodney King Incident and Ferguson, Missouri, Conflicts." Elon Journal of Undergraduate Research in Communications 6 (1): 114–124.
  22. Frederick Engels. 1847. The Principles of Communism. Trans. Paul Sweezy. https://www.marxists.org/archive/marx/works/1847/11/prin-com.htm (February 17, 2016).
  23. Libertarian Party. 2014. "Libertarian Party Platform." June. http://www.lp.org/platform (February 17, 2016).
  24. Arthur Evans, "Predict Landon Electoral Vote to be 315 to 350," Chicago Tribune, 18 October 1936.
  25. United States Census Bureau. 2012. "Age and Sex Composition in the United States: 2012." United States Census Bureau. http://www.census.gov/population/age/data/2012comp.html (February 17, 2016).
  26. Rasmussen Reports. 2015. "Daily Presidential Tracking Poll." Rasmussen Reports. September 27, 2015. http://www.rasmussenreports.com/public_content/politics/obama_administration/daily_presidential_tracking_poll (February 17, 2016); Pew Research Center. 2015. "Sampling." Pew Research Center. http://www.pewresearch.org/methodology/u-s-survey-research/sampling/ (February 17, 2016).
  27. American National Election Studies Data Center. 2016. http://electionstudies.org/studypages/download/datacenter_all_NoData.php (February 17, 2016).
  28. Michael W. Link and Robert W. Oldendick. 1997. "Good" Polls / "Bad" Polls—How Can You Tell? Ten Tips for Consumers of Survey Research." South Carolina Policy Forum. http://www.ipspr.sc.edu/publication/Link.htm (February 17, 2016); Pew Research Center. 2015. "Sampling." Pew Research Center. http://www.pewresearch.org/methodology/u-s-survey-research/sampling/ (February 17, 2016).
  29. "Roper Center. 2015. "Polling Fundamentals – Sampling." Roper. http://www.ropercenter.uconn.edu/support/polling-fundamentals-sampling/ (February 17, 2016).
  30. Gallup. 2015. "How Does the Gallup World Poll Work?" Gallup. http://www.gallup.com/178667/gallup-world-poll-work.aspx (February 17, 2016).
  31. Gallup. 2015. "Does Gallup Call Cellphones?" Gallup. http://www.gallup.com/poll/110383/does-gallup-call-cell-phones.aspx (February 17, 2016).
  32. Mark Blumenthal, "The Case for Robo-Pollsters: Automated Interviewers Have Their Drawbacks, But Fewer Than Their Critics Suggest," National Journal, 14 September 2009.
  33. Mark Blumenthal, "Is Polling As We Know It Doomed?" National Journal, 10 August 2009.
  34. Frank Luntz. 2007. Words That Work: It’s Not What You Say, It’s What People Hear. New York: Hyperion.
  35. Aaron Blake, "This terrible poll shows Elizabeth Warren beating Hillary Clinton," Washington Post, 11 February 2015.
  36. Nate Silver. 2010. "The Broadus Effect? Social Desirability Bias and California Proposition 19." FiveThirtyEightPolitics. July 27, 2010. http://fivethirtyeight.com/features/broadus-effect-social-desirability-bias/ (February 18, 2016).
  37. Nate Silver. 2010. "The Broadus Effect? Social Desirability Bias and California Proposition 19." FiveThirtyEightPolitics. July 27, 2010. http://fivethirtyeight.com/features/broadus-effect-social-desirability-bias/ (February 18, 2016).
  38. D. Davis. 1997. "The Direction of Race of Interviewer Effects among African-Americans: Donning the Black Mask." American Journal of Political Science 41 (1): 309–322.
  39. Kate Sheppard, "Top Texas Regulator: Could Russia be Behind City’s Proposed Fracking Ban?" Huffington Post, 16 July 2014. http://www.huffingtonpost.com/2014/07/16/fracking-ban-denton-russia_n_5592661.html (February 18, 2016).
  40. Paul Hitlin. 2013. "The 2016 Presidential Media Primary Is Off to a Fast Start." Pew Research Center. October 3, 2013. http://www.pewresearch.org/fact-tank/2013/10/03/the-2016-presidential-media-primary-is-off-to-a-fast-start/ (February 18, 2016).
  41. Pew Research Center, 2015. "Hillary Clinton’s Favorability Ratings over Her Career." Pew Research Center. June 6, 2015. http://www.pewresearch.org/wp-content/themes/pewresearch/static/hillary-clintons-favorability-ratings-over-her-career/ (February 18, 2016).
  42. Pew Research Center. 2012. "Winning the Media Campaign." Pew Research Center. November 2, 2012. http://www.journalism.org/2012/11/02/winning-media-campaign-2012/ (February 18, 2016).
  43. Pew Research Center. 2012. "Fewer Horserace Stories-and Fewer Positive Obama Stories-Than in 2008." Pew Research Center. November 2, 2012. http://www.journalism.org/2012/11/01/press-release-6/ (February 18, 2016).
  44. Patrick O’Connor. 2015. "WSJ/NBC Poll Finds Hillary Clinton in a Strong Position." Wall Street Journal. June 23, 2015. http://www.wsj.com/articles/new-poll-finds-hillary-clinton-tops-gop-presidential-rivals-1435012049.
  45. Federal Elections Commission. 2015. "Presidential Receipts." http://www.fec.gov/press/summaries/2016/tables/presidential/presreceipts_2015_q2.pdf (February 18, 2016).
  46. Susan Page and Paulina Firozi, "Poll: Hillary Clinton Still Leads Sanders and Biden But By Less," USA Today, 1 October 2015.
  47. Dan Merica, and Jeff Zeleny. 2015. "Bernie Sanders Nearly Outraises Clinton, Each Post More Than $20 Million." CNN. October 1, 2015. http://www.cnn.com/2015/09/30/politics/bernie-sanders-hillary-clinton-fundraising/index.html?eref=rss_politics (February 18, 2016).
  48. Robert S. Erikson, Michael B. MacKuen, and James A. Stimson. 2000. "Bankers or Peasants Revisited: Economic Expectations and Presidential Approval." Electoral Studies 19: 295–312.
  49. Erikson et al., "Bankers or Peasants Revisited: Economic Expectations and Presidential Approval."
  50. Michael B. MacKuen, Robert S. Erikson, and James A. Stimson. 1989. "Macropartisanship." American Political Science Review 83 (4): 1125–1142.
  51. James A. Stimson, Michael B. MacKuen, and Robert S. Erikson. 1995. "Dynamic Representation." American Political Science Review 89 (3): 543–565.
  52. Stimson et al., "Dynamic Representation."
  53. Stimson et al., "Dynamic Representation."
  54. B. Dan Wood. 2009. The Myth of Presidential Representation. New York: Cambridge University Press, 96–97.
  55. Wood, The Myth of Presidential Representation.
  56. U.S. Election Atlas. 2015. "United States Presidential Election Results." U.S. Election Atlas. June 22, 2015. http://uselectionatlas.org/RESULTS/ (February 18, 2016).
  57. Richard Fleisher and Jon R. Bond. 1996. "The President in a More Partisan Legislative Arena." Political Research Quarterly 49 (4): 729–748.
  58. George C. Edwards III and B. Dan Wood. 1999. "Who Influences Whom? The President, Congress, and the Media." American Political Science Review 93 (2): 327–344.
  59. Pew Research Center. 2013. "Public Opinion Runs Against Syrian Airstrikes." Pew Research Center. September 4, 2013. http://www.people-press.org/2013/09/03/public-opinion-runs-against-syrian-airstrikes/ (February 18, 2016).
  60. Paul Bedard. 2013. "Poll-Crazed Clinton Even Polled on His Dog’s Name." Washington Examiner. April 30, 2013. http://www.washingtonexaminer.com/poll-crazed-bill-clinton-even-polled-on-his-dogs-name/article/2528486.
  61. Stimson et al., "Dynamic Representation."
  62. Suzanna De Boef and James A. Stimson. 1995. "The Dynamic Structure of Congressional Elections." Journal of Politics 57 (3): 630–648.
  63. Stimson et al., "Dynamic Representation."
  64. Stimson et al., "Dynamic Representation."
  65. Benjamin Cardozo. 1921. The Nature of the Judicial Process. New Haven: Yale University Press.
  66. Jack Knight and Lee Epstein. 1998. The Choices Justices Make. Washington, DC: CQ Press.
  67. Kevin T. McGuire, Georg Vanberg, Charles E. Smith, and Gregory A. Caldeira. 2009. "Measuring Policy Content on the U.S. Supreme Court." Journal of Politics 71 (4): 1305–1321.
  68. Kevin T. McGuire and James A. Stimson. 2004. "The Least Dangerous Branch Revisited: New Evidence on Supreme Court Responsiveness to Public Preferences." Journal of Politics 66 (4): 1018–1035.
  69. Thomas Marshall. 1989. Public Opinion and the Supreme Court. Boston: Unwin Hyman.
  70. Christopher J. Casillas, Peter K. Enns, and Patrick C. Wohlfarth. 2011. "How Public Opinion Constrains the U.S. Supreme Court." American Journal of Political Science 55 (1): 74–88.
  71. Town of Greece v. Galloway 572 U.S. ___ (2014).
  72. "Religion." Gallup. June 18, 2015. http://www.gallup.com/poll/1690/Religion.aspx (February 18, 2016).
  73. Rebecca Riffkin. 2015. "In U.S., Support for Daily Prayer in Schools Dips Slightly." Gallup. September 25, 2015. http://www.gallup.com/poll/177401/support-daily-prayer-schools-dips-slightly.aspx.
  74. Gallup. 2015. "Supreme Court." Gallup. http://www.gallup.com/poll/4732/supreme-court.aspx (February 18, 2016).
  75. Stimson et al., "Dynamic Representation."

License


Texas Government Copyright © by Lumen Learning is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
