Chapter 9 Survey Research
Survey research is a research method involving the use of standardized questionnaires or interviews to collect data about people and their preferences, thoughts, and behaviors in a systematic manner. Although census surveys were conducted as early as Ancient Egypt, the survey as a formal research method was pioneered in the 1930s and 1940s by sociologist Paul Lazarsfeld to examine the effects of radio on political opinion formation in the United States. The method has since become very popular for quantitative research in the social sciences.
The survey method can be used for descriptive, exploratory, or explanatory research. This method is best suited for studies that have individual people as the unit of analysis. Although other units of analysis, such as groups, organizations, or dyads (pairs of organizations, such as buyers and sellers), are also studied using surveys, such studies often use a specific person from each unit as a “key informant” or a “proxy” for that unit, and such surveys may be subject to respondent bias if the informant chosen does not have adequate knowledge of, or has a biased opinion about, the phenomenon of interest. For instance, Chief Executive Officers may not adequately know their employees’ perceptions of teamwork in their own companies, and may therefore be the wrong informants for studies of team dynamics or employee self-esteem.
Survey research has several inherent strengths compared to other research methods. First, surveys are an excellent vehicle for measuring a wide variety of unobservable data, such as people’s preferences (e.g., political orientation), traits (e.g., self-esteem), attitudes (e.g., toward immigrants), beliefs (e.g., about a new law), behaviors (e.g., smoking or drinking behavior), or factual information (e.g., income). Second, survey research is ideally suited for remotely collecting data about a population that is too large to observe directly. A large area, such as an entire country, can be covered using mail-in, electronic mail, or telephone surveys with meticulous sampling to ensure that the population is adequately represented in a small sample. Third, due to their unobtrusive nature and the ability to respond at one’s convenience, questionnaire surveys are preferred by some respondents. Fourth, interviews may be the only way of reaching certain population groups, such as the homeless or illegal immigrants, for whom no sampling frame is available. Fifth, large sample surveys may allow detection of small effects even while analyzing multiple variables, and depending on the survey design, may also allow comparative analysis of population subgroups (i.e., within-group and between-group analysis). Sixth, survey research is more economical in terms of researcher time, effort, and cost than most other methods, such as experimental research and case research. At the same time, survey research also has some unique disadvantages. It is subject to a large number of biases such as non-response bias, sampling bias, social desirability bias, and recall bias, as discussed in the last section of this chapter.
Depending on how the data is collected, survey research can be divided into two broad categories: questionnaire surveys (which may be mail-in, group-administered, or online surveys), and interview surveys (which may be personal, telephone, or focus group interviews). Questionnaires are instruments that are completed in writing by respondents, while interviews are completed by the interviewer based on verbal responses provided by respondents. As discussed below, each type has its own strengths and weaknesses in terms of cost, coverage of the target population, and the researcher’s flexibility in asking questions.
Questionnaire Surveys
Invented by Sir Francis Galton, a questionnaire is a research instrument consisting of a set of questions (items) intended to capture responses from respondents in a standardized manner. Questions may be unstructured or structured. Unstructured questions ask respondents to provide a response in their own words, while structured questions ask respondents to select an answer from a given set of choices. Subjects’ responses to individual questions (items) on a structured questionnaire may be aggregated into a composite scale or index for statistical analysis. Questions should be designed such that respondents are able to read, understand, and respond to them in a meaningful way, and hence the survey method may not be appropriate or practical for certain demographic groups such as children or the illiterate.
Most questionnaire surveys tend to be self-administered mail surveys, where the same questionnaire is mailed to a large number of people, and willing respondents can complete the survey at their convenience and return it in postage-prepaid envelopes. Mail surveys are advantageous in that they are unobtrusive and inexpensive to administer, since bulk postage is cheap in most countries. However, response rates from mail surveys tend to be quite low, since most people tend to ignore survey requests. There may also be long delays (several months) in respondents completing and returning the survey (or they may simply lose it). Hence, the researcher must continuously monitor responses as they are returned, track non-respondents, and send them repeated reminders (two or three reminders at intervals of one to one and a half months is ideal). Questionnaire surveys are also not well suited for issues that require clarification on the part of the respondent or that require detailed written responses. Longitudinal designs can be used to survey the same set of respondents at different times, but response rates tend to fall precipitously from one survey to the next.
A second type of survey is the group-administered questionnaire. A sample of respondents is brought together at a common place and time, and each respondent is asked to complete the survey questionnaire while in that room. Respondents enter their responses independently without interacting with each other. This format is convenient for the researcher, and a high response rate is assured. If respondents do not understand any specific question, they can ask for clarification. In many organizations, it is relatively easy to assemble a group of employees in a conference room or lunch room, especially if the survey is approved by corporate executives.
A more recent type of questionnaire survey is the online or web survey. These surveys are administered over the Internet using interactive forms. Respondents may receive an electronic mail request for participation in the survey with a link to an online website where the survey may be completed. Alternatively, the survey may be embedded into an e-mail, and can be completed and returned via e-mail. These surveys are very inexpensive to administer, results are instantly recorded in an online database, and the survey can be easily modified if needed. However, if the survey website is not password-protected or designed to prevent multiple submissions, the responses can be easily compromised. Furthermore, sampling bias may be a significant issue, since the survey cannot reach people who do not have computer or Internet access, such as many of the poor, the elderly, and minority groups, and the respondent sample is skewed toward a younger demographic who are online much of the time and have the time and ability to complete such surveys. Computing the response rate may be problematic if the survey link is posted on listservs or bulletin boards instead of being e-mailed directly to targeted respondents. For these reasons, many researchers prefer dual-media surveys (e.g., a mail survey combined with an online survey), allowing respondents to select their preferred method of response.
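As an illustration of how multiple submissions might be prevented in an online survey, the following is a minimal sketch in Python. It assumes a hypothetical design in which each invited respondent receives a single-use token embedded in their survey link; the function names and in-memory storage are purely illustrative and not part of any specific survey platform.

```python
import secrets

# Hypothetical sketch: each invited respondent gets a single-use token with
# their e-mailed survey link, and a response is accepted only if its token is
# known and has not already been redeemed.

issued_tokens = {}          # token -> respondent e-mail, filled at invitation time
redeemed_tokens = set()     # tokens that have already been used to submit

def issue_token(email: str) -> str:
    """Generate a unique survey-link token for one invited respondent."""
    token = secrets.token_urlsafe(16)
    issued_tokens[token] = email
    return token

def accept_submission(token: str, answers: dict) -> bool:
    """Accept a response only if the token is valid and unused."""
    if token not in issued_tokens or token in redeemed_tokens:
        return False        # unknown respondent or duplicate submission
    redeemed_tokens.add(token)
    # in a real system, the answers would be persisted to a database here
    return True
```

Because every invitation is tied to a token, the response rate can also be computed directly as the number of redeemed tokens divided by the number of issued tokens, which is not possible when a survey link is posted publicly.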
Constructing a survey questionnaire is an art. Numerous decisions must be made about the content of questions, their wording, format, and sequencing, all of which can have important consequences for the survey responses.
Response formats. Survey questions may be structured or unstructured. Responses to structured questions are captured using one of the following response formats:
- Dichotomous response, where respondents are asked to select one of two possible choices, such as true/false, yes/no, or agree/disagree. An example of such a question is: Do you think that the death penalty is justified under some circumstances (circle one): yes / no.
- Nominal response, where respondents are presented with more than two unordered options, such as: What is your industry of employment: manufacturing / consumer services / retail / education / healthcare / tourism & hospitality / other.
- Ordinal response, where respondents have more than two ordered options, such as: What is your highest level of education: high school / college degree / graduate studies.
- Interval-level response, where respondents are presented with a 5-point or 7-point Likert scale, semantic differential scale, or Guttman scale. Each of these scale types was discussed in a previous chapter.
- Continuous response, where respondents enter a continuous (ratio-scaled) value with a meaningful zero point, such as their age or tenure in a firm. These responses generally tend to be of the fill-in-the-blank type.
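Structured responses are typically coded numerically before analysis, and multiple items measuring the same construct are often aggregated into a composite scale score, as noted earlier. The following is a minimal sketch in Python of such coding and aggregation; the item names, the reverse-coded item, and the simple averaging rule are illustrative assumptions rather than prescriptions from any particular instrument.

```python
# Hypothetical sketch: code 5-point Likert responses numerically and average
# several items measuring the same construct into one composite scale score.

LIKERT_5 = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
            "agree": 4, "strongly agree": 5}

def score_item(response: str, reverse: bool = False) -> int:
    """Map a verbal Likert response to its numeric value (1-5)."""
    value = LIKERT_5[response.lower()]
    return 6 - value if reverse else value   # reverse-code negatively worded items

def composite_score(responses: dict) -> float:
    """Average the scored items into a single composite scale score."""
    items = [
        score_item(responses["esteem_1"]),
        score_item(responses["esteem_2"]),
        score_item(responses["esteem_3"], reverse=True),  # negatively worded item
    ]
    return sum(items) / len(items)

print(composite_score({"esteem_1": "agree",
                       "esteem_2": "strongly agree",
                       "esteem_3": "disagree"}))   # -> 4.33
```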
Question content and wording. Responses obtained in survey research are very sensitive to the types of questions asked. Poorly framed or ambiguous questions will likely result in meaningless responses with very little value. Dillman (1978) recommends several rules for creating good survey questions. Every single question in a survey should be carefully scrutinized for the following issues:
- Is the question clear and understandable: Survey questions should be stated in very simple language, preferably in active voice, and without complicated words or jargon that may not be understood by a typical respondent. All questions in the questionnaire should be worded in a similar manner to make it easy for respondents to read and understand them. The only exception is if your survey is targeted at a specialized group of respondents, such as doctors, lawyers, and researchers, who use such jargon in their everyday environment.
- Is the question worded in a negative manner: Negatively worded questions, such as “should your local government not raise taxes”, tend to confuse many respondents and lead to inaccurate responses. Such questions should be avoided, and in all cases, double negatives should be avoided.
- Is the question ambiguous: Survey questions should not use words or expressions that may be interpreted differently by different respondents (e.g., words like “any” or “just”). For instance, if you ask a respondent what their annual income is, it is unclear whether you are referring to salary/wages alone or also to dividend, rental, and other income, and whether you are referring to personal income, family income (including a spouse’s wages), or personal and business income. Different interpretations by different respondents will lead to incomparable responses that cannot be interpreted correctly.
- Does the question have biased or value-laden words: Bias refers to any property of a question that encourages subjects to answer in a certain way. Kenneth Rasinski (1989) examined several studies on people’s attitudes toward government spending, and observed that respondents tend to indicate stronger support for “assistance to the poor” and less for “welfare”, even though both terms have the same meaning. In this study, more support was also observed for “halting rising crime rate” (and less for “law enforcement”), for “solving problems of big cities” (and less for “assistance to big cities”), and for “dealing with drug addiction” (and less for “drug rehabilitation”). Biased language or tone tends to skew observed responses. It is often difficult to anticipate biased wording in advance, but to the greatest extent possible, survey questions should be carefully scrutinized to avoid biased language.
- Is the question double-barreled: Double-barreled questions are those that ask about two or more issues in a single question. For example, are you satisfied with the hardware and software provided for your work? In this example, how should a respondent answer if he/she is satisfied with the hardware but not with the software, or vice versa? It is always advisable to split double-barreled questions into separate questions: (1) are you satisfied with the hardware provided for your work, and (2) are you satisfied with the software provided for your work. Another example: does your family favor public television? Some people may favor public TV for themselves, but favor only certain children’s programs such as Sesame Street for their children, making a single answer for the entire family difficult.
- Is the question too general: Sometimes, questions that are too general may not accurately convey respondents’ perceptions. If you ask someone how well they liked a certain book on a response scale ranging from “not at all” to “extremely well”, and that person selects “extremely well”, what does he/she really mean? Instead, ask more specific behavioral questions, such as: will you recommend this book to others, or do you plan to read other books by the same author? Likewise, instead of asking how big your firm is (which may be interpreted differently by different respondents), ask how many people work for the firm and/or what its annual revenues are, both of which are measures of firm size.
- Is the question too detailed: Avoid unnecessarily detailed questions that serve no specific research purpose. For instance, do you need the age of each child in a household, or is the number of children in the household sufficient? However, if unsure, it is better to err on the side of detail than generality.
- Is the question presumptuous: If you ask, what do you see as the benefits of a tax cut, you are presuming that the respondent sees the tax cut as beneficial. But many people may not view tax cuts as beneficial, because tax cuts generally lead to less funding for public schools, larger class sizes, and fewer public services such as police, ambulance, and fire service. Avoid questions with built-in presumptions.
- Is the question imaginary: A popular question in many television game shows is “if you won a million dollars on this show, how would you spend it?” Most respondents have never been faced with such an amount of money and have never thought about it (most don’t even know that, after taxes, they would receive only about $640,000 or so in the United States, and in many cases, that amount is spread over a 20-year period, so that its net present value is even less), and so their answers tend to be quite random, such as take a tour around the world, buy a restaurant or bar, spend it on education, save for retirement, help parents or children, or have a lavish wedding. Imaginary questions have imaginary answers, which cannot be used for making scientific inferences.
- Do respondents have the information needed to correctly answer the question: Oftentimes, we assume that subjects have the necessary information to answer a question, when in reality they do not. Even if responses are obtained in such cases, they tend to be inaccurate, given the respondents’ lack of knowledge about the question being asked. For instance, we should not ask the CEO of a company about day-to-day operational details that he/she may not be aware of, or ask teachers how much their students are learning, or ask high-schoolers “Do you think the US Government acted appropriately in the Bay of Pigs invasion?”
Question sequencing. In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow from the least sensitive to the most sensitive, from the factual and behavioral to the attitudinal, and from the more general to the more specific. Some general rules for question sequencing:
- Start with easy non-threatening questions that can be easily recalled. Good options are demographics (age, gender, education level) for individual-level surveys and firmographics (employee count, annual revenues, industry) for firm-level surveys.
- Never start with an open-ended question.
- If following an historical sequence of events, follow a chronological order from earliest to latest.
- Ask about one topic at a time. When switching topics, use a transition, such as “The next section examines your opinions about …”
- Use filter or contingency questions as needed, such as: “If you answered “yes” to question 5, please proceed to Section 2. If you answered “no” go to Section 3.”
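Filter or contingency questions of this kind translate directly into skip logic when a survey is administered electronically. The following is a minimal sketch in Python of such branching; the question identifiers and section labels are hypothetical and only illustrate the idea.

```python
# Hypothetical sketch: skip (contingency) logic routing respondents to
# different sections depending on their answer to a filter question.

QUESTIONS = {
    "q5": "Have you used the product in the past year? (yes/no)",
    "section_2": "Section 2: questions for users ...",
    "section_3": "Section 3: questions for non-users ...",
}

def next_step(question_id: str, answer: str) -> str:
    """Return the next section to administer based on a filter question."""
    if question_id == "q5":
        return "section_2" if answer.strip().lower() == "yes" else "section_3"
    return "end"

print(next_step("q5", "Yes"))   # -> section_2
```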
Other golden rules. Do unto your respondents what you would have them do unto you. Be attentive to and appreciative of respondents’ time, attention, and trust, and protect the confidentiality of their personal information. Always practice the following strategies for all survey research:
- People’s time is valuable. Be respectful of their time. Keep your survey as short as possible and limit it to what is absolutely necessary. Respondents do not like spending more than 10-15 minutes on any survey, no matter how important it is. Longer surveys tend to dramatically lower response rates.
- Always assure respondents about the confidentiality of their responses, and how you will use their data (e.g., for academic research) and how the results will be reported (usually, in the aggregate).
- For organizational surveys, assure respondents that you will send them a copy of the final results, and make sure that you follow through on your promise.
- Thank your respondents for their participation in your study.
- Finally, always pretest your questionnaire, at least using a convenience sample, before administering it to respondents in a field setting. Such pretesting may uncover ambiguity, lack of clarity, or biases in question wording, which should be eliminated before administering to the intended sample.
Interview Surveys
Interviews are a more personalized form of data collection than questionnaires, and are conducted by trained interviewers using the same research protocol as questionnaire surveys (i.e., a standardized set of questions). However, unlike a questionnaire, the interview script may contain special instructions for the interviewer that are not seen by respondents, and may include space for the interviewer to record personal observations and comments. In addition, unlike mail surveys, the interviewer has the opportunity to clarify any issues raised by the respondent or ask probing or follow-up questions. However, interviews are time-consuming and resource-intensive. Special interviewing skills are needed on the part of the interviewer. The interviewer is also considered to be part of the measurement instrument, and must proactively strive not to artificially bias the observed responses.
The most typical form of interview is the personal or face-to-face interview, where the interviewer works directly with the respondent to ask questions and record their responses.
Personal interviews may be conducted at the respondent’s home or office location. This approach may even be favored by some respondents, while others may feel uncomfortable allowing a stranger into their homes. However, skilled interviewers can persuade respondents to cooperate, dramatically improving response rates.
A variation of the personal interview is the group interview, also called a focus group. In this technique, a small group of respondents (usually 6-10) is interviewed together in a common location. The interviewer is essentially a facilitator whose job is to lead the discussion and ensure that every person has an opportunity to respond. Focus groups allow deeper examination of complex issues than other forms of survey research, because when people hear others talk, it often triggers responses or ideas that they had not thought of before. However, focus group discussion may be dominated by a strong personality, and some individuals may be reluctant to voice their opinions in front of their peers or superiors, especially when dealing with a sensitive issue such as employee underperformance or office politics. Because of their small sample size, focus groups are usually used for exploratory research rather than descriptive or explanatory research.
A third type of interview survey is the telephone interview. In this technique, interviewers contact potential respondents over the phone, typically based on a random selection of people from a telephone directory, to ask a standard set of survey questions. A more recent and technologically advanced approach is computer-assisted telephone interviewing (CATI), increasingly being used by academic, government, and commercial survey researchers, where the interviewer is a telephone operator who is guided through the interview process by a computer program displaying instructions and questions to be asked on a computer screen. The system also selects respondents randomly using a random digit dialing technique, and records responses using voice capture technology. Once respondents are on the phone, higher response rates can be obtained. This technique is not ideal for rural areas where telephone density is low, and also cannot be used for communicating non-audio information such as graphics or product demonstrations.
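The random digit dialing technique mentioned above can be sketched very simply: rather than drawing numbers from a directory, candidate numbers are generated by fixing a known area code and exchange prefix and randomizing the remaining digits, so that unlisted numbers have the same chance of selection as listed ones. The Python sketch below is a minimal illustration; the placeholder area code and prefix are invented, and a real CATI system would also screen out non-working and business numbers.

```python
import random

def random_digit_dial(area_code: str, prefix: str, n: int, seed: int = 42) -> list[str]:
    """Generate n candidate telephone numbers within one area code and prefix
    by randomizing the last four digits (simplified random digit dialing)."""
    rng = random.Random(seed)
    return [f"{area_code}-{prefix}-{rng.randint(0, 9999):04d}" for _ in range(n)]

print(random_digit_dial("555", "010", n=5))
# prints five numbers of the form '555-010-XXXX'; actual digits depend on the seed
```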
Role of interviewer. The interviewer has a complex and multi-faceted role in the interview process, which includes the following tasks:
- Prepare for the interview: Since the interviewer is in the forefront of the data collection effort, the quality of data collected depends heavily on how well the interviewer is trained to do the job. The interviewer must be trained in the interview process and the survey method, and also be familiar with the purpose of the study, how responses will be stored and used, and sources of interviewer bias. He/she should also rehearse and time the interview prior to the formal study.
- Locate and enlist the cooperation of respondents: Particularly in personal, in-home surveys, the interviewer must locate specific addresses and work around respondents’ schedules, sometimes at undesirable times such as weekends. They should also be like a salesperson, selling the idea of participating in the study.
- Motivate respondents: Respondents often feed off the motivation of the interviewer. If the interviewer is disinterested or inattentive, respondents won’t be motivated to provide useful or informative responses either. The interviewer must demonstrate enthusiasm about the study, communicate the importance of the research to respondents, and be attentive to respondents’ needs throughout the interview.
- Clarify any confusion or concerns: Interviewers must be able to think on their feet and address unanticipated concerns or objections raised by respondents to the respondents’ satisfaction. Additionally, they should ask probing questions as necessary even if such questions are not in the script.
- Observe quality of response: The interviewer is in the best position to judge the quality of information collected, and may supplement responses obtained using personal observations of gestures or body language as appropriate.
Conducting the interview. Before the interview, the interviewer should prepare a kit to carry to the interview session, consisting of a cover letter from the principal investigator or sponsor, adequate copies of the survey instrument, photo identification, and a telephone number for respondents to call to verify the interviewer’s authenticity. The interviewer should also try to call respondents ahead of time to set up an appointment if possible. To start the interview, he/she should speak in an assertive and confident tone, such as “I’d like to take a few minutes of your time to interview you for a very important study,” instead of “May I come in to do an interview?” He/she should introduce himself/herself, present personal credentials, explain the purpose of the study in one or two sentences, and assure respondents of the confidentiality of their comments and the voluntariness of their participation, all in less than a minute. No big words or jargon should be used, and no details should be provided unless specifically requested. If the interviewer wishes to tape-record the interview, he/she should ask for the respondent’s explicit permission before doing so. Even if the interview is recorded, the interviewer must take notes on key issues, probes, or verbatim phrases.
During the interview, the interviewer should follow the questionnaire script and ask questions exactly as written, and not change the words to make the question sound friendlier. They should also not change the order of questions or skip any question that may have been answered earlier. Any issues with the questions should be discussed during rehearsal prior to the actual interview sessions. The interviewer should not finish the respondent’s sentences. If the respondent gives a brief cursory answer, the interviewer should probe the respondent to elicit a more thoughtful, thorough response. Some useful probing techniques are:
- The silent probe: Just pausing and waiting (without going on to the next question) may suggest to respondents that the interviewer is waiting for a more detailed response.
- Overt encouragement: Occasional “uh-huh” or “okay” may encourage the respondent to go into greater details. However, the interviewer must not express approval or disapproval of what was said by the respondent.
- Ask for elaboration: Such as “can you elaborate on that?” or “A minute ago, you were talking about an experience you had in high school. Can you tell me more about that?”
- Reflection: The interviewer can try the psychotherapist’s trick of repeating what the respondent said. For instance, “What I’m hearing is that you found that experience very traumatic” and then pause and wait for the respondent to elaborate.
After the interview is completed, the interviewer should thank respondents for their time, tell them when to expect the results, and not leave hastily. Immediately after leaving, they should write down any notes or key observations that may help interpret the respondent’s comments better.
Biases in Survey Research
Despite all of its strengths and advantages, survey research is often tainted with systematic biases that may invalidate some of the inferences derived from such surveys. Five such biases are the non-response bias, sampling bias, social desirability bias, recall bias, and common method bias.
Non-response bias. Survey research is generally notorious for its low response rates. A response rate of 15-20% is typical in a mail survey, even after two or three reminders. If the majority of targeted respondents fail to respond to a survey, then a legitimate concern is whether non-respondents are failing to respond for a systematic reason, which may raise questions about the validity of the study’s results. For instance, dissatisfied customers tend to be more vocal about their experience than satisfied customers, and are therefore more likely to respond to questionnaire surveys or interview requests. Hence, any respondent sample is likely to have a higher proportion of dissatisfied customers than the underlying population from which it is drawn. In this instance, not only will the results lack generalizability, but the observed outcomes may also be an artifact of the biased sample. Several strategies may be employed to improve response rates (a brief worked example of computing a response rate follows the list below):
- Advance notification: A short letter sent in advance to the targeted respondents soliciting their participation in an upcoming survey can prepare them in advance and improve their propensity to respond. The letter should state the purpose and importance of the study, mode of data collection (e.g., via a phone call, a survey form in the mail, etc.), and appreciation for their cooperation. A variation of this technique may request the respondent to return a postage-paid postcard indicating whether or not they are willing to participate in the study.
- Relevance of content: If a survey examines issues of relevance or importance to respondents, then they are more likely to respond than to surveys that don’t matter to them.
- Respondent-friendly questionnaire: Shorter survey questionnaires tend to elicit higher response rates than longer questionnaires. Furthermore, questions that are clear, non-offensive, and easy to respond to tend to attract higher response rates.
- Endorsement: For organizational surveys, it helps to gain endorsement from a senior executive attesting to the importance of the study to the organization. Such endorsement can be in the form of a cover letter or a letter of introduction, which can improve the researcher’s credibility in the eyes of the respondents.
- Follow-up requests: Multiple follow-up requests may coax some non-respondents to respond, even if their responses are late.
- Interviewer training: Response rates for interviews can be improved with skilled interviewers trained on how to request interviews, use computerized dialing techniques to identify potential respondents, and schedule callbacks for respondents who could not be reached.
- Incentives: Response rates, at least with certain populations, may increase with the use of incentives in the form of cash or gift cards, giveaways such as pens or stress balls, entry into a lottery, draw or contest, discount coupons, promise of contribution to charity, and so forth.
- Non-monetary incentives: Businesses, in particular, are more prone to respond to non-monetary incentives than financial incentives. An example of such a non-monetary incentive is a benchmarking report comparing the business’s individual response against the aggregate of all responses to a survey.
- Confidentiality and privacy: Finally, assurances that respondents’ private data or responses will not fall into the hands of any third party may help improve response rates.
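To make the response rate figures mentioned above concrete, the following is a minimal worked example of how a mail survey’s cumulative response rate might be computed across an initial mailing and reminder waves. The counts are invented purely for illustration, and conventions differ on how undeliverable or ineligible cases are handled, so this is one simple variant rather than a standard formula.

```python
# Hypothetical sketch: cumulative response rate for a mail survey across an
# initial mailing and two reminder waves (all counts are invented).

surveys_mailed = 1000
undeliverable = 40                      # returned by the postal service; not eligible
returns_by_wave = [95, 55, 30]          # completed questionnaires per wave

eligible = surveys_mailed - undeliverable
response_rate = sum(returns_by_wave) / eligible
print(f"Response rate: {response_rate:.1%}")   # -> Response rate: 18.8%
```

The resulting 18.8% falls within the 15-20% range described above as typical for mail surveys even after repeated reminders.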
Sampling bias. Telephone surveys conducted by calling a random sample of publicly available telephone numbers will systematically exclude people with unlisted telephone numbers, mobile phone numbers, and people who are unable to answer the phone when the survey is being conducted (for instance, because they are at work), and will include a disproportionate number of respondents who have land-line telephone service with listed phone numbers and people who stay home during much of the day, such as the unemployed, the disabled, and the elderly. Likewise, online surveys tend to include a disproportionate number of students and younger people who are constantly on the Internet, and systematically exclude people with limited or no access to computers or the Internet, such as the poor and the elderly. Similarly, questionnaire surveys tend to exclude children and the illiterate, who are unable to read, understand, or meaningfully respond to the questionnaire. A different kind of sampling bias relates to sampling the wrong population, such as asking teachers (or parents) about the academic learning of their students (or children), or asking CEOs about operational details of their company. Such biases make the respondent sample unrepresentative of the intended population and hurt generalizability claims about inferences drawn from the biased sample.
Social desirability bias. Many respondents tend to avoid negative opinions or embarrassing comments about themselves, their employers, family, or friends. With negative questions such as: do you think that your project team is dysfunctional, is there a lot of office politics in your workplace, or have you ever illegally downloaded music files from the Internet, the researcher may not get truthful responses. This tendency among respondents to “spin the truth” in order to portray themselves in a socially desirable manner is called the “social desirability bias”, which hurts the validity of responses obtained from survey research. There is practically no way of overcoming social desirability bias in a questionnaire survey, but in an interview setting, an astute interviewer may be able to spot inconsistent answers and ask probing questions or use personal observations to supplement respondents’ comments.
Recall bias. Responses to survey questions often depend on subjects’ motivation, memory, and ability to respond. Particularly when dealing with events that happened in the distant past, respondents may not adequately remember their own motivations or behaviors, or their memory of such events may have evolved with time and may no longer be retrievable. For instance, if a respondent is asked to describe his/her utilization of computer technology one year ago, or even memorable childhood events like birthdays, the response may not be accurate due to difficulties with recall. One possible way of overcoming recall bias is by anchoring the respondent’s memory in specific events as they happened, rather than asking them to recall their perceptions and motivations from memory.
Common method bias. Common method bias refers to the amount of spurious covariance shared between independent and dependent variables that are measured at the same point in time, such as in a cross-sectional survey, using the same instrument, such as a questionnaire. In such cases, the phenomenon under investigation may not be adequately separated from measurement artifacts. Standard statistical tests are available to test for common method bias, such as Harman’s single-factor test (Podsakoff et al. 2003), Lindell and Whitney’s (2001) marker variable technique, and so forth. This bias can be potentially avoided if the independent and dependent variables are measured at different points in time using a longitudinal survey design, or if these variables are measured using different methods, such as computerized recording of the dependent variable versus questionnaire-based self-rating of the independent variables.
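To illustrate the logic of Harman’s single-factor test, the sketch below loads all questionnaire items into a single unrotated extraction and checks how much variance the first factor explains. Principal component analysis is used here as a simple stand-in for the factor extraction, the data matrix is simulated, and the 50% threshold is only a commonly cited (and debated) rule of thumb, so this is an assumption-laden illustration rather than a definitive procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical sketch of Harman's single-factor test: if one factor accounts
# for the majority of the variance across all measured items, common method
# bias may be a concern. The item responses below are simulated.

rng = np.random.default_rng(0)
items = rng.normal(size=(200, 12))      # 200 respondents x 12 questionnaire items

pca = PCA()
pca.fit(items)
first_factor_share = pca.explained_variance_ratio_[0]

print(f"Variance explained by the first factor: {first_factor_share:.1%}")
if first_factor_share > 0.50:           # commonly used, though debated, threshold
    print("A single factor dominates: common method bias may be a concern.")
```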