Many low-budget researchers automatically think of surveys and questionnaires as their only research alternatives. It is important to become aware of and appreciate the potential of other techniques before moving on to survey design.
Chapter Seven focused on one such possibility: experimentation. Experiments are relatively easy and inexpensive to do and, because they can determine cause and effect, usually give managers direct insight into what works and what doesn’t. Thus, experiments are both cheap and usually highly practical. A researcher with a limited budget has little freedom to conduct research that is not practical.
Chapters Five and Six discussed two other alternatives to surveys: archives and observation. These techniques are sometimes less immediately practical since they do not easily lend themselves to assessing cause and effect. They still have the prime virtue of being very inexpensive. Despite inadequacies, they can often help give managers the small edge in decision making that can help them consistently outperform their competitors.
A major reason for considering these alternatives is that surveys are both costly and difficult to do well. It is not easy to design sensible samples and well-worded questionnaires. Unfortunately, this does not keep amateurs from barging ahead to get something into the field without proper attention to making the research valid. This book is devoted to good low-cost research, not just low-cost research.
Surveys are difficult for two major reasons. First, you must typically draw a representative sample, and this is not always an easy goal to accomplish. Second, you must ask people questions. There are a number of reasons that this can be a major source of problems. Asking questions is always intrusive. This means your subjects know they are being studied, and because they know they are being studied, they will usually speculate about why you are asking. This can have one of two effects. One is that it makes people suspicious of your motives, and they may decide either not to participate at all (thus fouling up a carefully designed sampling plan) or to withhold or distort important information. A friend always tells researchers studying fast food preferences that he is a heavy, heavy consumer (which he isn’t) and that he really would like to be offered healthier ingredients in his burgers, chicken nuggets, and tacos. He always answers this way because he wants to encourage fast-food marketers to offer healthier fare to their real heavy consumers.
The second effect of the awareness of being studied is that some respondents try to please the researcher. Well-meaning respondents may try to be helpful and claim they have a very favorable opinion of whatever it is you are asking about. Or they may try to guess which behaviors you are interested in and try to slant their preferences that way.
Asking questions always involves respondents and their egos. Whether we are aware of it or not, we all attempt to influence the way in which others perceive us. We do this with our dress, our choice of our furniture, the way we speak, whom we associate with, and so on. It is therefore inevitable that when we tell researchers things about ourselves, we may answer subtly or directly in ways that will enhance our self-image. Reader’s Digest fans will say instead that they only read the New York Review of Books or the Atlantic Monthly. The soap opera lover will swear to a preference for public television. The heavy beer drinker will develop a sudden taste for wine.
Asking questions always involves language, and words are slippery things. A favorite word in studies of retail and service outlets is convenience as in the question, “Do you find YMCA #2 more or less convenient than YMCA #1?” What the researcher means by convenient may be a lot different from what the typical respondent means. For example, the researcher may assume that respondents are being asked how close the location is to their home or normal commuting routes, whereas some respondents may think the question is asking how easy it is to park and get in and out of the building. Others may think it is referring to the particular YMCA’s hours of operation. In such cases, differences across respondents may simply reflect differences in how they interpret the question. For example, executives may find YMCA #1 more convenient than YMCA #2, while homebodies may indicate the reverse. Executives interpret your question to mean ease of parking, and homebodies may interpret it to mean closeness to home. The difference in the results is merely an artifact of the language.
All of these types of problems are unavoidable in survey research. Questions will always crop up as to the validity of the study. You can try hard to minimize their effects, but the nasty problem is that you can never completely know whether you have eliminated potential biases or even whether you fully understand them. This is particularly the case if you try to save money in the design by not pretesting the questionnaire thoroughly or training interviewers carefully. Management needs valid research results on which to act. In the YMCA example, it would be a mistake to promote the nearness of your outlet to executives as well as to homebodies. The two are very different in their goals and interests.
Conducting surveys and asking questions is often essential to addressing a specific management problem. Researchers must use these techniques in hundreds of situations. If, for example, you want data on attitudes, you have no other recourse. But when surveys are not essential, researchers should exhaust other nonintrusive methods before going forward with a full-blown survey study.
If asking questions is the best approach to helping management make a decision, three basic strategic design decisions must be made:
- how to ask questions
- what questions to ask, and
- who should answer.
Methods of Asking Questions
Modern technology offers researchers a wide array of approaches to asking questions. They vary mainly in the extent to which the respondent and the question asker interact. To decide on an approach, the researcher should ask a number of basic questions:
- Should the respondent be given a significant amount of time and the opportunity to talk with others before answering?
- Can the answers to the questions be reduced to a few simple choices?
- Do the questions need to be explained to the respondent?
- Is it likely that the respondent’s answers will often have to be probed or clarified?
- Does anything have to be shown to the respondent, such as an advertisement or a package or a set of attitude scales to be filled in?
- Is it likely that many potential respondents will be unmotivated or turned off by the questions or the issue without personal encouragement by an interviewer?
The answers to these questions and several others, including those relating to the availability of personnel and financial resources to carry out fieldwork, will determine the basic approach. In the majority of cases, the alternatives likely to be considered will be mail, Internet, telephone, and face-to-face interviews. However, there are other possibilities. For example, respondents can be interviewed in small groups, as in focus group studies. Or individuals can be queried in a shopping mall by a computer video screen or over the telephone by a prerecorded voice with pauses for answers. In some circumstances, a combination of methods may be the best approach. One could ask questions in person that require face-to-face contact and then leave behind a mail-back questionnaire on which respondents can record further details, such as purchase histories, personal preferences, and socioeconomic characteristics.
To my continuing dismay, when researchers with low budgets think of surveys and asking questions, they inevitably think first of doing a mail study. Many think this is the only alternative possible given their meager resources.
You can see why they might think so. It is often not difficult to get a mailing list. There are many such lists for sale and many list suppliers that will provide labels for a mail sample and even do the mailing at a modest cost. Even without such a commercial list, many researchers feel they can always develop a list themselves using their own database, the Yellow Pages, or the regular white pages telephone directory. A photocopier is usually close at hand, or a print shop can run off hundreds of questionnaire forms at low cost. Staff at the office can stuff envelopes at little or no out-of-pocket cost. The office postage meter and bulk mail rates can be used to keep the outbound cost of mailed questionnaires low. Business reply envelopes can be used for the returned responses, which means paying postage only for questionnaires that are in fact returned.
Given these low costs, the neophyte researcher says, “Why not?” and sits down and designs a questionnaire and a compelling cover letter and sends out five thousand forms in the mail. If it is an average study sent to a randomly drawn but moderately interested audience, the researcher will be lucky to get 10 to 15 percent of the questionnaires back. (If the audience is not interested, 4 to 5 percent would be good.) Nevertheless, this can mean 500 to 750 responses returned. The researcher thinks, “Certainly this is a substantial basis on which to make statements about a market. Don’t most CNN polls reporting the president’s popularity ratings base these on only nine hundred to eleven hundred respondents?”
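The arithmetic behind the neophyte's reasoning is worth making explicit. National polls can get away with roughly a thousand respondents because the margin of error for a *random* sample depends on sample size, not population size. A minimal sketch of both calculations follows (in Python; the mailing and response figures are the ones from the text, and the standard 95 percent formula is assumed):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 5,000-piece mailing returning 10 to 15 percent yields:
returns_low = int(5000 * 0.10)   # 500 responses
returns_high = int(5000 * 0.15)  # 750 responses

# About 1,000 randomly drawn respondents give a roughly 3-point
# margin of error -- but only because the sample is random.
moe = margin_of_error(1000)
```

The catch, as the next paragraph explains, is that the formula assumes every member of the sample responds; 500 self-selected mail returns do not satisfy that assumption, however reassuring the raw count looks.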
The major problem with mail questionnaire studies, however, is not the respondents: it is the nonrespondents. Professional pollsters who need results that can be projected to the entire population with a known level of error are very careful to develop probability samples and then attempt to interview everyone in their samples. Because they spend a great deal of money and use face-to-face or telephone interviewing techniques, they usually have high rates of cooperation. When they do not contact a respondent or are refused participation, they make very careful analyses of just who did not respond and either adjust their analyses accordingly or alert users of the results to potential problems through a “limitations” section in their report. With the typical cheap but dirty mail study with a low response rate, the nature of the nonresponse bias is usually unknown.
Nonresponse is not necessarily bad. If the nonrespondents would have answered the questions in the study in the same way as the respondents, then there would be no bias. Nonresponse can occur in telephone or face-to-face interview studies when a respondent refuses to cooperate or is out of town or busy for the moment or because the telephone number or address was wrong. Assuming that personal interviews were scheduled at random times on weekdays and weekends and there are several follow-up attempts, nonresponse is more likely to be a chance occurrence where one should not expect nonresponders to be greatly different from responders.
In mail studies, this is virtually never the case. Those who get something in the mail asking them to fill in a modest (or great) number of answers will inevitably feel imposed on. The same is often true with telephone interviews. In a face-to-face interview or telephone study, the personality and skills of a highly trained interviewer can often overcome these negative initial reactions. But at home at one’s desk or at the kitchen table, it is very easy for a potential respondent to throw out a mailed questionnaire. Those who do not react this way are likely to be different in one of two important ways. One group of responders will have an inherent interest in the topic of the study. The second group will want to help the researcher out either because they are a helping kind of person or because they are for the moment swayed by the convincing argument in the study’s cover letter. Those who are interested in the topic are likely to be further divisible into two additional types: those who are positively excited about the topic (for example, those who buy the product or use the service) and those negatively disposed toward the topic who see the study as a grand opportunity to get a few gripes off their chest. As an additional biasing factor, it has generally been found that the higher the education level, the higher the response rate.
What, then, does this imply about the 10 to 15 percent who do respond to the typical mail study? It means they are almost always not at all like the nonrespondents. Unfortunately, too many researchers do not attempt to assess these potential differences because they do not recognize the problem, do not know how to handle it, or feel they do not have the time or budget to do so.
Not too long ago, I was asked by the publisher of a series of magazines aimed at different types of retail outlets to give a speech discussing the value of their annual statistical profiles of each retail category. On reviewing the publisher’s methodology, I discovered that the statistics were based on responses to a single mailing from their magazine publishers (with no follow-up) and that no checking had ever been made of those who didn’t respond. Yet each magazine was publishing its study as a profile of its industry, reporting such data as average store sales, widths of product lines, numbers of employees, various expenses, and profits broken down by various outlet categories.
But what, I asked, did their profiles really describe? Their response rate was usually 50 percent, which is quite good and not uncommon when mailing to a highly interested group. However, they did not know who really did respond. Although I was not concerned with the response rate, I was concerned with possible biases at both ends of the distribution of outlet size. Did the largest outlets participate? They may not have, feeling either that they would be exposing too much internal information that could be identified with them or that they were already part of a large organization that had a great deal of its own internal archival data and so they didn’t need to contribute to a magazine’s profile. At the other extreme, smaller or marginal outlets may not have participated because they were new to the industry, lacked appreciation of the value of such cooperation, or perhaps were embarrassed by their small size or poor performance. If either of these groups was underrepresented, the profiles have very limited value. The magazines apparently shut their eyes to these problems, never investigating who did and did not respond.
As a consequence of this and a great many similar experiences, I am very reluctant to encourage the use of mail questionnaires for low-cost research.
Some steps can increase response rates, and we will note some of these. There are also ways to investigate nonresponse bias, although usually at considerable cost. For example, you can conduct telephone interviews of a small sample of nonrespondents to a mail survey if they can be identified. Alternatively, characteristics of respondents can be compared to census data or, in a business study, to government business censuses. If there is a close match between the sample characteristics and the universe from which it was supposedly drawn, researchers can be encouraged that their sample may be representative. However, such close matching does not prove that the results are valid.
Even when nonresponse rates are satisfactory or understood, mail studies have a great many other, and often fatal, flaws. For example, it is possible that someone other than the intended respondent may fill out the questionnaire. A wife may ask her husband to respond, or vice versa. An executive may delegate the task to a subordinate. A doctor may ask a nurse to give the necessary particulars. Each of these situations may seriously distort the findings.
Also, there are unlimited chances to improve answers before they are returned. For example, suppose a respondent is asked for “top-of-the-mind” awareness of various cancer organizations early in the mail questionnaire and then later recalls additional organizations. He or she may go back and add the newly remembered items. On the other hand, by the end of the questionnaire, he or she may have discovered the study’s sponsor and go back to change certain answers to conform more to what is presumed to be the sponsor’s objectives.
Despite these problems, there are still three situations in which mail questionnaires should be the preferred choice:
First, in some situations, the respondent will need time to gather information to report in the questionnaire. This might involve consulting with someone else. For example, if the researcher wished to know whether anyone in the household had bought a product, used a service, or adopted a new behavior, then time would have to be allowed for consultation with others. This usually would not be practical (or sometimes even possible, given people’s schedules) in a telephone or face-to-face interview situation. A more common example is where records must be looked up, such as when organization executives are asked for performance details or when households are asked to report on last year’s taxes or asked what safety equipment they currently have in their household inventories.
Second, in some situations, it is desirable to give the respondent time to come up with a well-considered answer. Many years ago, I participated in what is called a Delphi study. This is a technique where respondents (usually a small select group) are asked to make judgments about the future or about some existing phenomena. The answers of the group are then summarized by the researcher and fed back to the original respondents, who are then asked to revise their answers if they wish.
The study in which I participated was an attempt to forecast time usage (for example, how much leisure or work time we would have and how we would use it) twenty-five years in the future. The study went on over three rounds of feedback, and at each stage, respondents needed several hours, if not a day or two, to give the researcher our carefully considered opinions. This could be achieved only by a mail study.
Third, a mail questionnaire is probably the only form that many busy respondents would answer. If the survey has a great many questions, those surveyed may be willing to respond only to a written questionnaire, claiming they do not have the time to participate in a telephone or face-to-face interview (and if they are a physician or busy executive, they usually expect to be paid).
Even when one or more of these conditions exists, a potential mail study should also meet the following requirements:
- The fact that respondents will have a long time in which to fill in the answers is not a problem.
- It is not a serious problem if someone other than the addressee fills in the questionnaire.
- A mailing list (or a procedure for sampling) that is truly representative of the population of interest can be obtained.
- A careful attempt is made to estimate the nature of the nonresponse bias.
- The respondent population is literate and reachable by mail (requirements that may make mail surveys difficult in some developing countries).
- There is a high probability that a large proportion of the respondents will be interested in the topic and respond. Interest is likely to be high where the target population has a tie to the research sponsor—for example, for members of a trade association or a club, patients recently in a hospital, employees in the researcher’s organization, holders of an organization’s credit cards, or subscribers to certain magazines. Even in such situations, it is essential that an estimate of the nonresponse bias be made.
There are two conditions when a biased mail survey can be used. One is when the researcher really does not care about nonresponse. This would be the case whenever projectability is not important. In a great many situations, this is the case—for example, when one wants to learn whether there are any problems with a certain product or service or with the wording of a particular advertising message or product usage instruction. A second reason for using a biased study is when the study is exploratory, seeking a few ideas for an advertisement, testing a questionnaire, or developing a set of hypotheses that will be verified in a later nonbiased study.
If the decision is to go ahead with a mail study, there are a number of very important and basic techniques that should be employed:
- The cover letter asking for assistance should be made as motivating as possible (especially the first couple of sentences). The letter should
- be enthusiastic (if you are not excited about the study, why should the potential respondent be?);
- indicate the purposes of the study, if possible showing how it will benefit the respondent (for example, by helping provide better products and services);
- ensure anonymity (for all respondents or for those who request it); and
- ask for help, pointing out that only selected individuals are being contacted and each answer is important.
- The cover letter and the questionnaire should be attractive and professional with dramatic graphics and color where possible.
- The letter should be addressed to a specific individual.
- If possible, a motivating or prestigious letterhead should be used.
- If the sponsor of the study can be revealed and awareness of it would motivate respondents (for example, UNICEF or a key trade association), the letter should do so.
- The questionnaire should be kept as brief as possible and easy to follow and understand. It should be accompanied by a self-addressed stamped return envelope or a fax number. If the budget permits, one or more follow-up contacts (even by telephone) should be used to increase response rates.
- Giving advance notification to the respondent by telephone (better) or by postcard (worse) that the questionnaire is coming has been useful to increase the total number of responses, the speed of response, and the quality of the responses.
- In some cases, offering gifts, cash, or a chance at a major prize to those who mail back the questionnaire (or including a coin or a dollar bill “for charity”), and using stamped rather than metered return envelopes increases the number, speed, and quality of responses.
Lovelock and his colleagues strongly urge the use of personal drop-off and pick-up of what otherwise would be a mailed questionnaire. They believe this technique is particularly appropriate for lengthy questionnaires where considerable motivational effort and perhaps some explanation need to be carried out. The results of their study indicate that response rates can be as high as 74 percent. While this rate is achieved by incurring the costs of personnel to handle questionnaire delivery and pickup, they can be lightly trained and low cost (students fit this bill). Cost-per-returned-questionnaire was found to be no different from the traditional mail questionnaire. In addition, those delivering the questionnaire can eliminate individuals obviously ineligible for the study (if anyone is), and data about the respondent’s sex, age, living conditions, and neighborhood can be recorded by observation to enrich the database. Reasons for refusals can be elicited, and because the fieldworker can observe and screen the respondents, fieldworkers can provide the researcher with a very good sense of the nature of the nonresponse bias.
This approach requires that the study be done in a concentrated geographical area rather than nationally. The technique is particularly useful in organizational or office studies where respondents are easy to locate but so busy they would not ordinarily agree to a personal interview.
The Internet is another medium for contacting potential informants. Estimates by Nua Internet Surveys indicate that over 400 million people worldwide were on-line in early 2001, with over 40 percent of these in the United States and Canada. Internet surveys allow you to ask complicated series of questions tailored to each individual respondent. The Internet has additional advantages in that respondents can be shown visual and audio stimuli such as television commercials or print ads and asked for their reactions. Such research is difficult to design, but there are many organizations that can help design and implement Internet research projects. Among these are Nua Internet Surveys, Active Media Research, and Opti-Market Consulting.
The major caution when conducting such research is deciding whom to contact and then understanding who has responded. Clearly, those now on the Web are not representative of the entire population of a specific country, especially countries in early stages of Internet development. Furthermore, those active on the Internet are demographically skewed toward those who are young, better educated, and upscale economically. Finally, it is not possible to control who responds to an inquiry; the person claiming to be a forty-three-year-old woman with three children could well be her preteen son.
The Net can be an excellent medium to study a precise population known to be active on the Web, especially one with whom the manager has some sort of contact. For example, a nonprofit can study clients or partners with whom it has regular contact and achieve a reasonably representative profile of this group. In the same way, nonprofits could study donors who make their gifts over the Web.
Perhaps the best use of the Web is in formative research designed to get impressions, insights, and other guidance for managerial decisions. Where the manager is interested in getting help with directions to go in or for ideas to pursue with more formal research later, respondents on the Web can be very helpful.
Telephone interviewing with or without computer assistance is now the method of choice in most developed countries when a researcher needs to interact with respondents and achieve a projectable sample. A major reason is the lower cost. A large number of interviews all over the country or even internationally can be conducted from a telephone bank at one site in a very short period of time. Telephone interviewing also has some advantages that alternative techniques do not:
- The person-to-person contact yields all the advantages of motivating responses, explaining questions, and clarifying answers in face-to-face interviews. However, the number of cues that the interviewer as a person presents to the respondent are very few (sex, voice, vocabulary), yielding fewer chances for biasing effects than if the respondent also could react to the interviewer’s clothes, facial expressions, body language, and the like.
- The facelessness of the telephone interviewing situation can loosen the inhibitions of respondents who might withhold a personally embarrassing piece of information if a flesh-and-blood person was standing or sitting opposite them taking down information.
- Appointments and multiple callbacks permit interviewers to find precisely defined individuals (for example, high-income potential donors).
- Because telephone interviews are conducted from one location, a supervisor can easily monitor the work of interviewers through systematic eavesdropping, thus greatly increasing quality control and uniformity of techniques.
Despite these advantages, telephone interviewing has three major drawbacks. First, it is harder to motivate and establish rapport with potential respondents over the telephone than in person; it is easier for them to hang up the telephone than to close the door on an interviewer. This problem is worsening in the age of telemarketing as more unscrupulous sales organizations use the pseudo–telephone interview as a technique for generating sales leads.
Second, some things cannot be done in a telephone interview that could be done in person. No visual props can be used (one can still test radio commercials, though), and certain sophisticated measurements, such as scales with cue cards, cannot be used.
The third drawback is that increasing numbers of telephones are not publicly listed, especially in major cities, and many households have multiple numbers for their kids and their home-based businesses. For these reasons, the telephone directory is not a good sampling frame for interviewing in most major centers. Telephone books, which typically are published yearly, tend to exclude three kinds of households: phoneless households, unlisted households, and not-yet-listed households.
Over 20 percent of all owners of residential lines choose not to be listed. In larger cities like Chicago, Los Angeles, and New York, this figure can be as high as 40 percent or more. These households are more likely to represent younger heads, single females, those with middle rather than high or low incomes, nonwhites, and those in blue-collar or sales and service occupations. Not-yet-listed households include those moving into an area or out of their parents’ house since the last directory was published. These may be crucial targets for many marketing efforts, such as those aimed at immigrants, recently unemployed, or college graduates.
For these reasons, those planning to do telephone sampling are encouraged to use some variation on random digit or plus-one dialing. These approaches involve first randomly selecting listings from the existing directory to ensure that the researcher is using blocks of numbers the telephone company has released. At this point, the last one or two digits in the listing number are replaced with other randomly selected numbers. Alternatively, the researcher could simply add one to the last digit in the selected number. These approaches still omit blocks of numbers released since the previous directory was published, but this should be a nominal bias (mostly affecting new movers) when compared with that of using only numbers in the directory itself.
The random digit dialing approach has its drawbacks. Most important, the technique increases costs because often the randomly selected numbers turn out to be ineligible institutions or fax lines. (Indeed, in one unusual case, an interviewer randomly called a radio call-in show and, to the amusement of both the audience and the interviewer’s colleagues and supervisor, proceeded to interview the host on the air.) However, this cost is one that most researchers are usually willing to bear.
After mail surveys, researchers planning original fieldwork consider face-to-face or telephone interviews. For a great many purposes, face-to-face interviews are the ideal medium for a study for several reasons. For example, respondents (once contacted) can be strongly motivated to cooperate. By choosing interviewers carefully, matching them as closely as possible to predicted respondent characteristics, and giving them a motivating sales pitch, the refusal rate can be kept very low. This can be critical, for example, when studying small samples of high-status individuals who must be included in the study and will participate only if a high-status interviewer personally comes to them and asks questions. Also, stimuli can be shown to or handled by respondents in particular sequences. This is very important where one wishes to assess responses to promotion materials such as advertisements or brochures or where a product or package needs to be tasted, sniffed, or otherwise inspected.
As in telephone interviews, there are opportunities for feedback from the respondent that are impossible in self-report, mail, or computer methods. This permits elaboration of misunderstood questions and probing to clarify or extend answers.
The interviewer can add observation measures to traditional questioning. For example, if a set of advertisements is shown, the interviewer can informally estimate the amount of time the respondent spends on each stimulus. Facial expressions can be observed to indicate which parts of the messages are confusing to respondents and need to be refined. Respondents’ appearance and the characteristics of their home or office can be observed and recorded. Sex, race, estimated age, and social status can be recorded without directly asking respondents (although some measures, like social class, should be verified by alternative means).
Against these considerable advantages are some important, often fatal, disadvantages. For example, because the interviewer is physically present during the answering process, two sorts of biases are very likely. First, the interviewer’s traits—dress, manner, physical appearance—may influence the respondent. The interviewer may be perceived as upper class or snooty, or slovenly or unintelligent, which may cause the interviewee to adjust, sometimes unconsciously, his or her answers. Second, simply because there is another person present, the respondent may distort answers by trying to impress the interviewer or hide something in embarrassment.
In addition, the fact that interviews are conducted away from central offices means that the performance of interviewers is very difficult to monitor for biases and errors (as compared to central office telephone interviewing). Also, physically approaching respondents can be very costly when there are refusals or not-at-homes. Callbacks that are easy by telephone can be very costly in face-to-face interviews.
Finally, delays between interviews are great. Telephone lines or the mail can reach anywhere in the country (and most parts of the world) very quickly and with relatively little wasted time. Travel between personal interviews can be costly.
The last two factors tend to make the field staffing costs for face-to-face interviews in most developed countries very high. They are so high that one-on-one personal interviews are probably prohibitively expensive for the low-budget researcher except where the technique’s advantages are critical to the study (for example, graphic materials must be shown or handled or high-status subjects, such as doctors or lawyers, must be interviewed), the sample size is very small, or the study is local. Situations in which these criteria would be met include studies in local neighborhoods, malls, offices, or trade shows.
In the developing world, personal interviews are still relatively low cost and often the only way to reach target audiences. Furthermore, potential respondents often have limited education and reading and writing skills and thus may need help in answering even relatively simple questions (such as rating scales). If the decision is to go ahead with personal interviewing, there are a number of low-cost sampling techniques to consider.