Use of Market Research Panels in Transit (2013)

Chapter: Chapter Two - Literature Review

Suggested Citation:"Chapter Two - Literature Review ." National Academies of Sciences, Engineering, and Medicine. 2013. Use of Market Research Panels in Transit. Washington, DC: The National Academies Press. doi: 10.17226/22563.

CHAPTER TWO: LITERATURE REVIEW

This literature review consists of four sections:

• The first section provides a brief history of survey sampling and the theoretical basis for market research analysis, providing context for what became the standard procedures and expectations of market research. This is followed by an overview of some of the issues facing market research today, and how they are affecting the statistical underpinnings of market analysis.
• The second section introduces traditional panel surveys and panel survey techniques. A summary of a typical traditional case example, the Puget Sound Transportation Panel Survey, is provided in Appendix A.
• The third section introduces the relatively newer concepts of online panel research, with definitions particular to online panel surveys and techniques, issues with online panel research, and the special concerns of market research in the public sector.
• The fourth section circles back to the concerns raised in the first section and looks at what lies ahead for the market research industry on these issues.

MARKET RESEARCH CONTEXT

A Brief History of Survey Sampling

In a 2011 special edition of Public Opinion Quarterly, Brick provides an article on the future of survey sampling (Brick 2011). For the purposes of the article, survey sampling is defined as the methodology for identifying a set of observations from a population and making inferences about the population from those observations. Prior to 1934, full enumeration was considered necessary to understanding the characteristics of a population; everyone needed to be contacted.

Neyman, in his 1934 article "On the Two Different Aspects of the Representative Method: The Method of Stratified Sampling and the Method of Purposive Selection," planted the seeds that resulted in the overthrow of full enumeration and established the paradigm of survey sampling. The move to telephone surveying in the mid-20th century was another significant change in survey sampling methods. The pressures for timely and cost-efficient estimates were stimulants for change then, and are even more relevant today.

The article by Brick draws from a 1987 article by Frankel and Frankel, "Fifty Years of Survey Sampling in the United States." In the 1987 article, sampling is described as having two phases: (1) the establishment of the basic methods of probability sampling; and (2) innovations in the basic methods to accommodate new technologies, such as telephone sampling and computerization. Since Frankel and Frankel wrote their article in 1987, the Internet has become another technological advance requiring innovations in sampling to accommodate the technology. Brick notes that 10 years earlier no generally accepted method of sampling from the Internet had been established, and that as of the writing of his article in 2011, this was still the case.

As developed by Neyman in 1934, probability survey sampling became the basis for virtually all sampling theory, with a very specific framework for making inferences. The framework assumes that all units in the population can be identified, that every unit has a known, nonzero probability of being selected, and that this probability can be computed for each unit. Once the sample is selected, the framework assumes that all characteristics can be accurately measured. Non-response and the inability to include portions of the population (coverage error) violate the pure assumptions of probability sampling. A variety of techniques has been developed to adjust for non-response and coverage error, such as model-based and design-based sampling methods.

Technological advances that have changed sampling methods include the introduction of telephone surveying, which eventually replaced face-to-face interviews as the primary mode of household surveying; the shift from landline telephones to cell phones; and the advent of the Internet. Each of these not only affected methods of sampling but is intertwined with the others, with changes in one leading to new developments in the others.

New Concerns with Traditional Quantitative Research

The science of traditional quantitative market research rests on two fundamental assumptions: (1) the people who are surveyed approximate a random probability sample; and (2) the people who are surveyed are able and willing to answer the questions they are asked (Poynter 2010). In addition to these concerns with the theoretical underpinnings of market research, operational concerns are also putting pressure on traditional methodologies.

Random Probability Sample

The random probability sample is an essential ingredient of traditional market research. Without it, quantitative market research studies and analysis cannot be conducted. It is this underpinning of sampling theory that allows the calculation of sampling error and the expression of confidence in the results, such as results being accurate to ±3% at a 95% confidence level. As telephones became standard in every household, random-digit-dial techniques for landline telephones became the foundation of probabilistic sampling, with a solid theoretical basis.

Results for a survey conducted between January and June 2011 by the National Center for Health Statistics found that 31.6% of American homes had only cell phones, and that an additional 16.4% of homes received all or almost all calls on wireless telephones despite also having a landline telephone (Blumberg 2011). Experts disagree, but it has been suggested that if more than 30% of the population has a 0% chance of being selected, then a random probabilistic sample cannot be drawn (Brick 2011). The implication is profound: with the incidence of landline phones declining, random-digit-dial telephone surveying, the mainstay of traditional quantitative market research, no longer provides a probabilistic sample.

Cell phones are considered unsuitable for random-digit dialing for a variety of reasons, including the possibility of respondents having more than one cell phone, resulting in duplication within the sample; respondent resistance; and legislation that prohibits the practice. Online recruitment is fast and economical, but does not provide a probabilistic sample, as is discussed later in this chapter.

Willingness and Ability to Answer Research Questions

Market researchers started out assuming that people could answer direct questions about their attitudes and behavior. Early on, it became clear that these questions were difficult to answer, so psychometrics and marketing science methodologies were developed to facilitate responses and analysis of results.
More recently even these techniques have been challenged, as the industry realizes that respondents are unreliable witnesses about themselves.

Operational Issues

Other problems with the traditional market research process are operational. It is perceived as slow and costly, and increasingly, organizations are relying on techniques that may provide quick results at the expense of quality. The cost of traditional survey research is often driven up by the decline in the use of landline telephones, which makes it more difficult to obtain a traditional random probabilistic sample. In addition, more people have answering machines to screen calls or otherwise refuse to participate, again making it difficult to achieve a sufficient sample without additional time and expense. Cost is arguably the single most important factor in the search for new survey techniques, and the Internet offers a potential solution, even though it does not provide a probabilistic sample.

This shift from probability sampling to non-probability sampling is a paradigm change of the magnitude of the 1934 shift from enumeration to probability sampling theory, a shift that, Brick notes in his article, was spurred by the cost of enumeration. Today researchers find themselves in a similar situation, driven by rising costs away from probabilistic sampling toward non-probabilistic sampling.

The issues raised here may fundamentally change the way all market research is conducted. How the market research industry is responding and what may lie ahead are discussed in the last section of this chapter, "The Future of Market Research."

TRADITIONAL MARKET RESEARCH PANELS

Panel surveys have been conducted for many years, and have been used in the transportation industry for topics such as travel behavior change and tracking customer satisfaction. The concepts discussed in this section are applicable to both traditional and online panel research.
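The sampling-error calculation mentioned earlier (for example, results accurate to ±3% at a 95% confidence level) follows from the normal approximation for an estimated proportion. As a rough sketch, not drawn from any of the reviewed reports, the margin of error for a simple random sample can be computed as:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of the normal-approximation confidence interval for a
    proportion p estimated from a simple random sample of size n.
    z = 1.96 corresponds to a 95% confidence level."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly 1,067 completed responses yield the familiar +/-3% at 95%
# confidence in the worst case (p = 0.5):
moe = margin_of_error(0.5, 1067)
print(round(moe * 100, 1))  # -> 3.0
```

Note that this formula assumes a probability sample; as the passage above argues, it cannot legitimately be applied to a sample in which a large share of the population had no chance of selection.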
Definition of a Traditional Panel

The meaning of a market research panel depends on the context, industry, and time period in which the term is being used. The American Marketing Association (AMA) acknowledges this with the following distinctions:

• True panel: A sample of respondents who are measured repeatedly over time with respect to the same variables.
• Omnibus panel: A sample of respondents who are measured repeatedly over time but on variables that change from measurement to measurement [http://www.marketingpower.com/_layouts/Dictionary.aspx?dLetter=P (accessed Mar. 11, 2012)].

Traditional Panel Survey Techniques

Traditional panel members were recruited through probabilistic sampling techniques so that survey results could be extrapolated to the general population. Developing and maintaining a panel was an expensive proposition, made more difficult by the challenge of keeping track of people and households as they moved and changed phone numbers. Panel survey research was typically used to determine individual travel behavior changes over time, such as to understand the relationship between changes in household characteristics and choice of travel mode. Another use was for "before and after" studies to measure the impacts of a change in policy or service; for example, adding a new light rail line or carpool lanes. These studies were often conducted by a metropolitan planning organization (MPO) for the purpose of developing regional travel demand and forecasting models. Panels were rarely set up and maintained for the purpose of ad hoc, on-call market research (omnibus panels).

Panel data collection is described as "a survey of a group of preselected respondents who agree to be panel members on a continuous basis for a given period of time and provide demographic data, allowing selection of special groups and permitting the use of surveys to monitor responses over time" (Elmore-Yalch 1998). This maximizes the use of a sample in that the sampling need be done only once, after which the panel is accessible for future research efforts. Panel member attrition and replacement is an element of maintaining the panel, and is discussed elsewhere in this chapter.

The remainder of this section is a summary of An Introduction to Panel Surveys in Transportation Studies (Tourangeau et al. 1997), which provides a solid overview of the basics of traditional panel survey research, especially as applied to travel behavior studies. The report has a four-fold purpose for the development and implementation of travel behavior studies: (1) to highlight the differences between cross-sectional and panel surveys; (2) to discuss the limitations of both cross-sectional and panel surveys; (3) to identify situations where panel surveys are the preferred method; and (4) to provide guidelines for designing a panel survey and maintaining the panel.

A panel survey approach is recommended when the purpose of the survey is to develop travel demand models and forecast future demand; to measure and understand trends in behavior; to assess the impact of a change in transportation policies or services; or to collect timely information on emerging travel issues.

Definition of Cross-Sectional and Panel Designs

There are two broad types of surveys: cross-sectional and panel. A cross-sectional survey uses a fresh sample each time, whereas a panel survey samples the same persons (or households) over time. In addition, the questions may be the same or may change with each survey. This creates four basic approaches to travel behavior surveys.

One-time cross-sectional surveys provide a "snapshot" of travel behavior at a particular point in time, and show how behavior differs among members of the population, but provide no direct information on how it changes over time. This type of survey makes no attempt to replicate conditions or questions from previous studies, and as a result is not well suited for assessing trends in population behavior.

Repeated cross-sectional surveys measure travel behavior by repeating the same survey on two or more occasions. In addition to repeating the questions, the sampling is conducted in a similar manner to allow comparisons between or among separate survey efforts. Repeated cross-sectional surveys are sometimes referred to as a "longitudinal survey design" because they measure variations in the population over time. A more restrictive definition of a longitudinal survey design is one in which survey questions are repeated with the same sample over time.

Longitudinal panel designs collect information on the same set of variables from the same sample members at two or more points in time. Each time the panel is surveyed, it provides what is called a "wave" of data collection. Typically, each wave consists of the same core questions along with some new questions. In a travel behavior survey, the panel provides information on how the travel behavior of each participant evolves in response to changes in the travel environment, household background, or other factors.

Rotating or revolving panel surveys are a combination of repeated cross-sectional and panel designs, in that they collect panel data on the same sample for a specified number of waves, after which portions of the panel are dropped and replaced with comparable members. The strength of this design is its ability to allow for both short-term panel member analysis and long-term analysis of population and subgroup change. Like repeated cross-sectional designs, rotating panels periodically draw new members from the current population, obtaining similar measurements on them.

Benefits of Panel Designs

The most important benefit of a panel survey is that it directly measures changes at the individual level and can provide repeated measurements over time. This rich source of information on personal and household behavior is essential for determining causal relationships between travel behavior and the factors that influence personal travel decisions, and for developing predictive models of personal travel behavior. The same benefit applies to the ability to measure and understand trends in population behavior.

Panel studies can be especially useful for before-and-after surveys that measure the impacts of transportation policy and service changes on travel behavior, rider attitudes, and safety. For example, a before-and-after study of the implementation of a new rail line (replacing existing bus service) showed that a shift in mode split occurred after the implementation of the new line. Results using a cross-sectional survey showed a shift from auto to train after the opening of the rail line, suggesting overall growth in transit use by shifting car drivers to rail riders. A panel study measuring individual-specific changes captured a shift from bus to car in addition to the shift from car to rail. This finding fundamentally changed the implications of the cross-sectional study: the new service attracted former car drivers, but also shifted former bus riders into cars.

Additional benefits of the panel approach include statistical efficiency (it requires a smaller sample size); lower cost (it requires fewer surveys); and speed (easy access to the panel allows faster survey implementation than when a fresh sample must be obtained).

Limitations of Panel Designs

Three primary limitations of panel surveys are identified: panel attrition, time-in-sample effects, and seam effects.

1. Panel attrition refers to panel member non-response in later waves of data collection. The Puget Sound Transportation Panel conducted its first wave of surveys in 1989. The fourth round of surveying in 1993 had a participation rate from the original panel members of about 55%, meaning 45% of the panel had left and needed to be replaced.
2. The time-in-sample effect refers to reporting errors or bias resulting from participants remaining in the panel over time. Also called conditioning, rotation bias, or panel fatigue, it generally refers to respondents reporting fewer trips or fewer purchases in later rounds of a panel survey than in earlier ones.
3. Seam effects are another type of reporting error, and refer to reported changes clustering at the beginning or end of the interval between rounds rather than at other times covered by the interview.

Design Issues in Conducting a Panel Survey

There are four design issues that need to be considered in conducting a panel survey: definition of the sampling unit; the number and spacing of rounds; the method of data collection; and sample size.

1. Most traditional travel surveys conducted by MPOs use households as the sampling unit; however, sampling individuals is another option. When a household is the sampling unit, the panel survey sample can become complicated as household members are born, die, divorce, or mature and move out. For travel surveys, the report suggests using the household as the sampling unit, following initial respondents to new households, and adding any additional household members to the panel.
2. The number and spacing of survey rounds depend on factors such as the rate of change in travel behavior and the need for up-to-date information. If changes in travel behavior are the result of external factors, such as rapidly increasing gas prices, or if administrative reporting requires monthly or quarterly updates, this may shorten the intervals between survey waves. Panel travel surveys are typically collected at six-month or annual intervals, balancing the potential for respondent burden with the desire for regular data collection.
The report recommends annual data collection for travel behavior studies.

3. Data collection methods differ in terms of cost, coverage of the population, response rates, and data quality (inconsistent or missing data). In-person data collection is typically the most expensive, but produces the highest coverage, the highest response rates, and potentially the most accurate data, as the interviewer can assist the respondent. Telephone data collection tends to be the next most expensive methodology, and excludes the population without a telephone. This exclusion used to be minor, since almost all households had a landline phone, but since the report was written the percentage of mobile phone-only households has grown significantly. Data collection by mail is the cheapest of the three traditional modes, but has the lowest response rates and poorest data quality. [Since the report was written, Internet surveying has become another inexpensive alternative method of data collection. Online surveying is covered in other portions of the literature review.] The report recommends using the telephone for data collection in the first wave of a travel behavior panel study and considering less expensive methods for successive waves, if necessary.
4. Selecting the sample size requires specifying the desired level of precision for the survey estimates. The precision level is determined by the requirements for analyzing the goals and objectives of the survey, typically rates of change in travel behavior at the household or sub-regional level. After the level of precision is determined, traditional statistical formulas can be applied to determine the sample size, which is then adjusted for anticipated non-response, attrition, and eligibility rates.
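The sequence described in item 4 (specify precision, apply the standard formula, then inflate for anticipated non-response, attrition, and eligibility) can be sketched as follows. The rates used here are hypothetical illustrations, not values from the report, although the 0.55 retention rate echoes the Puget Sound panel's fourth-wave participation figure mentioned earlier:

```python
import math

def base_sample_size(moe: float, p: float = 0.5, z: float = 1.96) -> int:
    """Simple-random-sample size needed to estimate a proportion p
    within margin of error moe at the confidence level implied by z."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

def adjusted_sample_size(n: int, response_rate: float,
                         retention_rate: float, eligibility_rate: float) -> int:
    """Inflate the recruited sample so that, after first-wave non-response,
    later-wave attrition, and eligibility screening, roughly n completed
    cases remain."""
    return math.ceil(n / (response_rate * retention_rate * eligibility_rate))

n = base_sample_size(0.05)  # +/-5% at 95% confidence
start = adjusted_sample_size(n, response_rate=0.6,
                             retention_rate=0.55, eligibility_rate=0.9)
print(n, start)  # -> 385 1297
```

The design point this illustrates is that modest-looking adjustment rates compound: a 385-case precision target can require recruiting well over a thousand households at the outset.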
Issues with Maintaining the Panel

The report points out three issues that need to be considered in maintaining a panel: freshening the sample, maintaining high response rates across waves, and modifying the questionnaires across rounds.

1. "Freshening the sample" is the process of adding new panel members over time to ensure that the sample accurately reflects changes in the population from newly formed households or those who have recently moved to the study area. The longer the panel is continued, the less likely it is to represent the study area. The report suggests that, if a panel continues for more than five years or there is significant in-migration to the study area, a supplemental sample be implemented. Another reason for freshening the sample is to offset attrition, recruiting new panel members comparable to those who drop out and thereby maintaining the panel's make-up and sample size for the duration of the panel effort. The report suggests that the initial sample size be large enough to accommodate anticipated attrition in later waves, and that steps be taken to minimize attrition. Replacement of panel members should be done only as a last resort.
2. There are three techniques for maintaining high response rates: tracing people or households who move; maintaining contact with panel members between rounds; and providing incentives for participation. Methods of tracing panel members who move include mailing a letter several months in advance of the next wave requesting updated contact information, and asking the post office to provide new addresses rather than forwarding the mail, to ensure that the contact files are updated. If new contact information is not provided, researchers may attempt a manual search through existing databases. The report suggests that a protocol be developed at the outset of the survey effort to track respondents between waves and reduce attrition.
Another way of reducing attrition is to maintain respondent interest and contact information between waves by sending postcards, holiday greetings, and survey results. Incentives such as small amounts of cash can also be helpful. Cross-sectional surveys have shown that a small prepaid incentive (for example, a $2 bill) is effective in increasing participation rates and reducing attrition. Unfortunately, there was limited research at the time as to the effect of incentives on panel surveys over time. It is noted that non-respondents in one wave may still participate in the next, so that only those who refuse to respond to more than one round of the study would be dropped from the panel.

3. A defining element of a traditional panel survey is the ability to administer the same questions to panel members over time, which is what provides the direct measurement of change that is so valuable to travel behavior studies. Two situations may make it necessary to modify the questionnaire across waves. First, a new issue may arise that can be advantageously posed to the panel. If the data are collected only once, this amounts to a cross-sectional survey; if the question is repeated in later waves, it becomes part of the panel effort. Although this is easier, faster, and less expensive than conducting a separate study, it can add to respondent fatigue by making the questionnaire longer. For this reason, it is suggested that new questions be kept to a minimum. The second reason for changing a question is that there is a problem with the question itself (e.g., it is poorly worded, yields unreliable results, or becomes irrelevant). In this instance, it is important to revise the question as soon as possible. The report recommends that a calibration study be done to determine the effect of any changes to core questions.

Weighting the Panel Data

The final section of the report deals with how to weight panel survey data. Weighting is done to account for differences in the probability of being selected, to compensate for differences in response rates across subgroups, and to adjust for random or systematic departures from the composition of the population. Weighting is done at two points: after the initial wave, following the procedures for standard cross-sectional surveys; and then after each subsequent wave, to account for changes in panel membership. Although weighting is fairly straightforward for the first wave, subsequent waves can be complicated if the sampling unit is a household, as is typical of travel behavior panel studies. Elements that must be taken into account include how to treat households that add or lose members over the course of the panel, and how to define a "responding" or "non-responding" household; for example, whether all survey waves must be completed by all household members or only by certain household members. It is sometimes necessary to generate different weights for different survey analyses. Detailed guidelines for developing panel survey weights are provided in the report appendices.

ONLINE MARKET RESEARCH PANELS

This section discusses the types of online panels, sampling strategies, and issues and concerns with using the Internet for market research purposes. The current literature reviewed in this synthesis discusses sampling and recruitment for online panels using the Internet, e-mail, or other new technologies, such as quick response (QR) codes scanned by a smartphone. Multi-frame sampling, in which a mix of sampling techniques is used for developing the panel, poses additional issues that are only now being explored and disseminated within the market research industry. Because this is an emerging area of research, this literature review does not include multi-frame sampling.
Types of Online Panels

Three types of panels are discussed by Poynter in his 2010 book, The Handbook of Online and Social Media Research: Tools and Techniques for Market Researchers. The first is a traditional panel, typically called a client panel or in-house panel, developed to meet specific criteria and recruited either in-house by the agency or with the assistance of a vendor. The panel can be recruited through a variety of techniques, including telephone; in-person intercepts (on a vehicle or on the street); existing agency customer databases; or online, through the agency website or pop-up invitations to join the panel. The critical elements of this type of panel are the definition and control exercised by the agency, and the agency's intention to maintain the panel over time.

An online access panel, also referred to as an access panel or online panel, is developed by independent market research firms and can provide samples for most markets that have a significant volume of research activity. The researcher provides the panel company with the desired sample specification, and then either the researcher provides a link to the online survey, or the panel company scripts and hosts the online survey.

The third type of panel is an online research community, also known as a market research online community or MROC, which combines attributes of panel research with elements of a social media community. Although it is sometimes grouped with social media techniques, the online research community has been included here because it meets the definition of "a group of persons selected for the purpose of collecting data for the analysis of some aspect of the group or area."

In-House Panels

As the name implies, in-house panels are owned by the research department of the agency, and are not purchased from a vendor's existing panel. The in-house panel is used for market research, not public relations, marketing, or sales; and panel members are aware that they will be contacted for research, insight, and advice. The primary advantages of in-house panels are cost savings, speed of feedback, and control over the panel. Disadvantages include the workload required to manage a panel and the possibility that panel members may become sensitized to the research approaches.

In-house panels can be run simply from a list of people and an off-the-shelf survey program, using e-mail and a way to unsubscribe from the panel. For small-budget projects or a low-key exploratory concept, a simple approach may be the most appropriate. More sophisticated panel management may require methods to prevent people from signing up multiple times, the ability to draw sub-samples, protocols for handling and managing incentives, panel member replacement strategies, quotas on survey responses, online focus groups or bulletin board groups, and rules for creating an online panel community.

The more sophisticated the approach, the more advantageous it is to contract with a vendor to run the panel. Using internal staff may make the research more responsive to management needs while saving on consultant fees. A vendor, however, can handle more work without overburdening agency staff, using employees familiar with the latest thinking and best practices. These different strengths often lead to a strong partnership between the vendor and staff.

Traditionally, panel research was done with standard questionnaires, implemented by mail or telephone.
New developments in technology and the Internet have made it easy to expand the activities of a panel even further: online focus groups; photo projects in which panel members take pictures with their cell phones and upload them to an agency website; brainstorming through collaborative systems such as "wiki" sites; and quick "fun polls" that encourage participation, generate panel engagement, and provide almost instant answers to questions of the moment.

Tips for using an in-house panel include:

1. Manage the expectations of panel members by letting them know at the outset how many surveys/activities they should expect.
2. Let panel members know you value their participation and that they are making a difference.
3. Recognize that panels will usually be skewed toward members who are knowledgeable about the product or service, and that they may not represent the opinion of the general public.
4. Complement conventional incentives (such as cash) with intrinsic rewards, such as information about upcoming events or new products before they reach the general market.

Online Access Panels

Online access panels have fundamentally changed how market research is conducted. An online access panel "is a collection of potential respondents who have signed up with a vendor which provides people for market research surveys." These respondents are aware that they are joining a market research panel and that they will receive invitations to online surveys. The vendor keeps some information on the panel members so that it can draw samples, if requested, but does not share this information with the client. Panel maintenance, including the provision of incentives, is the vendor's responsibility.

In selecting a panel vendor, six factors need to be considered:

1. Does the vendor provide only the sample, or will it also host surveys? If the latter, can the survey maintain the agency's brand, or does it become folded into the vendor's survey branding?
2. What is the quality of the panel? Not all panels are created equal, and the results can vary based on the panel used. ESOMAR formulated "26 Questions" (later, "28 Questions") for agencies to ask vendors in order to understand their procedures and the potential quality of the survey results. The questions can be found at http://www.esomar.org/index.php/26-questions.html.
3. In looking at vendor costs, exercise caution to ensure that price quotes cover similar services so they can be compared correctly.
4. Make sure that the vendor has the capacity to complete the study, including any potential future waves of the study. It is common practice for panel survey vendors to outsource a portion of or even the entire project to another firm if they do not have the resources to complete it as scheduled. Outsourcing to another panel survey firm can result in double-sampling people who are members of both panels. More importantly, because different panels often produce varying results, outsourcing can create confusion as to whether an apparent change is real or a reflection of the panel used.
5. The more data a vendor has on its panel members, the more closely a survey can be targeted to the appropriate respondents. This results in fewer respondents being screened out and a shorter survey with fewer screening questions.
6. As with any service, it is helpful to have a supportive vendor who is willing to stay late if needed, help clean up errors, and respond quickly to issues and concerns.

After selecting a vendor, it is essential to ensure a good working relationship. This can be facilitated by:

• Clarifying the quote for the project to make sure it includes all work needed;

• Booking the fielding time for the job as soon as the vendor is selected so there is flexibility if dates need to be changed for holidays, computer maintenance, etc.; and
• Developing and agreeing on the timeline, including finalizing the sample specification, scripting the survey or sending the link to the survey, having a soft launch to test the survey, agreeing on the full implementation and end date, and specifying the frequency of communication with the panel company, especially regarding problems that may occur.

Once the survey is in the field, it is important to monitor progress and report any issues immediately to the panel vendor, including problems reaching the target quotas for completed surveys. The sooner action is taken, the easier it will be to rectify the issue. It is advisable to work closely with the vendor supplying the panel to take advantage of its experience with data issues in long surveys and with improving the survey experience.

Online Research Communities

Using social media to create online research communities, or MROCs, for research purposes is a relatively new field. Research communities have been offered by third-party vendors since about 2000, but did not become widely used until about 2006. Online research communities typically have a few hundred members and straddle the divide between quantitative and qualitative research. The communities can be short-term, developed for one research question and then dissolved, or a long-term resource, allowing research on a wide variety of topics over a period of six months or more.
The benefits of online research communities are that they provide access to the authentic voice of the customer; go beyond the numbers to provide qualitative discussion; provide quick turnaround at a low marginal cost because the sampling and recruitment are already complete; and create an active dialogue with customers, letting them feel they "make a difference."

These communities can either be open to anyone who wishes to join (within the requirements of screening criteria, such as age or geographic location), or closed, in which case panel members are invited to participate. It is important to note that open communities tend to be more about involvement, advocacy, and transparency than about insight and research.

Incentives are important to maintaining a high level of participation for all types of research panels; however, several issues must be considered when structuring an incentive program. (It should be noted that it is illegal for some public agencies to use incentives.)

The argument for using incentives is that they represent a small payment for the time and contributions of the panel members, and may be necessary to obtain the level of engagement needed to make the community succeed. The type of incentive (cash versus intrinsic rewards) must also be considered. A chance to win a transit pass or seeing the results immediately upon completing an instant poll are examples of incentives. Finally, the agency must decide how to allocate the incentives. Options include giving all members an incentive regardless of participation levels; giving the incentive to members who participate in a specified time frame; offering a chance to win a prize; and awarding a prize to the "best" contribution in a specified time frame. Agencies should avoid starting with a high-value incentive, because lowering the incentive later will strike panel members as taking away a benefit, resulting in a loss of participation.
As with all research techniques, the online community can be developed and maintained either in-house or through a vendor. Online research communities require significant and continuous management. Even if the community is maintained by a vendor, significant input by staff is needed to ensure that the community is addressing issues of concern to the agency.

The advantages of having a research-only community are that it can be much smaller than broader-topic communities, and members may be more open if they know they will not be "sold to" by another interest. Opening the community up to other department managers may result in too many surveys and e-mails being sent to members, with research being pushed aside in favor of other topics. Likewise, it is important not to allow community members to usurp the purpose of the research community for their own agendas. Part of managing the community is monitoring and ending any member activity that begins to create an agenda separate from that of the agency, even removing a panel member if necessary.

The steps to and guidelines for setting up an online community include determining:

• What type of community is best (short versus long term, open versus closed, and the number of members);
• The "look and feel" (i.e., makeup) of the community;
• Community tools;
• Methods of recruiting members;
• Terms and conditions (including intellectual property, member expectations, restricted activities, anti-community behavior, privacy and safety, incentive rules, eligibility, data protection and privacy), and the ability to change terms and conditions;
• Methods of moderating and managing communities (moderator function, community plan, dealing with negativity, creating member engagement); and
• Requirements for finding and delivering insights.

The rapid pace of change among social media makes it difficult to project how this type of research activity will be conducted in the future.
Four considerations are identified in Poynter's book:

1. Market research organizations typically do not allow activities that would influence the outcome of the

research. Because interaction and relationships built between community members and the sponsoring agency may sensitize panel members to organizational issues, MROCs may be declared "not research."
2. Currently, online research communities are used more for qualitative work than for large-scale quantitative work. The ability to expand online research to larger projects (e.g., international research) will help establish this as a mainstream research tool.
3. Respondent fatigue may set in, resulting in a less engaged community. This may be especially true if panel members belong to more than one community.
4. Alternative (not research-based) methods may be more successful, such as having a very large community that can serve both marketing and research functions, or tapping into other existing communities to conduct research rather than establishing one specific to the organization.

One of the primary concerns with online research communities has been that the relationship with the organization may cause heightened brand awareness and affinity, and that this will lead to a positive bias in research results. However, Austin notes in an article in Quirk's Marketing Research Media (Austin 2012) that while engagement builds a relationship with the company, community members remain candid and critical despite their relationship with the brand. If anything, members became slightly more critical as their tenure lengthened, not less. The article recommends that in moving to a new research paradigm, organizations make two changes from the traditional research approach to take advantage of this finding: trade anonymity for transparency, because transparency builds engagement; and trade distance for relationship, because relationship creates candor.
Together, the community members "work harder, they share more and they stay engaged in the research longer."

Online Panel Sampling Techniques

A few online panels employ traditional random sampling techniques, such as random-digit-dialing, and then conduct the research online; but the majority of panels are recruited online using a non-probability approach, such as pop-up or web banner ads. The AAPOR Report on Online Panels (Baker 2010) covers both types of panels. This review covers probability and non-probability sampling techniques as they relate to panels; it also discusses "river sampling," although it is not a panel sampling technique per se. Lastly, it provides an overview of strategies for adjusting non-probability samples to represent a population.

Probability sampling techniques for online survey research have been slow to be adopted, despite being available for more than 20 years. The recruitment is similar to that for voluntary, non-probabilistic samples, except that the initial contact is based on probabilistic sampling techniques such as random-digit-dialing, or other techniques for which the population is known. Computers may sometimes be provided to persons with no online access to remove the bias that would result from including only persons or households with Internet access. Once the sample is determined, panels are built and maintained in the same way, regardless of whether they are probability- or non-probability-based. A probability-based sample is more expensive to develop than a non-probabilistic sample. Consequently, systematic replacement or the replacement of panel members lost through attrition is also more costly. The benefit is that a panel can be built that represents the general population and allows analysis of results based on probability theory.

Non-probability and volunteer online panel members are recruited through a variety of techniques, all of which involve self-selection.
The invitations to join a panel can be delivered online (through pop-up or banner advertisements), in magazines, on television, or through any other medium where the target population is likely to see the advertisement. The recruitment entices respondents by offering an incentive, emphasizing the fun of taking surveys, or using other proven techniques. A common practice in the industry for developing online panels is co-registration agreements: an organization compiles e-mail lists of its website visitors, asks if they would like to receive offers from partner agencies, and then sells the e-mail list to a research panel company. Off-line recruitment strategies include purchasing an organization's customer contact database and asking participants in a telephone survey if they would like to become part of an online panel for future surveys. A technique used for both online and off-line recruitment is to ask existing panel members to refer their friends and relatives, sometimes offering a reward for each new panel member recruited. No two panels are recruited the same way, and panel research companies carefully guard their methodologies for recruiting panel members.

River sampling is an online technique that uses pop-up surveys, banner ads, or other methods to attract survey respondents when they are needed. In river sampling, the ad presents a survey invitation to site visitors and then directs or "downstreams" them to another, unrelated website to complete the survey. (Extending the analogy, a panel would be a pond or reservoir sample.) Knowing on which websites to place the ads is critical to the success of river sampling. This technique is not related to developing a panel, although sometimes the respondent is invited to join a panel at the completion of the survey. There is generally a reward of some kind for completing the survey, such as cash, online merchant gift cards, or frequent flyer miles.
This type of sampling may be on the rise as researchers seek larger and more diverse sample pools and respondents who are surveyed less frequently than those provided through online access panels.

The AAPOR report provides an overview of strategies for adjusting self-selected (non-probability-based) online panels, and reviews complex weighting, quotas, benchmarking, and modeling methodologies for creating a more representative

sample. Complex weighting uses detailed information about the population to balance respondents so that they mirror the population. Quotas, which match key demographics of the respondents with the demographics of the target population, are the most common technique. Benchmarking keeps the sample specifications the same over multiple waves, under the assumption that any changes are the result of changes in the element being measured, regardless of whether the sample is representative of the population. Modeling refers to linking the benchmark results to the real world to model what a survey score of X means in terms of actual outcomes.

When applying statistical significance testing to a panel sample, it is important to recognize that the significance describes the panel, not how representative the sample is of the population. "The error statistics indicate how likely it is that another sample from the same panel will be different, which is a valid and relevant measure of reliability" (Poynter, p. 74). It is not, however, an estimate of the population sampling error, as is commonly understood with traditional random (probabilistic) sampling. Response rates for online access panels have little impact on how representative the research is, but they do provide a measure of the quality of the panel.

Issues and Concerns with Online Panel Surveys: AAPOR Report on Online Panels

Online surveys have grown rapidly because of their lower cost, faster turnaround time, and greater reliability in building targeted samples, at the same time that traditional survey research methods are plagued by increasing costs, higher non-response rates, and coverage concerns. The quality of online access panel survey data came into focus in 2006, when the vice president of Procter & Gamble's Global Consumer Market Knowledge gave a presentation on the range of problems P&G had faced with online access panel reliability.
P&G had fielded a survey twice with the same panel, two weeks apart, with results that pointed to two different business conclusions. This focused the market research industry's attention on the need to provide understanding, guidance, and research on the topic of online research.

The traditional probabilistic sample, such as one drawn by random-digit-dialing, is the underpinning of market research. Probabilistic samples are based on the probability of being selected out of a specified population (such as households within the city limits). Based on probability theory, the results can be projected to the population with a statistical level of certainty. Online panel surveys typically use non-probability samples, which are a significant departure from traditional methods.

The AAPOR Report on Online Panels, produced by the AAPOR task force on opt-in online panels, is a seminal work on concerns and issues with online panel (i.e., non-probability sample) survey research. Its scope was to "provide key information and recommendations about whether and when opt-in panels might be best utilized and how best to judge their quality" (Baker 2010).

Sampling Error, Coverage Error, and Non-Response Bias

A sample is, by definition, a subset of a population. All surveys, regardless of sampling method, have some level of imprecision owing to variation in the sample. This is known as sampling error. A probabilistic sample is one where sampling theory provides the probability by which each member of the sample is selected from the total population. In traditional sampling methods, such as random-digit-dialing of households within a geographic area, the total population of home telephone numbers is known. With address-based sampling, the total number of addresses in a specific area is known. Thus the total population is known and the probability of selecting any one phone number (or address) is known. This allows the data to be projected to the population as a whole.
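That projection from a probability sample to its population can be illustrated with the standard margin-of-error calculation for a sampled proportion. The sketch below is generic, not taken from the report, and the ridership figures in it are invented for illustration:

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion from a simple random sample.

    p_hat: observed proportion in the sample (e.g., share who ride weekly)
    n:     number of completed interviews
    z:     critical value (1.96 corresponds to 95% confidence)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical example: 42% of 400 respondents report riding weekly.
moe = margin_of_error(0.42, 400)
low, high = 0.42 - moe, 0.42 + moe
print(f"42% +/- {moe:.1%}  (95% CI: {low:.1%} to {high:.1%})")
```

Note that for a non-probability panel the same arithmetic describes only how much another sample drawn from the same panel might differ; it is not an estimate of error relative to the general population.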
The difficulty with online sampling is that the population is unknown. Typically an e-mail address is used as the sampling unit (rather than a home telephone, as in the earlier example). The issues with e-mail addresses include duplication, in that one person may have more than one e-mail address, and clustering, where one e-mail address represents more than one person. As a result, online sampling differs from traditional sampling in three significant ways: (1) the concept of a sampling frame is discarded and the focus is shifted to recruiting as large and diverse a group as possible; (2) instead of a representative sample of all households, a diverse group of persons with the attributes of interest for the panel is recruited; and (3) the panel membership is rarely rotated, with panel members being retained as long as they keep completing surveys. Over time, this can lead to a very different panel membership than the initial profile of the panel.

Coverage error occurs when persons, or groups of persons, have zero chance of being selected to participate in the survey. Lack of access to the Internet creates significant coverage bias. The AAPOR report includes data from 2008 stating that although 85% of the households in the continental United States have some level of Internet service, those without Internet access differ significantly from those who do. Those without access are more than twice as likely to be over the age of 65 as the general population. They are also more likely to be members of a minority group, to have incomes less than $25,000, to have a high school education (or less), to be unemployed, not to own a home, and to live in rural counties or the South Census Region. It can also be noted that having access to the Internet does not necessarily make someone an active user of the Internet.
In 1970, household telephone coverage estimates of 88% led to the acceptance of telephone surveys in place of in-person interviewing. Coverage estimates of Internet usage are currently lower than 88%, indicating that the Internet has not yet reached a level where it can be used to represent the general population.

Commercial online access panels are even more problematic, in that a person has to have Internet access, receive an invitation to become a panel member, sign up for the panel,

and then participate in the surveys. Current estimates are that less than 5% of the population has signed up for an online panel, meaning that more than 95% of the population has a 0% chance of being selected.

Non-response bias occurs when some of the persons in the sample choose not to respond to the survey, or to some of the questions within the survey. The report discusses four stages of panel development and how each is affected by non-response bias:

Stage 1: Recruitment of panel members. The previous discussion on coverage error points out issues with Internet access. In addition, there is bias regarding which Internet users are likely to join a panel. The report cites several studies that found online panels are more likely to be composed of white, active Internet users with high education levels who are considerably more involved in civic and political activities, and who place less importance on religion and traditional gender roles and more importance on environmental issues.

Stage 2: Joining and profiling the respondents. Most panels require a respondent to click through from the online ad to the company's website to register for the panel and complete some profile information, including an e-mail address. An e-mail is sent to the prospective panel member, who must respond in order to join the panel. A study by Alvarez et al. (2003) reported that just over 6% of those who clicked on the banner ad completed all of the steps to become a panel member.

Stage 3: Completing the questionnaire. This stage is similar to random-digit-dialing, where a person may refuse to participate in the survey or may not meet the eligibility requirements. Online surveys have an additional source of non-response bias: technical problems can prevent delivery of the e-mail invitation or completion of the survey itself.
Some panels oversample groups that are known to have low response rates in order to have a representative sample after data collection is complete. Although this may result in a balanced sample on that particular dimension, it does not ensure that the sample is representative on other dimensions.

Stage 4: Panel maintenance. Attrition can be "normal," when people opt out for whatever reason, or forced, when panel members are automatically dropped from the panel after a set period of time to keep the panel fresh. Many strategies are used to reduce panel attrition, but little research exists on how to reduce attrition or determine the most "desirable" attrition rate to balance the costs of adding panel members against the potential concerns of long-term membership, such as panel conditioning.

Measurement Error

Measurement error is defined as the difference between an observed response and the underlying true response. This can be random error, as when a respondent picks an answer other than the true response, without any systematic direction in the choice made. Systematic measurement error, or bias, occurs when the responses are skewed more often in one direction. Much of the literature regarding measurement error concerns the benefits and potential biases of personal interviewers and self-administered surveys, including paper and online surveys. Because this issue relates to data collection methodology for any survey and is not specific to panel surveys, it is beyond the scope of this project and is not covered in this literature review. However, it is an important issue for all survey efforts, and researchers are encouraged to examine the issues related to both interviewers and self-administered surveys.

One measurement issue directly related to panel surveys is panel conditioning.
Repeatedly taking surveys on a particular topic is known to make respondents more aware of that topic and pay more attention to it in their daily lives, and therefore to give different responses on future surveys than if they had not been on the panel. The research on panel conditioning with online panels has mixed findings. Some studies have shown a marked bias toward an increased likelihood to purchase; other studies show that this effect can be mitigated by varying topics from survey to survey. Still other studies have shown no difference in attitudinal responses between infrequent and very experienced panel survey members. There are two theories on the effects of taking large numbers of surveys: experienced survey-takers may be more likely to answer in a way that they believe benefits them (e.g., earning more incentives or receiving more surveys to complete); alternatively, experienced survey-takers may understand the process better, resulting in more accurate and complete responses. So far, there is no definitive research on how panel members' completing large numbers of surveys affects the accuracy of survey results.

Sample Adjustments to Reduce Error and Bias

Most researchers agree that online panels are not representative of the general population, and that techniques are needed to correct for this when the results are used. Four techniques have been used to attempt to correct for the known biases, with the goal of making the sample representative of the population: sampling to represent a population; modeling; post-survey adjustment; and propensity weighting.

1. The most common form of sampling to represent a certain population is quota sampling, with the quotas often being demographics matched to the census. Other elements can be factored in by, for example, balancing members by political affiliation.
There does not appear to be any research on the reliability or validity of this type of sampling applied to panel surveys.
2. Models are frequently used in the physical sciences and in epidemiological studies to reduce error and bias. Online panels are much more complex than epidemiological studies, however, making it more difficult to apply model-based techniques.

3. The most common post-survey adjustment is the weighting of survey data. With probability samples, the difference between the sample and the sampling frame is handled through probability theory. Because there is rarely a sampling frame in an online sample, the census and other sources are typically used to adjust the results for under-representation of certain groups of respondents. Work conducted by Dever et al. (2008) found that inclusion of enough variables could eliminate coverage bias, but did not address the problems associated with being a non-probability sample.
4. To apply propensity weighting, a second "reference" survey with a probability-based sample is conducted at the same time as the online panel survey, using the same questions. A model is built that can be used to weight future online surveys to better represent the target population. Although this technique can be used successfully, it can also increase other types of error, leading to erroneous conclusions from the resulting data.

The AAPOR report (Baker 2010) provides an extensive discussion of and guidance on applying these techniques. The reader is encouraged to review the report before applying a sampling adjustment technique.

Panel Data Quality

Panel data cleaning is an important step in delivering results from respondents who are real, unique, and engaged in the survey. Three areas of cleaning panel data are discussed: eliminating fraudulent respondents, identifying duplicate respondents, and measuring engagement. Fraudulent respondents are those who sign up for a panel multiple times under false names and lie on the qualifying questionnaire to maximize their chances of participation. Duplicate responses occur when respondents answer the questionnaire more than once from the same invitation, or when they are invited to complete the survey more than once because they belong to more than one panel. Measuring engagement is the most controversial technique.
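As an illustration of this kind of engagement screening, the sketch below flags suspiciously fast completions and "straight-lined" grid answers in a small set of respondent records. This is a generic sketch; the field names, thresholds, and sample records are invented for illustration and are not drawn from the report:

```python
# Hypothetical respondent records: completion time in seconds and
# answers to a five-question rating grid (1-5 scale).
respondents = [
    {"id": "r1", "seconds": 610, "grid": [4, 2, 5, 3, 4]},
    {"id": "r2", "seconds": 95,  "grid": [3, 3, 3, 3, 3]},  # speeder and straightliner
    {"id": "r3", "seconds": 540, "grid": [2, 2, 2, 2, 2]},  # straightliner only
]

def flag_low_engagement(records, min_seconds=180):
    """Return the ids flagged as speeders (completion time below a
    threshold) or straightliners (identical answers on every grid item)."""
    flagged = set()
    for r in records:
        if r["seconds"] < min_seconds:   # unusually short survey time
            flagged.add(r["id"])
        if len(set(r["grid"])) == 1:     # same answer on every grid row
            flagged.add(r["id"])
    return flagged

print(sorted(flag_low_engagement(respondents)))  # prints ['r2', 'r3']
```

A production screen would also count non-substantive answers (such as "don't know") and inspect open-ended responses for nonsense or repeated text, per the cleaning strategies discussed in this section.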
Four basic cleaning strategies are used to weed out respondents who may not be engaged with completing the survey but are simply answering to earn the incentives: recognizing respondents with very short survey times (compared with all surveys); identifying respondents who answer all questions in a matrix format (usually scaled questions) the same way; recording an excessive selection of non-substantive answers, such as "don't know"; and noting nonsense answers or identical answers provided for all open-ended questions.

Although there was no research at the time demonstrating the effects of using cleaned data on the sample or final results, it is generally accepted that negative respondent behavior is detrimental to data quality.

Industry Focus on Quality

The market research industry has been focused on panel data quality, with virtually every national and international association incorporating principles and guidelines for conducting online and panel research. Four key efforts are highlighted in the report:

1. The Council of American Survey Research Organizations (CASRO) revised its Code of Standards and Ethics for Survey Research in 2007 to include specific clauses related to online panels.
2. ESOMAR developed comprehensive guidelines titled "Conducting Market and Opinion Research Using the Internet." These were supplemented by its "26 Questions to Help Research Buyers of Online Samples."
3. The International Organization for Standardization technical committee that developed ISO 20252—Market, Opinion and Social Research also developed ISO 26362—Access Panels in Market, Opinion and Social Research. The standard defines key terms and concepts in an attempt to create a common vocabulary for online panels, and details the specific kinds of information that a research panel is expected to make available to a client at the conclusion of every project.
4.
The Advertising Research Foundation established the Online Research Quality Council, which in turn designed and executed the Foundations of Quality project. Work was in progress as of the writing of the AAPOR report; as of the writing of this synthesis, results of the effort were just being made public.

Recommendations

The AAPOR Report on Online Panels makes the following recommendations to market researchers who are considering using online access panels:

• A non-probability online panel is appropriate when precise estimates of population values are not required, such as when testing receptivity to product concepts and features.
• Avoid using non-probability online access panels when the research is to be used to estimate population values. There is no theoretical basis for making projections or estimates from this type of sample.
• The accuracy of a self-administered computer survey is undermined when it is based on a non-probability sample. A random-digit-dial telephone survey is more accurate than an online survey because it is a probability sample, despite the coverage error arising from households without a landline phone.
• It has not yet been demonstrated that weighting the results from online access panel surveys is consistently effective in adjusting for panel bias.

• There are significant differences in the composition and practices of various online access panels, which can affect survey results. Different panels may yield significantly different results on the same questionnaire.

Market Research by the Public Sector

Poynter’s book devotes a section to issues specific to public sector research. Although most marketing research principles apply equally to the private and public sectors, there are a few areas where the public sector researcher needs to be particularly attentive, because public funds are being used to conduct the research and the results may determine how public funds are expended. Areas for particular attention are identified as: operating in the public eye, “representativity,” geographical limitations, social media and the public sector, and ethics.

In the Public Eye

Public sector research is subject to audit, inspection, and reporting in the media. Freedom of information laws ensure that the public has a right to see how public funds are being spent. Poorly conducted research could be brought to light in a public forum, creating public relations problems for a perceived waste of taxpayer money and jeopardizing the ability to conduct future research. As a result, care must be taken to ensure that public sector research is conducted to the highest quality and ethical standards.

Representativity

Having a representative sample is always important, but it is of special concern for public agencies. Many public services, such as public transportation, target specific groups that may face multiple challenges. Much of the target population may not have Internet access, and those who do may not be typical of the market segment they are expected to represent. For each study, the researcher must carefully assess whether an online survey is appropriate for that market and research purpose, and whether the sampling and recruitment strategies provide survey results that can be defended in public.
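A representativity check of the kind described above can be sketched as a simple comparison of the sample’s demographic mix against known population shares (for example, from census data for the service area). This is an illustrative sketch, not a method from the report; the segment names, counts, and tolerance threshold are assumptions.

```python
def representativeness_gaps(sample_counts, population_shares, tolerance=0.05):
    """Flag demographic segments whose share of the sample deviates from
    the known population share by more than `tolerance` (an absolute
    proportion). Returns {segment: deviation} for flagged segments."""
    total = sum(sample_counts.values())
    gaps = {}
    for segment, pop_share in population_shares.items():
        sample_share = sample_counts.get(segment, 0) / total
        deviation = sample_share - pop_share
        if abs(deviation) > tolerance:
            gaps[segment] = round(deviation, 3)
    return gaps

# Hypothetical example: an online panel that skews older than the
# service-area population is flagged before results are reported.
panel = {"18-34": 10, "35-54": 50, "55+": 40}          # completed surveys
census = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}   # population shares
print(representativeness_gaps(panel, census))
# → {'18-34': -0.2, '35-54': 0.1, '55+': 0.1}
```

A check like this does not repair a biased panel, but it gives the researcher a defensible, documented basis for deciding whether the online sample is appropriate for the market in question.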
Geographical Limitations

Public agencies have strict geographic boundaries from which a sample population can be drawn. Face-to-face or telephone surveys are often simplified by these restrictions. Surveys using an online access panel, however, can be problematic, as there may not be an adequate sample of persons from the target area. This is further exacerbated when the sample must also be representative of the population within a specified geographical area.

Social Media and the Public Sector

There are several ways in which social media are being used for research in the public sector. Online communities engage in a range of activities, including information-sharing, research, community-building, and engagement. Online research communities are typically closed communities, operated by a vendor, with membership by invitation only as part of an overall sampling plan (see the MnDOT case example of an online research community). Twitter, blogs, and public discussions are resources for passive research, using data mining tools to monitor trends in what people are saying about the agency. Although useful information can be elicited from these sources, they do not provide a representative sample and should be considered public comment rather than research.

Social media are often used to reach groups that are otherwise hard to reach, such as young adults. It should be noted, however, that using a variety of social media channels, such as Facebook, YouTube, and Twitter, is likely to reach the same people multiple times. If multiple social media channels are used to recruit online survey participants, for example, the researcher must be prepared for potential duplication of survey responses.

Ethics

There is an expectation that research will be reliable and can be used by a decision-making body in a public forum. First and foremost, the researcher must provide unbiased market research.
Often a vendor is used to conduct the research to provide a wall between the agency and the research, and to avoid the appearance of leading the respondents or “spinning” the results. The second concern is that quantitative research based on random probability sampling has been the standard method for achieving the level of reliability expected of a public agency. Because online research typically does not draw on a probabilistic sample, the researcher should recognize the potential lack of statistical reliability inherent in the research design and ensure that decision makers understand the limitations of the data.

FUTURE OF MARKET RESEARCH

Technology has fundamentally changed how society communicates and how it does business. Whereas people used to communicate by means of the telephone at home, cell phones make communication possible virtually anywhere. Cell phone numbers do not represent a physical address; they have become a moving, real-time “personal” address. The Internet provides instant access to information and communication through e-mail, websites, and social media. The smart phone combines mobile communication with the Internet, creating

a completely new, technology-based world. Panels can now be developed online, quickly and easily. Household and personal contact information is no longer tied to a home address, but exists outside of the person’s geographic location.

This technology has led to a revolution in market research. Recruiting survey respondents is easier; developing a panel is faster; and surveys are online, resulting in automated survey tabulation and reporting. As a result, recruiting and maintaining research panels is simpler, less expensive, and very attractive to decision makers who want results “now.” But these changes have created myriad concerns, primarily related to the use of non-probabilistic sampling practices.

The history of sampling theory provides some insight into what may occur in the future. In the 1890s, although sampling theory had not yet been developed, Anders Kiaer convinced an international audience that representative samples could be used to represent a population. Morris Hansen of the U.S. Census Bureau greatly expanded the theory and practice of sampling and helped convince the bureau to accept sampling and quality control methods in the 1940 Census. Through the leadership of these two important individuals, the practice was adopted. Sampling theory was then developed: the theory followed the practice, rather than the practice following the theory.

Brick states that data collection costs will continue to put pressure on agencies to use non-probability samples from online recruitment. If this cannot be accommodated within design-based probability sampling theory, he suggests two potential outcomes: a new paradigm that accommodates online surveys is introduced, replacing or supplementing traditional probability sampling; or online surveys using non-probabilistic sampling are restricted to specific applications because of the weak theoretical basis.
One potential solution is the use of multiple-frame sampling to reduce coverage error, a fundamental concern with online panel research. For example, to reach transit riders, an online survey could be placed on the agency website and supplemented with paper surveys on board vehicles. Statisticians are working on establishing a theoretical basis for sampling using the multiple-frame technique (Brick 2011).

In addition to the changes in survey practice that led to the historical development of sampling theory, two additional factors are cited as creating the paradigm shift from population surveying to representative sampling in 1934. The first was the wealth of scientific developments and statistical ideas, not necessarily related to survey sampling, that nevertheless supported the growth and change in methods. The second was society’s demand for information on a wide range of topics, which made surveying the full population cumbersome and expensive. This desire for faster, cheaper research drove the development of probability sampling and our current market research paradigm. These characteristics are in place today, almost 80 years after probabilistic sampling made its debut. With the rapid changes in technology and society’s insatiable thirst for more information, more quickly, and at less cost, a new research paradigm with a theoretical foundation to support non-probabilistic online surveying may be on the horizon.
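The multiple-frame idea can be illustrated with a classic dual-frame composite estimator in the spirit of Hartley’s formulation; this is a general statistical sketch, not a method prescribed by Brick or the synthesis. Estimates for the population covered by both frames are blended with a mixing weight so that overlap respondents are not double-counted. The frame labels, domain totals, and equal mixing weight below are illustrative assumptions.

```python
def dual_frame_total(a_only, overlap_from_a, overlap_from_b, b_only, theta=0.5):
    """Composite estimate of a population total from two overlapping
    sampling frames, e.g. frame A = online panel, frame B = on-board
    paper survey. Riders covered by both frames (the overlap) are
    estimated once from each frame; `theta` blends the two overlap
    estimates so that domain is counted only once."""
    return a_only + theta * overlap_from_a + (1 - theta) * overlap_from_b + b_only

# Hypothetical weekly-rider totals estimated from each frame's domains:
est = dual_frame_total(a_only=1000, overlap_from_a=400,
                       overlap_from_b=600, b_only=2000)
print(est)  # → 3500.0
```

Choosing the mixing weight optimally, and estimating the overlap domain itself, are exactly the kinds of open theoretical questions referenced above for non-probability online frames.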

TRB’s Transit Cooperative Research Program (TCRP) Synthesis 105: Use of Market Research Panels in Transit describes the various types of market research panels, identifies issues that researchers should be aware of when engaging in market research and panel surveys, and provides examples of successful market research panel programs.

The report also provides information about common pitfalls to be avoided and successful techniques that may help maximize research dollars without jeopardizing the quality of the data or validity of the results.
