Posts Tagged ‘AAPOR’

Only for the Young at Heart: Co-Viewing on Mobile Devices and Viewing on the Go?

Abstract:

With the relative ease and accessibility of a variety of content available to users of smartphones and tablets, there has been a subtle behavioral change in how people use these devices. The practice of having more than one viewer for a mobile device, referred to as “co-viewing,” is a new area that warrants further investigation. Very little information is available on who is likely to engage in co-viewing, what types of mobile devices are used, what content is likely to be viewed, and whether those who engage in this behavior are fundamentally different from those who do not, that is, what behavioral or demographic differences exist among those who participate. Thus the focus here is to examine and provide a baseline understanding of co-viewing, with a specific focus on content viewed on the “go,” or away from home. Read more

Who Is behind That Screen? Solving the Puzzle of Within-Home Computer Sharing among Household Members

Abstract:

The number of US households with access to computers at home has continued to grow. According to the 2011 Computer and Internet Use report published by the US Department of Commerce, 77% of US households had a computer at home, compared to 62% in 2003. Many households, however, do not have a computer dedicated to each member living in the house. As such, sharing of computers among household members can be a prevalent phenomenon in home computer usage. Understanding this within-home computer sharing and identifying the most likely person behind the computer screen can be of interest to market researchers and practitioners, particularly those interested in studying effective ways to target online ads based on users' online activities. For survey researchers attempting to recruit hard-to-reach individuals such as teens and young adults, an understanding of computer sharing could help establish contact at times when those individuals are more likely to be behind the computer. Despite its prevalence, within-home computer sharing has barely received any research attention. This study hopes to break through the barriers preventing the light of scientific inquiry into this phenomenon. Read more

Is It Too Much to Ask? The Role of Question Difficulty in Survey Response Accuracy for Measures of Online Behavior

Abstract:

While the market research capabilities of online panels have never been greater, the challenges facing these panels are in many ways just as great. Over the past few years, online panels that recruit members using nonprobability/opt-in methods have come under increased scrutiny and criticism over data quality concerns such as respondent identity and increased satisficing. These concerns have drawn attention to the heart of the issue: the accuracy, or truthfulness, of data provided by opt-in panel respondents. This issue is of utmost importance given the recently established link between opt-in panel samples and poor survey data quality (see Yeager et al. 2011). Read more

Evaluation of Alternative Weighting Approaches to Reduce Nonresponse Bias

Abstract:

With declining response rates, surveys increasingly rely on weighting adjustments to correct for potential nonresponse bias. The resulting increased need to improve survey weights faces two key challenges. First, additional auxiliary data are needed to augment the models used to estimate the weights. Depending on the properties of these auxiliary data, nonresponse bias can be reduced, left the same, or even increased. Thus, the second challenge is to be able to evaluate the alternative weights, when the assumption of “different estimates means less bias” may not hold. Ideally, data need to be collected from as many nonrespondents as possible to provide direct estimates of nonresponse bias. Read more
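To make the weighting idea concrete, here is a minimal, hypothetical sketch of one common approach, a weighting-class nonresponse adjustment: respondents and nonrespondents are grouped into classes defined by auxiliary data known for the full sample, and each respondent's base weight is inflated by the inverse of the weighted response rate in its class. The class labels, field names, and data below are illustrative only, not drawn from the study described above.

```python
# Hypothetical weighting-class nonresponse adjustment.
# Each respondent's base weight is multiplied by (class base-weight
# total) / (class respondent base-weight total), so respondents stand
# in for nonrespondents who share their auxiliary characteristics.

from collections import defaultdict

def weighting_class_adjust(sample):
    """sample: list of dicts with 'class', 'base_weight', 'responded'."""
    base = defaultdict(float)
    resp = defaultdict(float)
    for unit in sample:
        base[unit["class"]] += unit["base_weight"]
        if unit["responded"]:
            resp[unit["class"]] += unit["base_weight"]
    # Adjustment factor = inverse of the weighted response rate per class.
    factors = {c: base[c] / resp[c] for c in base if resp[c] > 0}
    return [
        {**u, "adj_weight": u["base_weight"] * factors[u["class"]]}
        for u in sample if u["responded"]
    ]

# Illustrative data: two age classes with different response rates.
sample = [
    {"class": "18-34", "base_weight": 1.0, "responded": True},
    {"class": "18-34", "base_weight": 1.0, "responded": False},
    {"class": "35+",   "base_weight": 1.0, "responded": True},
    {"class": "35+",   "base_weight": 1.0, "responded": True},
    {"class": "35+",   "base_weight": 1.0, "responded": False},
]
adjusted = weighting_class_adjust(sample)
# Respondents' adjusted weights recover the full-sample total of 5.0.
print(sum(u["adj_weight"] for u in adjusted))  # 5.0
```

Note the caveat the abstract raises: this adjustment removes bias only to the extent that the auxiliary classes are related to both response propensity and the survey outcome; with uninformative classes, the bias can be left unchanged or even increased.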

Difficult Data: Comparing the Quality of Behavioral, Recall, and Proxy Data Across Survey Modes

Abstract:

The mode choice literature is rife with evidence on the impact of different survey modes on response rates, respondent cooperation, and data quality. However, insufficient attention has been paid to the quality of “difficult data” provided when respondents cannot choose the mode and thus cannot maximize their comfort with the survey. Here, “difficult data” correspond to questions that are burdensome for respondents to think about – e.g., very specific details on a behavior, on past events, or on the behavior of other persons. Read more

Recruitment and Retention in Multi-Mode Survey Panels

Abstract:

This study builds on a previously published panel recruitment experiment (Rao, Kaminska, and McCutcheon 2010), extending that analysis to an examination of the effectiveness of pre-recruitment factors such as mode and response inducements on three post-recruitment panel participation effects: attrition rates, survey completion rates, and panel data quality. The panel recruitment experiment, conducted with the Gallup Panel, netted 1,282 households with 2,042 panel members. For these recruited members, we collected data on panel participation and retention and used it for the analysis in this study. Read more

Is Past, the Future? Resampling Past Respondents to Improve Current Response Rates

Abstract:

The Nielsen TV Ratings Diary service involves the use of a one-week TV diary survey for measuring TV ratings. While the service has been around for a while, it recently received a sampling makeover to address the diminishing coverage associated with landline random-digit dialing (RDD) surveys. Address-based sampling (ABS) replaced RDD as the sampling methodology for the diary service. Read more

Home or Work or Both? Assessing the Role of Duplication of Website Visitations Using an Online Metered Panel

Abstract:

In this study, for multiple websites, we estimate duplicated audience reach between home and work Internet access locations. By employing a probability-based matching technique, we use metered-panel based data from non-overlapping home and work panels for creating a virtual overlapping home-work panel. Read more
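As a rough illustration of the matching idea, the sketch below pairs each work-panel member with the nearest home-panel member on shared demographics, creating a synthetic panelist whose visits from both locations can be combined to estimate duplicated reach. This is a simplified nearest-neighbor stand-in for the probability-based matching the abstract describes; all field names, the distance function, and the example sites are hypothetical.

```python
# Hypothetical statistical matching of two non-overlapping metered
# panels. Each work-panel member receives site visits "donated" by the
# demographically closest home-panel member, forming a virtual
# home-work panel for reach estimation.

def match_panels(home_panel, work_panel, keys=("age", "income")):
    """Nearest-neighbor match of work members to home donors on `keys`."""
    def distance(a, b):
        return sum(abs(a[k] - b[k]) for k in keys)
    fused = []
    for w in work_panel:
        donor = min(home_panel, key=lambda h: distance(h, w))
        fused.append({
            "home_sites": donor["sites"],
            "work_sites": w["sites"],
        })
    return fused

def duplicated_reach(fused, site):
    """Share of synthetic panelists who visit `site` at BOTH locations."""
    both = sum(1 for p in fused
               if site in p["home_sites"] and site in p["work_sites"])
    return both / len(fused)

# Illustrative panels (ages, incomes, and sites are made up).
home = [{"age": 30, "income": 50, "sites": {"news.example"}},
        {"age": 55, "income": 90, "sites": {"shop.example"}}]
work = [{"age": 32, "income": 52, "sites": {"news.example"}},
        {"age": 50, "income": 85, "sites": {"news.example"}}]
fused = match_panels(home, work)
print(duplicated_reach(fused, "news.example"))  # 0.5
```

The quality of such an estimate depends entirely on how well the matching variables predict browsing behavior, which is why a probability-based matching model, rather than this naive distance rule, would be used in practice.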

How Can We Believe What They Say? The Role of Missing and Validating Data in Panelists’ Demographic Information

Abstract:

The use of online panels (probability-based or volunteer opt-in) as a mode of data collection has become increasingly popular in market, social, psychological, and medical research (Callegaro and DiSogra 2009). The Nielsen online panel is one of the opt-in panels in the United States that is composed of respondents who voluntarily sign up (opt-in) to become members of the panel. Read more

Are you who you say you are? Using a Multisource Cross-validation Methodology for Panel Membership Information

Abstract:

Over the past few years, market researchers and clients working with data from non-probability online panels have voiced a number of data quality concerns over issues such as respondent identity, increased satisficing, and possible professionalization of survey taking. Recognizing the importance of having real, unique, and engaged panelists, panel companies are responding to these issues by introducing a variety of remedial measures such as name-address verification, email address verification, and validation of key demographic information against third-party databases. Read more