
    From ScienceDaily@1:317/3 to All on Fri Jan 28 21:30:36 2022
    Surveys with repetitive questions yield bad data, study finds

    Date:
    January 28, 2022
    Source:
    University of California - Riverside
    Summary:
    Surveys that ask too many of the same type of question tire
    respondents and return unreliable data, according to a new
    study. The study found that people tire from questions that vary
    only slightly and tend to give similar answers to all questions as
    the survey progresses. Marketers, policymakers, and researchers
    who rely on long surveys to predict consumer or voter behavior
    will have more accurate data if they craft surveys designed to
    elicit reliable, original answers, the researchers suggest.



    ==========================================================================
    FULL STORY
    ==========================================================================

    Surveys that ask too many of the same type of question tire respondents
    and return unreliable data, according to a new UC Riverside-led study.

    The study found that people tire from questions that vary only slightly
    and tend to give similar answers to all questions as the survey
    progresses.

    Marketers, policymakers, and researchers who rely on long surveys to
    predict consumer or voter behavior will have more accurate data if
    they craft surveys designed to elicit reliable, original answers, the researchers suggest.

    "We wanted to know, is gathering more data in surveys always better,
    or could asking too many questions lead to respondents providing less
    useful responses as they adapt to the survey," said first author Ye Li, a
    UC Riverside assistant professor of management. "Could this paradoxically
    lead to asking more questions but getting worse results?" While it may be tempting to assume more data is always better, the authors wondered if the decision processes respondents use to answer a series of questions might change, especially when those questions use a similar, repetitive format.

    The research addressed quantitative surveys of the sort typically used
    in market research, economics, or public policy to measure people's
    preferences and values. These surveys often ask a large number of
    structurally similar questions.

    Researchers analyzed four experiments that asked respondents to answer
    questions involving choice and preference.



    ==========================================================================

    Respondents in the surveys adapted their decision making as they
    answered more repetitive, similarly structured choice questions, a
    process the authors call "adaptation." This means they processed less
    information, learned to weigh certain attributes more heavily, or
    adopted mental shortcuts for combining attributes.

    In one of the studies, respondents were asked about their preferences
    for varying configurations of laptops. These are the sort of questions
    marketers use to determine whether customers are willing to sacrifice
    a bit of screen size in return for increased storage capacity, for
    example.

    "When you're asked questions over and over about laptop configurations
    that vary only slightly, the first two or three times you look at them carefully but after that maybe you just look at one attribute, such as
    how long the battery lasts. We use shortcuts. Using shortcuts gives you
    less information if you ask for too much information," said Li.
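
    Li's shortcut account maps onto a simple change of decision rule. As a
    rough illustration -- not the study's actual model -- the Python sketch
    below simulates a respondent who weighs every attribute for the first
    few laptop questions and then falls back to a single-attribute
    shortcut; the attribute names, weights, and switch point are all
    invented for the example.

        import random

        # Hypothetical illustration: attribute names, weights, and the
        # switch point are invented, not taken from the study.
        TRUE_WEIGHTS = {"screen": 0.40, "storage": 0.35, "battery": 0.25}

        def full_evaluation(option):
            """Early in the survey: weigh every attribute."""
            return sum(TRUE_WEIGHTS[a] * option[a] for a in TRUE_WEIGHTS)

        def shortcut_evaluation(option):
            """Later in the survey: attend to one salient attribute."""
            return option["battery"]

        def choose(pair, question_index, switch_point=5):
            """Answer a choice question, adapting to a shortcut over time."""
            rule = (full_evaluation if question_index < switch_point
                    else shortcut_evaluation)
            return max(pair, key=rule)

        random.seed(0)
        questions = [tuple({a: random.random() for a in TRUE_WEIGHTS}
                           for _ in range(2)) for _ in range(12)]

        # How often does the recorded answer match the respondent's
        # true (full-evaluation) preference?
        hits = sum(choose(pair, i) is max(pair, key=full_evaluation)
                   for i, pair in enumerate(questions))
        print(f"answers matching true preferences: {hits}/{len(questions)}")

    By construction the early answers track the full evaluation, while the
    later ones match only when battery life happens to agree with overall
    utility -- the kind of degradation in later responses the study
    describes.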

    While humans are known to adapt to their environment, most behavioral
    research methods used to measure preferences have underappreciated
    this fact.

    "In as few as six or eight questions people are already answering in
    such a way that you're already worse off if you're trying to predict
    real-world behavior," said Li. "In these surveys if you keep giving people
    the same types of questions over and over, they start to give the same
    kinds of answers." The findings suggest some tactics that can increase
    the validity of data while also saving time and money. Process-tracing,
    a research methodology that tracks not just the quantity of observations
    but also their quality, can be used to diagnose adaptation, helping to
    identify when it is a threat to validity.
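
    The article doesn't spell out the authors' process-tracing procedure,
    but two inexpensive signals such a diagnostic could plausibly build on
    are response-time decline and answer repetition ("straight-lining").
    The sketch below computes both from invented per-question logs; real
    process tracing draws on much richer data, such as information search
    patterns.

        from statistics import correlation  # Python 3.10+

        # Invented per-question logs: seconds spent and option chosen.
        response_times = [14.2, 11.8, 9.5, 6.1, 4.0, 3.2, 2.9, 2.8, 2.7, 2.6]
        choices = [0, 1, 0, 1, 1, 1, 1, 1, 1, 1]

        # Signal 1: response time falling as the survey progresses.
        time_trend = correlation(range(len(response_times)), response_times)

        # Signal 2: share of answers that repeat the previous answer.
        repeat_rate = (sum(a == b for a, b in zip(choices, choices[1:]))
                       / (len(choices) - 1))

        print(f"time vs. position correlation: {time_trend:.2f}"
              " (strongly negative = speeding up)")
        print(f"repeat-answer rate: {repeat_rate:.2f}"
              " (near 1.0 = straight-lining)")

    A survey platform that logs per-question timestamps gets both signals
    essentially for free, making them a cheap first check before heavier
    analysis.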

    Adaptation could also be reduced or delayed by repeatedly changing the
    format of the task or by adding filler questions or breaks. Finally,
    the research suggests that to maximize the validity of preference
    measurement surveys, researchers could use an ensemble of methods,
    preferably combining multiple means of measurement, such as questions
    that involve choosing between options available at different times,
    matching questions, and a variety of contexts.
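
    As a concrete, purely hypothetical sketch of that mitigation, the
    helper below interleaves two question formats and inserts a filler
    item every few questions, so no single format runs long enough for a
    shortcut to settle in; the question labels are placeholders.

        def interleave_with_fillers(format_a, format_b, fillers,
                                    filler_every=3):
            """Alternate two question formats, adding periodic fillers."""
            mixed = [q for pair in zip(format_a, format_b) for q in pair]
            ordered, remaining = [], list(fillers)
            for i, question in enumerate(mixed, start=1):
                ordered.append(question)
                if i % filler_every == 0 and remaining:
                    ordered.append(remaining.pop(0))
            return ordered

        # Placeholder question labels, purely for illustration.
        choice_qs = [f"choice_{i}" for i in range(4)]
        matching_qs = [f"matching_{i}" for i in range(4)]
        breaks = ["filler_a", "filler_b"]

        print(interleave_with_fillers(choice_qs, matching_qs, breaks))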

    "The tradeoff isn't always obvious. More data isn't always better. Be
    cognizant of the tradeoffs," said Li. "When your goal is to predict the
    real world, that's when it matters." Li was joined in the research by
    Antonia Krefeld-Schwalb, Eric J. Johnson, and Olivier Toubia at Columbia University; Daniel Wall at the University of Pennsylvania; and Daniel
    M. Bartels at the University of Chicago. The paper, "The more you ask,
    the less you get: When additional questions hurt external validity,"
    is published in the Journal of Marketing Research.

    ==========================================================================
    Story Source: Materials provided by University of California -
    Riverside. Original written by Holly Ober. Note: Content may be edited
    for style and length.


    ==========================================================================
    Journal Reference:
    1. Ye Li, Antonia Krefeld-Schwalb, Daniel G. Wall, Eric J. Johnson,
    Olivier Toubia, Daniel M. Bartels. The More You Ask, the Less You Get:
    When Additional Questions Hurt External Validity. Journal of Marketing
    Research, 2021. DOI: 10.1177/00222437211073581
    ==========================================================================

    Link to news story: https://www.sciencedaily.com/releases/2022/01/220128153553.htm

    --- up 7 weeks, 6 days, 7 hours, 13 minutes
    * Origin: -=> Castle Rock BBS <=- Now Husky HPT Powered! (1:317/3)