Assessing the validity of stated preference data using follow-up questions
Authors
Myers, Kelley
MacNair, Doug
Tomasi, Ted
Schneider, Jude
Abstract
Stated preference (SP) studies such as contingent valuation (CV) and
discrete choice experiments (DCEs) are often used in an attempt to measure willingness to pay (WTP) for environmental goods. However,
concern exists that these methods do not provide data that can support
valid, reliable, and meaningful WTP estimates, especially in the context
of estimating non-use values for environmental goods. The foundation of
any survey-based exercise is that the researcher who asks the questions
and the respondent who answers them share a common understanding of
what is being asked. This common understanding is difficult to achieve. In WTP studies, additional criteria must be met if the results are to provide data for estimating Hicksian welfare measures. The criteria that must be satisfied if SP
data are to be theoretically interpreted via the standard microeconomic
rational choice model (RCM) have been widely discussed in the literature
(e.g., Mitchell and Carson, 1989; Carson and Groves, 2007; US EPA SAB,
2009; Carson and Louviere, 2011; Bateman, 2011). General consensus
exists on these criteria: that respondents believe the information in the
survey and base their responses solely on the outcomes it describes; that they treat the exercise posed in the survey as they would a real decision
affecting their budget; and that they answer the valuation questions as rational
economic agents with well-defined preferences who are trading money for
economic goods.
One approach for assessing whether respondents satisfy these criteria is
to use follow-up, debriefing questions. The earliest and most ubiquitous
follow-up questions were “Yes/No” follow-ups based on recommendations
from the National Oceanic and Atmospheric Administration (NOAA)
Blue Ribbon Panel on contingent valuation. As part of a review of the use
of contingent valuation to estimate lost non-use values in the context of
natural resource damage assessments (NRDAs), the NOAA panel recommended the use of “Yes/No” follow-ups to determine the type of response
(e.g., protest vote, yea-saying). However, the scope of follow-up questions has expanded over time (Krupnick and Adamowicz, 2006, provide
a discussion). These questions may be used to “shore up the credibility
of the survey” (ibid.), “to modify the estimate derived from one or more
SP questions in some way” (Carson and Louviere, 2011), or to identify
“problematic responses” in order to delete some responses or respondents
or treat them as zeros for analysis purposes.
Despite their ubiquity, there is little consistency in either the questions posed or in how the responses are used to modify analyses. First, no consensus exists
on what or how many questions to ask in order to identify problematic
responses. Second, most studies report results by question rather than by
respondent; thus, the literature does not evaluate how many respondents
had a general understanding of the tasks asked of them. Third, apart
from respondents who protest the SP exercise as a whole and typically are dropped from the analysis sample, no consensus exists on how
to handle problematic answers. This lack of consistency in the use of
follow-up questions is troubling, as substantial proportions of respondents may give problematic answers to some of the follow-up questions
and welfare estimates may be sensitive to decisions made regarding such
answers.
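The sensitivity described above can be illustrated with a small, hypothetical example. The sketch below (Python/pandas) is not from the paper; the respondent data, flag names, and coding rules are invented solely to show how a per-respondent summary of follow-up flags differs from per-question reporting, and how a mean WTP estimate shifts when flagged responses are kept, dropped, or recoded to zero.

```python
import pandas as pd

# Hypothetical follow-up flags for five respondents; the columns and values
# are illustrative only and do not come from the study described above.
df = pd.DataFrame({
    "respondent_id":    [1, 2, 3, 4, 5],
    "wtp":              [40.0, 0.0, 25.0, 60.0, 15.0],  # stated WTP from the SP question
    "doubted_scenario": [0, 1, 0, 1, 0],  # did not believe the survey information
    "ignored_budget":   [0, 1, 1, 0, 0],  # did not treat the choice as a real budget decision
    "protest_response": [0, 1, 0, 0, 0],  # rejected the SP exercise as a whole
})

flag_cols = ["doubted_scenario", "ignored_budget", "protest_response"]

# Reporting by question: share of the sample flagged on each follow-up item.
by_question = df[flag_cols].mean()

# Reporting by respondent: share of respondents flagged on at least one item,
# which a question-by-question summary cannot reveal.
flagged = df[flag_cols].sum(axis=1) > 0
by_respondent = flagged.mean()

# Sensitivity of the welfare estimate to the treatment of problematic answers.
mean_keep_all = df["wtp"].mean()                      # keep every response as stated
mean_drop     = df.loc[~flagged, "wtp"].mean()        # drop flagged respondents
mean_zero     = df["wtp"].where(~flagged, 0).mean()   # recode flagged WTP to zero

print(by_question)
print(f"Share flagged on any item: {by_respondent:.2f}")
print(f"Mean WTP (keep all / drop flagged / flagged as zero): "
      f"{mean_keep_all:.2f} / {mean_drop:.2f} / {mean_zero:.2f}")
```

In this toy dataset the three treatments yield mean WTP of 28.00, 27.50, and 11.00, respectively, which is simply meant to show how large the gap between such choices can be when a substantial share of respondents gives problematic answers.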