Item Count Technique Estimators under Respondent Error

With the advent of cheap and reliable Internet survey panels, the survey list experiment has seen a resurgence as a way of indirectly asking respondents about sensitive topics. We have also seen the introduction of new estimators for list experiment data, most notably the ICT regression model (Imai, 2011). This estimator promises to extract more from the data -- and more efficiently -- than traditional difference-in-means analysis, but it leans heavily on assumptions about responses at the extremes (admitting to none or all items on the list) to identify the model. This presentation shows how measurement problems in these extreme responses can induce severe bias in parameter and population-prevalence estimates. I document how such problems may arise in practice and report the results of Monte Carlo experiments examining the sensitivity of these estimators to both random and extreme-biased respondent error. I find that ICT regression is very sensitive to extreme-biased measurement error relative to simple t-tests and their generalizations, whereas random measurement error makes little difference beyond a loss of efficiency. This bias grows more extreme as the underlying prevalence of the sensitive item decreases. I propose some simple design-stage remedies, including randomizing the order in which the possible responses are presented (in addition to the items in the list), which turns systematic measurement error into random error but likely reduces the efficiency of the list experiment.
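To make the contrast between random and extreme-biased respondent error concrete, the following is a minimal Monte Carlo sketch of a list experiment analyzed by difference in means. All parameter values (number of control items, true prevalence, endorsement probabilities, error rates) are illustrative assumptions, not figures from the study, and the error models are simplified stand-ins: "random" error perturbs reported counts symmetrically by one, while "extreme-biased" error deflates some fully revealing maximal answers in the treatment arm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical list-experiment setup (all values are illustrative assumptions):
J = 4         # number of non-sensitive control items
PI = 0.15     # true prevalence of the sensitive item
P_CTRL = 0.7  # per-item endorsement probability for control items
N = 5_000     # respondents per experimental arm

def diff_in_means(error=None, err_rate=0.1):
    """Simulate one list experiment and return the difference-in-means
    estimate of the sensitive item's prevalence."""
    ctrl = rng.binomial(J, P_CTRL, N)                            # control counts
    treat = rng.binomial(J, P_CTRL, N) + rng.binomial(1, PI, N)  # + sensitive item
    if error == "random":
        # Symmetric misreporting: a fraction of respondents in each arm
        # shift their reported count by +/-1 (clipped to the valid range).
        for arr, top in ((ctrl, J), (treat, J + 1)):
            hit = rng.random(N) < err_rate
            arr[hit] = np.clip(arr[hit] + rng.choice([-1, 1], hit.sum()), 0, top)
    elif error == "extreme":
        # Extreme-biased misreporting: half of the treatment respondents who
        # would give the fully revealing maximal answer (J + 1) deflate it by one.
        hit = (treat == J + 1) & (rng.random(N) < 0.5)
        treat[hit] -= 1
    return treat.mean() - ctrl.mean()

for err in (None, "random", "extreme"):
    est = np.mean([diff_in_means(err) for _ in range(100)])
    print(f"{str(err):8} -> mean prevalence estimate {est:.3f} (truth {PI})")
```

Under this sketch the symmetric error leaves the difference-in-means estimate roughly on target (at the cost of extra sampling variance), while the ceiling-avoidance error pulls the estimate systematically below the true prevalence, mirroring the asymmetry the abstract describes.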